BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163722Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T164000
DTEND;TZID=Australia/Melbourne:20231213T173100
UID:siggraphasia_SIGGRAPH Asia 2023_sess126@linklings.com
SUMMARY:TechScape
DESCRIPTION:The Shortest Route Is Not Always the Fastest: Probability-Modeled Stereoscopic Eye Movement Completion Time in VR\n\nSpeed and consistency of target-shifting play a crucial role in human ability to perform complex tasks. Shifting our gaze between objects of interest quickly and consistently requires changes both in depth and direction. Gaze changes in depth are driven by slow, inconsistent vergence movements which...\n\n\nBudmonde Duinkharjav and Benjamin Liang (New York University), Anjul Patney and Rachel Brown (NVIDIA Research), and Qi Sun (New York University)\n---------------------\nPerceptual Requirements for World-Locked Rendering in AR and VR\n\nStereoscopic, head-tracked display systems can show users realistic, world-locked virtual objects and environments. However, discrepancies between the rendering pipeline and physical viewing conditions can lead to perceived instability in the rendered content resulting in reduced realism, immersion,...\n\n\nPhillip Guan, Eric Penner, Joel Hegland, Benjamin Letham, and Douglas Lanman (Meta Reality Labs Research)\n---------------------\nComparing Cinematic Conventions through Emotional Responses in Cinematic VR and Traditional Mediums\n\nThis study explores the emotional impact of cinematic techniques in Cinematic Virtual Reality versus traditional mediums, highlighting CVR's unique engagement potential and suggesting a refined visual language for immersive storytelling.\n\n\nZhiyuan Yu and Cheng-Hung Lo (Xi'an Jiaotong Liverpool University), Mutian Niu (Monash University), and Hai-Ning Liang (Xi'an Jiaotong Liverpool University)\n---------------------\nThe Effects of Avatar Voice and Facial Expression Intensity on Emotional Recognition and User Perception\n\nThis study investigates the role of avatars' vocal and facial expression intensity on perceived realism and emotional recognition. Vocal intensity affected emotional recognition while facial expression intensity affected perceived realism.\n\n\nTrinity Suma (Columbia University); Birate Sonia (University of Virginia); Kwame Agyemang Baffour (Graduate Center, City University of New York); and Oyewole Oyekoya (Hunter College, City University of New York)\n---------------------\nPerceptually Adaptive Real-Time Tone Mapping\n\nTone mapping operators aim to remap content to a display's dynamic range. Virtual reality is a popular new display modality that has significant differences from other media, making the use of traditional tone mapping techniques difficult. Moreover, real-time adaptive estimation of tone curves that ...\n\n\nTaimoor Tariq (Meta, Università della Svizzera italiana) and Nathan Matsuda, Eric Penner, Jerry Jia, Douglas Lanman, Ajit Ninan, and Alexandre Chapiro (Meta)\n\nRegistration Category: Full Access\n\nSession Chair: Jin Ryong Kim (TBU)
END:VEVENT
END:VCALENDAR