BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163649Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T151500
DTEND;TZID=Australia/Melbourne:20231214T161000
UID:siggraphasia_SIGGRAPH Asia 2023_sess158@linklings.com
SUMMARY:Multidisciplinary Fusion
DESCRIPTION:ShapeSonic: Sonifying Fingertip Interactions for Non-Visual Vi
 rtual Shape Perception\n\nComputer graphics and virtual reality allow sig
 hted users to model and perceive imaginary objects and worlds. Ho
 wever, these approaches are inaccessible to blind and visually impaired (B
 VI) users, since they primarily rely on visual feedback. To this end, we i
 ntroduce ShapeSonic, a system ...\n\n\nJialin Huang (George Mason Universi
 ty), Alexa Siu (Adobe Research), Rana Hanocka (University of Chicago), and
  Yotam Gingold (George Mason University)\n---------------------\nAn Archit
 ecture and Implementation of Real-Time Sound Propagation Hardware for Mobi
 le Devices\n\nThis paper presents a high-performance and low-power hardwar
 e architecture for real-time sound rendering on mobile devices. Traditiona
 l sound rendering algorithms require high-performance CPUs or GPUs for pro
 cessing because of the high computational complexity of realizing ultra-r
 ealistic 3D audio. ...\n\n\nEunjae Kim, Sukwon Choi, and Jiyoung Kim (Sej
 ong University, Sejongpia); Jae-Ho Nah (Sangmyung University); Woonam Ju
 ng (Sejongpia); Tae-Hyeong Lee (Sejong University); Yeon-Kug Moon (Kor
 ea Electronics Technology Institute); and Woo-Chan Park (Sejong Universi
 ty, Sejong
 pia)\n---------------------\nHand Pose Estimation with Mems-Ultrasonic Sen
 sors\n\nHand tracking is an important aspect of human-computer interaction
  and has a wide range of applications in extended reality devices. However
 , current hand motion capture methods suffer from various limitations. For
  instance, visual-based hand pose estimation is susceptible to self-occlus
 ion and chan...\n\n\nQiang Zhang, Yuanqiao Lin, Yubin Lin, and Szymon Rusi
 nkiewicz (Princeton University)\n---------------------\nEgo3DPose: Capturi
 ng 3D Cues from Binocular Egocentric Views\n\nWe present Ego3DPose, a high
 ly accurate binocular egocentric 3D pose reconstruction system. The binocu
 lar egocentric setup offers practicality and usefulness in various applica
 tions; however, it remains largely under-explored. It has been suffering f
 rom low pose estimation accuracy due to viewing di...\n\n\nTaeho Kang and 
 Kyungjin Lee (Seoul National University), Jinrui Zhang (Central South Univ
 ersity), and Youngki Lee (Seoul National University)\n--------------------
 -\nThin On-Sensor Nanophotonic Array Cameras\n\nToday's commodity camera s
 ystems rely on compound optical systems to map light originating from the 
 scene to positions on the sensor where it gets recorded as an image. To ac
 hieve an accurate mapping without optical aberrations, i.e., deviations fr
 om Gauss' linear optics model, typical lens systems ...\n\n\nPraneeth Chak
 ravarthula (Princeton University); Jipeng Sun (Princeton University, North
 western University); Xiao Li, Chenyang Lei, Gene Chou, and Mario Bijelic (
 Princeton University); Johannes Froesch and Arka Majumdar (University of W
 ashington); and Felix Heide (Princeton University)\n\nRegistration Categor
 y: Full Access\n\nSession Chair: Jae-Ho Nah (Sangmyung University)
END:VEVENT
END:VCALENDAR
