BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070311Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T151500
DTEND;TZID=Australia/Melbourne:20231214T161000
UID:siggraphasia_SIGGRAPH Asia 2023_sess158@linklings.com
SUMMARY:Multidisciplinary Fusion
DESCRIPTION:Technical Papers\n\nEgo3DPose: Capturing 3D Cues from Binocular Egocentric Views\n\nWe present Ego3DPose, a highly accurate binocular egocentric 3D pose reconstruction system. The binocular egocentric setup offers practicality and usefulness in various applications; however, it remains largely under-explored. It has been suffering from low pose estimation accuracy due to viewing di...\n\n\nTaeho Kang and Kyungjin Lee (Seoul National University), Jinrui Zhang (Central South University), and Youngki Lee (Seoul National University)\n---------------------\nAn Architecture and Implementation of Real-Time Sound Propagation Hardware for Mobile Devices\n\nThis paper presents a high-performance and low-power hardware architecture for real-time sound rendering on mobile devices. Traditional sound rendering algorithms require high-performance CPUs or GPUs for processing because of the high computational complexity of realizing ultra-realistic 3D audio. ...\n\n\nEUNJAE KIM, SUKWON CHOI, and JIYOUNG KIM (Sejong University, Sejongpia); JAE-HO NAH (Sangmyung University); WOONAM JUNG (Sejongpia); TAE-HYEONG LEE (Sejong University); YEON-KUG MOON (Korea Electronics Technology Institute); and WOO-CHAN PARK (Sejong University, Sejongpia)\n---------------------\nThin On-Sensor Nanophotonic Array Cameras\n\nToday's commodity camera systems rely on compound optical systems to map light originating from the scene to positions on the sensor where it gets recorded as an image. To achieve an accurate mapping without optical aberrations, i.e., deviations from Gauss' linear optics model, typical lens systems ...\n\n\nPraneeth Chakravarthula (Princeton University); Jipeng Sun (Princeton University, Northwestern University); Xiao Li, Chenyang Lei, Gene Chou, and Mario Bijelic (Princeton University); Johannes Froesch and Arka Majumdar (University of Washington); and Felix Heide (Princeton University)\n---------------------\nHand Pose Estimation with Mems-Ultrasonic Sensors\n\nHand tracking is an important aspect of human-computer interaction and has a wide range of applications in extended reality devices. However, current hand motion capture methods suffer from various limitations. For instance, visual-based hand pose estimation is susceptible to self-occlusion and chan...\n\n\nQiang Zhang, Yuanqiao Lin, Yubin Lin, and Szymon Rusinkiewicz (Princeton University)\n---------------------\nShapeSonic: Sonifying Fingertip Interactions for Non-Visual Virtual Shape Perception\n\nFor sighted users, computer graphics and virtual reality allow them to model and perceive imaginary objects and worlds. However, these approaches are inaccessible to blind and visually impaired (BVI) users, since they primarily rely on visual feedback. To this end, we introduce ShapeSonic, a system ...\n\n\nJialin Huang (George Mason University), Alexa Siu (Adobe Research), Rana Hanocka (University of Chicago), and Yotam Gingold (George Mason University)\n\nRegistration Category: Full Access\n\nSession Chair: Jae-Ho Nah (Sangmyung University)
END:VEVENT
END:VCALENDAR