BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023303Z
LOCATION:G510\, G Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241204T105000
DTEND;TZID=Asia/Tokyo:20241204T114000
UID:siggraphasia_SIGGRAPH Asia 2024_sess282@linklings.com
SUMMARY:Mixed Reality & Holography
DESCRIPTION:Technical Communications\n\nThe Technical Communications program at SIGGRAPH Asia serves as an invaluable platform for presenting cutting-edge work that may not neatly align with the Technical Papers session. Attendees can expect to explore fresh and thought-provoking ideas, glean practical insights from real-world production work, and discover innovative applications spanning various disciplines, from graphics and vision to AI and VR.\n\nDuring these sessions, leading experts from academia and industry will present their latest findings, offering a glimpse into cutting-edge research and development. From geometry and animation to virtual reality and machine learning, attendees can expect to explore a diverse array of topics at the intersection of graphics and other fields.\n\nUnder the overarching theme of Curious Minds, attendees can delve into discussions surrounding innovation, interdisciplinary discovery, and the role of education in shaping the future of technology. Whether you’re a seasoned researcher or a curious enthusiast, the Technical Communications program promises to offer insights that spark curiosity and inspire new perspectives.\n\nShrunken Reality: Augmenting Real-World Contexts in Real-Time on Realistic Miniature Dioramas\n\nWe propose Shrunken Reality, which captures real-world contexts in real time using cameras and projects them onto realistically replicated miniature dioramas, creating a unique user experience and opening up a range of potential applications.\n\n\nMinjae Lee, Jiho Bae, Ungsik Kim, Sang-Min Choi, and Suwon Lee (Gyeongsang National University)\n---------------------\nFocal Surface Holographic Light Transport using Learned Spatially Adaptive Convolutions\n\nWe introduce a learned focal surface light propagation network for Computer-Generated Holography (CGH), improving efficiency by mapping complex fields onto focal surfaces and reducing hologram optimization time by 1.5x.\n\n\nChuanjun Zheng and Yicheng Zhan (University College London (UCL)), Liang Shi (Massachusetts Institute of Technology), Ozan Cakmakci (Google), and Kaan Akşit (University College London (UCL))\n---------------------\nAutomatic Generation of Multimodal 4D Effects for Immersive Video Watching Experiences\n\nThe system uses AI to extract multiple features from movies and automatically generates multisensory 4D effects for home theaters. A user study confirmed that it enhances the viewing experience.\n\n\nSeoyong Nam, Minho Chung, Haerim Kim, Eunchae Kim, Taehyeon Kim, and Yongjae Yoo (Hanyang University)\n---------------------\nSee-Through Face Display: Enabling Gaze Communication for Any Face—Human or AI\n\nWe present See-Through Face Display, an eye-contact display system designed to enhance gaze awareness in both human-to-human and human-to-avatar communication.\n\n\nKazuya Izumi, Ryosuke Hyakuta, and Ippei Suzuki (Digital Nature Group, University of Tsukuba) and Yoichi Ochiai (Research and Development Center for Digital Nature)\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Yifan Peng (University of Hong Kong)
END:VEVENT
END:VCALENDAR