BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070311Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T174500
DTEND;TZID=Australia/Melbourne:20231213T182900
UID:siggraphasia_SIGGRAPH Asia 2023_sess127@linklings.com
SUMMARY:Beyond Skin Deep
DESCRIPTION:Technical Communications\, Technical Papers\n\nPortrait Expression Editing With Mobile Photo Sequence\n\nWe introduce ExShot\, a system that allows high-quality portrait expression editing from mobile photo sequences to extract expression information and ensure editing qualities.\n\n\nYiqin Zhao (Worcester Polytechnic Institute); Rohit Pandey\, Yinda Zhang\, Ruofei Du\, Feitong Tan\, and Chetan Ramaiah (Google Inc.); Tian Guo (Worcester Polytechnic Institute); and Sean Fanello (Google Inc.)\n---------------------\nLitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces\n\nHigh-fidelity\, photorealistic 3D capture of a human face is a long-standing problem in computer graphics -- the complex material of skin\, intricate geometry of hair\, and fine-scale textural details make it challenging. Traditional techniques rely on very large and expensive capture rigs to reconstru...\n\n\nKripasindhu Sarkar (Google Inc.); Marcel Bühler and Simon Li (ETH Zürich\, Google Inc.); and Daoye Wang\, Delio Vicini\, Jérémy Riviere\, Yinda Zhang\, Sergio Orts-Escolano\, Paulo Gotardo\, Thabo Beeler\, and Abhimitra Meka (Google Inc.)\n---------------------\nEfficient Incremental Potential Contact for Actuated Face Simulation\n\nWe present a simulator for face animation driven by muscle actuation. Through the integration of IPC and our proposed optimization strategies\, we can efficiently simulate challenging expressions with rich contact.\n\n\nBo Li\, Lingchen Yang\, and Barbara Solenthaler (ETH Zürich)\n---------------------\nMapping and Recognition of Facial Expressions on Another Person's Look-Alike Avatar\n\nAnalyzes facial expressions and eye movements from three actors mapped onto one look-alike avatar\, aiming to enhance VR realism and study the effects on identification and recognition.\n\n\nBirate Sonia (University of Virginia)\, Trinity Suma (Columbia University)\, and Kwame Agyemang and Oyewole Oyekoya (Hunter College)\n---------------------\nEmotional Speech-Driven Animation with Content-Emotion Disentanglement\n\nTo be widely adopted\, 3D facial avatars need to be animated easily\, realistically\, and directly from speech signals. While the best recent methods generate 3D animations that are synchronized with the input audio\, they largely ignore the impact of emotions on facial expressions. Instead\, their focu...\n\n\nRadek Daněček (Max Planck Institute for Intelligent Systems); Kiran Chhatre (KTH Royal Institute of Technology); Shashank Tripathi\, Yandong Wen\, and Michael Black (Max Planck Institute for Intelligent Systems); and Timo Bolkart (Max Planck Institute for Intelligent Systems)\n\nRegistration Category: Full Access\n\nSession Chair: Jernej Barbic (University of Southern California)
END:VEVENT
END:VCALENDAR