BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B5 (1)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241206T094600
DTEND;TZID=Asia/Tokyo:20241206T095800
UID:siggraphasia_SIGGRAPH Asia 2024_sess139_papers_1121@linklings.com
SUMMARY:The Lips, the Teeth, the tip of the Tongue: LTT Tracking
DESCRIPTION:Technical Papers\n\nFeisal Rasras, Stanislav Pidhorskyi, and Tomas Simon (Reality Labs Research); Hallison Paz (Instituto Nacional de Matemática Pura e Aplicada (IMPA)); and He Wen, Jason Saragih, and Javier Romero (Reality Labs Research)\n\nA mesh-based generative model of the inner-mouth system is presented, which includes teeth and gums for the upper and lower jaw, the tongue, and their placement inside the human head. The model is capable of capturing person-specific detail, enabling the creation of highly accurate avatars that exceed the quality of prior mesh-based representations. The method combines data from dental mouth scans and facial performances captured in a multi-camera capture rig. The system employs a precise segmentation model that can differentiate complex tongue motion. A novel inverse-rendering formulation is used in a staged modeling procedure, producing accurate registration of tongue, teeth, and jaw as well as improved disentanglement of non-rigid face motion from rigid head motion. The system is demonstrated on novel held-out subjects, achieving highly accurate reconstructions that exceed prior mesh-based avatar representations.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Kui Wu (LightSpeed Studios)
URL:https://asia.siggraph.org/2024/program/?id=papers_1121&sess=sess139
END:VEVENT
END:VCALENDAR