BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070245Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T090000
DTEND;TZID=Australia/Melbourne:20231214T091500
UID:siggraphasia_SIGGRAPH Asia 2023_sess124_papers_891@linklings.com
SUMMARY:BakedAvatar: Baking Neural Fields for Real-Time Head Avatar Synthesis
DESCRIPTION:Technical Papers\, TOG\n\nHao-Bin Duan (Beihang University)\; Miao Wang (Beihang University\, Zhongguancun Laboratory)\; Jin-Chuan Shi and Xu-Chuan Chen (Beihang University)\; and Yan-Pei Cao (Tencent)\n\nSynthesizing photorealistic 4D human head avatars from videos is essential for VR/AR\, telepresence\, and video game applications. Although existing Neural Radiance Fields (NeRF)-based methods achieve high-fidelity results\, their computational expense limits their use in real-time applications. To overcome this limitation\, we introduce BakedAvatar\, a novel representation for real-time neural head avatar synthesis\, deployable in a standard polygon rasterization pipeline. Our approach extracts deformable multi-layer meshes from learned isosurfaces of the head and computes expression-\, pose-\, and view-dependent appearances that can be baked into static textures for efficient rasterization. We thus propose a three-stage pipeline for neural head avatar synthesis\, which includes learning continuous deformation\, manifold\, and radiance fields\; extracting layered meshes and textures\; and fine-tuning texture details with differentiable rasterization. Experimental results demonstrate that our representation generates synthesis results of comparable quality to other state-of-the-art methods while significantly reducing the inference time required. We further showcase various head avatar synthesis results from monocular videos\, including view synthesis\, face reenactment\, expression editing\, and pose editing\, all at interactive frame rates.\n\nRegistration Category: Full Access\n\nSession Chair: Lin Gao (University of Chinese Academy of Sciences)
URL:https://asia.siggraph.org/2023/full-program?id=papers_891&sess=sess124
END:VEVENT
END:VCALENDAR