BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070240Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_891@linklings.com
SUMMARY:BakedAvatar: Baking Neural Fields for Real-Time Head Avatar Synthesis
DESCRIPTION:Technical Papers\n\nHao-Bin Duan (Beihang University)\; Miao Wang (Beihang University\, Zhongguancun Laboratory)\; Jin-Chuan Shi and Xu-Chuan Chen (Beihang University)\; and Yan-Pei Cao (Tencent)\n\nSynthesizing photorealistic 4D human head avatars from videos is essential for VR/AR\, telepresence\, and video game applications. Although existing Neural Radiance Fields (NeRF)-based methods achieve high-fidelity results\, the computational expense limits their use in real-time applications. To overcome this limitation\, we introduce BakedAvatar\, a novel representation for real-time neural head avatar synthesis\, deployable in a standard polygon rasterization pipeline. Our approach extracts deformable multi-layer meshes from learned isosurfaces of the head and computes expression-\, pose-\, and view-dependent appearances that can be baked into static textures for efficient rasterization. We thus propose a three-stage pipeline for neural head avatar synthesis: learning continuous deformation\, manifold\, and radiance fields\; extracting layered meshes and textures\; and fine-tuning texture details with differential rasterization. Experimental results demonstrate that our representation produces synthesis results of comparable quality to other state-of-the-art methods while significantly reducing the required inference time. We further showcase various head avatar synthesis results from monocular videos\, including view synthesis\, face reenactment\, expression editing\, and pose editing\, all at interactive frame rates.\n\nRegistration Category: Full Access\, Enhanced Access\, Trade Exhibitor\, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_891&sess=sess209
END:VEVENT
END:VCALENDAR