BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Linklings LLC//NONSGML Linklings//EN
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721001T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19730401T030000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163641Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T090000
DTEND;TZID=Australia/Melbourne:20231214T091500
UID:siggraphasia_SIGGRAPH Asia 2023_sess124_papers_891@linklings.com
SUMMARY:BakedAvatar: Baking Neural Fields for Real-Time Head Avatar Synthe
 sis
DESCRIPTION:Hao-Bin Duan (Beihang University); Miao Wang (Beihang Universi
 ty, Zhongguancun Laboratory); Jin-Chuan Shi and Xu-Chuan Chen (Beihang Uni
 versity); and Yan-Pei Cao (Tencent)\n\nSynthesizing photorealistic 4D huma
 n head avatars from videos is essential for VR/AR, telepresence, and video
  game applications. Although existing Neural Radiance Fields (NeRF)-based 
 methods achieve high-fidelity results, the computational expense limits th
 eir use in real-time applications. To overcome this limitation, we introdu
 ce BakedAvatar, a novel representation for real-time neural head avatar sy
 nthesis, deployable in a standard polygon rasterization pipeline. Our appr
 oach extracts deformable multi-layer meshes from learned isosurfaces of th
 e head and computes expression-, pose-, and view-dependent appearances tha
 t can be baked into static textures for efficient rasterization. We thus p
 ropose a three-stage pipeline for neural head avatar synthesis, which incl
 udes learning continuous deformation, manifold, and radiance fields, extra
 cting layered meshes and textures, and fine-tuning texture details with di
 fferential rasterization. Experimental results demonstrate that our repres
 entation generates synthesis results of comparable quality to other state-
 of-the-art methods while significantly reducing the inference time require
 d. We further showcase various head avatar synthesis results from monocula
 r videos, including view synthesis, face reenactment, expression editing, 
 and pose editing, all at interactive frame rates.\n\nRegistration Category
 : Full Access\n\nSession Chair: Lin Gao (University of Chinese Academy of 
 Sciences)\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_891&sess=sess124
END:VEVENT
END:VCALENDAR
