BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721001T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19730401T030000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070242Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_183@linklings.com
SUMMARY:Towards Practical Capture of High-Fidelity Relightable Avatars
DESCRIPTION:Technical Papers\n\nHaotian Yang, Mingwu Zheng, Wanquan Feng,
  and Haibin Huang (Kuaishou Technology); Yu-Kun Lai (Cardiff University);
  and Pengfei Wan, Zhongyuan Wang, and Chongyang Ma (Kuaishou
  Technology)\n\nIn this paper, we propose a novel framework,
  Tracking-free Relightable Avatar (TRAvatar), for capturing and
  reconstructing high-fidelity 3D avatars. Compared to previous methods,
  TRAvatar works in a more practical and efficient setting. Specifically,
  TRAvatar is trained with dynamic image sequences captured in a Light
  Stage under varying lighting conditions, enabling realistic relighting
  and real-time animation for avatars in diverse scenes. Additionally,
  TRAvatar allows for tracking-free avatar capture and obviates the need
  for accurate surface tracking under varying illumination conditions. Our
  contributions are two-fold: First, we propose a novel network
  architecture that explicitly builds on and ensures the satisfaction of
  the linear nature of lighting. Trained on simple group light captures,
  TRAvatar can predict the appearance in real-time with a single forward
  pass, achieving high-quality relighting effects under illuminations of
  arbitrary environment maps. Second, we jointly optimize the facial
  geometry and relightable appearance from scratch based on image
  sequences, where the tracking is implicitly learned. This tracking-free
  approach brings robustness for establishing temporal correspondences
  between frames under different lighting conditions. Extensive
  qualitative and quantitative experiments demonstrate that our framework
  achieves superior performance for photorealistic avatar animation and
  relighting.\n\nRegistration Category: Full Access, Enhanced Access,
  Trade Exhibitor, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_183&sess=sess209
END:VEVENT
END:VCALENDAR
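
The abstract's first contribution rests on the linearity of light transport: an image of the subject lit by any combination of Light Stage groups equals the correspondingly weighted sum of images lit by each group alone, so appearance under an arbitrary environment map can be assembled from a small set of group-light captures. Below is a minimal NumPy sketch of that superposition step only; the relight helper, the array shapes, and the random stand-ins for captures and environment-map weights are illustrative assumptions, not the paper's actual method, which replaces the captured basis with a learned network evaluated in a single forward pass.

import numpy as np

def relight(basis_images, light_weights):
    # basis_images:  (L, H, W, 3); basis_images[i] is the subject lit by
    #                light group i alone (hypothetical capture layout).
    # light_weights: (L, 3); RGB intensity per light group, e.g. sampled
    #                from an environment map in each group's direction.
    # Linearity of light transport: the relit image is the per-channel
    # weighted sum of the single-group images.
    return np.einsum("lhwc,lc->hwc", basis_images, light_weights)

# Toy usage with random stand-ins: 4 light groups, 2x2 RGB images.
rng = np.random.default_rng(0)
basis = rng.random((4, 2, 2, 3))
env_weights = rng.random((4, 3))
relit = relight(basis, env_weights)
print(relit.shape)  # (2, 2, 3)

Scaling any single weight scales only that light's contribution to the output, which is the superposition property the abstract says the network architecture is designed to preserve.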