BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070241Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_902@linklings.com
SUMMARY:FLARE: Fast Learning of Animatable and Relightable Mesh Avatars
DESCRIPTION:Technical Papers\n\nShrisha Bharadwaj (Max Planck Institute for Intelligent Systems); Yufeng Zheng (ETH Zürich, Max Planck Institute for Intelligent Systems); Otmar Hilliges (ETH Zürich); and Michael Black and Victoria Fernandez Abrevaya (Max Planck Institute for Intelligent Systems)\n\nOur goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems. While 3D meshes enable efficient processing and are highly portable, they lack realism in terms of shape and appearance. Neural representations, on the other hand, are realistic but lack compatibility and are slow to train and render. Our key insight is that it is possible to efficiently learn high-fidelity 3D mesh representations via differentiable rendering, by exploiting highly-optimized methods from traditional computer graphics and approximating some of the components with neural networks. Specifically, we introduce FLARE, a technique that enables fast creation of animatable and relightable mesh avatars from a single monocular video. First, we learn a canonical geometry using a mesh representation, enabling efficient differentiable rasterization and straightforward animation via learned blendshapes and linear blend skinning weights. Second, we follow physically-based rendering and factor observed colors into intrinsic albedo, roughness, and a neural representation of the illumination, allowing the learned avatars to be relit in novel scenes. Since our input videos are captured on a single device with a narrow field of view, modeling the surrounding environment light is non-trivial. Based on the split-sum approximation for modeling specular reflections, we address this by approximating the pre-filtered environment map with a multi-layer perceptron (MLP) modulated by the surface roughness, eliminating the need to explicitly model the light. We demonstrate that our mesh-based avatar formulation, combined with learned deformation, material and lighting MLPs, produces avatars with high-quality geometry and appearance, while also being efficient to train and render compared to existing approaches.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_902&sess=sess209
END:VEVENT
END:VCALENDAR