BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070244Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T141000
DTEND;TZID=Australia/Melbourne:20231213T142500
UID:siggraphasia_SIGGRAPH Asia 2023_sess164_papers_902@linklings.com
SUMMARY:FLARE: Fast Learning of Animatable and Relightable Mesh Avatars
DESCRIPTION:Technical Papers, TOG\n\nShrisha Bharadwaj (Max Planck Institute for Intelligent Systems); Yufeng Zheng (ETH Zürich, Max Planck Institute for Intelligent Systems); Otmar Hilliges (ETH Zürich); and Michael Black and Victoria Fernandez Abrevaya (Max Planck Institute for Intelligent Systems)\n\nOur goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems. While 3D meshes enable efficient processing and are highly portable, they lack realism in terms of shape and appearance. Neural representations, on the other hand, are realistic but lack compatibility and are slow to train and render. Our key insight is that it is possible to efficiently learn high-fidelity 3D mesh representations via differentiable rendering, by exploiting highly-optimized methods from traditional computer graphics and approximating some of the components with neural networks. Specifically, we introduce FLARE, a technique that enables fast creation of animatable and relightable mesh avatars from a single monocular video. First, we learn a canonical geometry using a mesh representation, enabling efficient differentiable rasterization and straightforward animation via learned blendshapes and linear blend skinning weights. Second, we follow physically-based rendering and factor observed colors into intrinsic albedo, roughness, and a neural representation of the illumination, allowing the learned avatars to be relit in novel scenes. Since our input videos are captured on a single device with a narrow field of view, modeling the surrounding environment light is non-trivial. Based on the split-sum approximation for modeling specular reflections, we address this by approximating the pre-filtered environment map with a multi-layer perceptron (MLP) modulated by the surface roughness, eliminating the need to explicitly model the light. We demonstrate that our mesh-based avatar formulation, combined with learned deformation, material and lighting MLPs, produces avatars with high-quality geometry and appearance, while also being efficient to train and render compared to existing approaches.\n\nRegistration Category: Full Access\n\nSession Chair: Parag Chaudhuri (Indian Institute of Technology Bombay)
URL:https://asia.siggraph.org/2023/full-program?id=papers_902&sess=sess164
END:VEVENT
END:VCALENDAR