BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023309Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241203T144500
DTEND;TZID=Asia/Tokyo:20241203T145600
UID:siggraphasia_SIGGRAPH Asia 2024_sess107_papers_701@linklings.com
SUMMARY:DifFRelight: Diffusion-Based Facial Performance Relighting
DESCRIPTION:Technical Papers\n\nMingming He (Netflix Eyeline Studios); Pascal Clausen (Netflix Eyeline Studios, Osylum); and Ahmet Levent Taşel, Li Ma, Oliver Pilarski, Wenqi Xian, Laszlo Rikker, Xueming Yu, Ryan Burgert, Ning Yu, and Paul Debevec (Netflix Eyeline Studios)\n\nWe present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation. Leveraging a subject-specific dataset containing diverse facial expressions captured under various lighting conditions, including flat-lit and one-light-at-a-time (OLAT) scenarios, we train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs. Our framework includes spatially-aligned conditioning of flat-lit captures and random noise, along with integrated lighting information for global control, utilizing prior knowledge from the pre-trained Stable Diffusion model. This model is then applied to dynamic facial performances captured in a consistent flat-lit environment and reconstructed for novel-view synthesis using a scalable dynamic 3D Gaussian Splatting method to maintain quality and consistency in the relit results. In addition, we introduce unified lighting control by integrating a novel area lighting representation with directional lighting, allowing for joint adjustments in light size and direction. We also enable high dynamic range imaging (HDRI) composition using multiple directional lights to produce dynamic sequences under complex lighting conditions. Our evaluations demonstrate the model's efficiency in achieving precise lighting control and generalizing across various facial expressions while preserving detailed features such as skin texture and hair. The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency, advancing photorealism within our framework.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Hongzhi Wu (Zhejiang University; State Key Laboratory of CAD&CG, Zhejiang University)
URL:https://asia.siggraph.org/2024/program/?id=papers_701&sess=sess107
END:VEVENT
END:VCALENDAR