BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023313Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241206T110800
DTEND;TZID=Asia/Tokyo:20241206T111900
UID:siggraphasia_SIGGRAPH Asia 2024_sess143_papers_1054@linklings.com
SUMMARY:GGHead: Fast and Generalizable 3D Gaussian Heads
DESCRIPTION:Technical Papers\n\nTobias Kirschstein, Simon Giebenhain, and Jiapeng Tang (Technical University of Munich); Markos Georgopoulos (Independent); and Matthias Nießner (Technical University of Munich)\n\nLearning 3D head priors from large 2D image collections is an important step towards high-quality 3D-aware human modeling. \nA core requirement is an efficient architecture that scales well to large-scale datasets and large image resolutions. \nUnfortunately, existing 3D GANs struggle to scale to generate samples at high resolutions due to their relatively slow training and rendering speeds, and typically have to rely on 2D superresolution networks at the expense of global 3D consistency. \nTo address these challenges, we propose Generative Gaussian Heads (GGHead), which adopts the recent 3D Gaussian Splatting representation within a 3D GAN framework. \nTo generate a 3D representation, we employ a powerful 2D CNN generator to predict Gaussian attributes in the UV space of a template head mesh. \nThis way, GGHead exploits the regularity of the template's UV layout, substantially facilitating the challenging task of predicting an unstructured set of 3D Gaussians. \nWe further improve the geometric fidelity of the generated 3D representations with a novel total variation loss on rendered UV coordinates. \nIntuitively, this regularization encourages neighboring rendered pixels to stem from neighboring Gaussians in the template's UV space. \nTaken together, our pipeline can efficiently generate 3D heads trained only from single-view 2D image observations. \nOur proposed framework matches the quality of existing 3D head GANs on FFHQ while being both substantially faster and fully 3D consistent. \nAs a result, we demonstrate real-time generation and rendering of high-quality 3D-consistent heads at 1024x1024 resolution for the first time.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Iain Matthews (Epic Games, Carnegie Mellon University)
URL:https://asia.siggraph.org/2024/program/?id=papers_1054&sess=sess143
END:VEVENT
END:VCALENDAR