BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070247Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T115000
DTEND;TZID=Australia/Melbourne:20231214T120000
UID:siggraphasia_SIGGRAPH Asia 2023_sess170_papers_291@linklings.com
SUMMARY:Single-Image 3D Human Digitization with Shape-guided Diffusion
DESCRIPTION:Technical Papers\n\nBadour AlBahar (Kuwait University); Shunsuke Saito, Hung-Yu Tseng, Changil Kim, and Johannes Kopf (Meta); and Jia-Bin Huang (University of Maryland)\n\nWe present an approach to generate a 360-degree view of a person with a consistent, high-resolution appearance from a single input image. NeRF and its variants typically require videos or images from different viewpoints. Most existing approaches taking monocular input either rely on ground-truth 3D scans for supervision or lack 3D consistency. While recent 3D generative models show promise of 3D-consistent human digitization, these approaches do not generalize well to diverse clothing appearances, and the results lack photorealism. Unlike existing work, we utilize high-capacity 2D diffusion models pretrained for general image synthesis tasks as an appearance prior of clothed humans. To achieve better 3D consistency while retaining the input identity, we progressively synthesize multiple views of the human in the input image by inpainting missing regions with shape-guided diffusion conditioned on silhouette and surface normal. We then fuse these synthesized multi-view images via inverse rendering to obtain a fully textured high-resolution 3D mesh of the given person. Experiments show that our approach outperforms prior methods and achieves photorealistic 360-degree synthesis of a wide range of clothed humans with complex textures from a single image.\n\nRegistration Category: Full Access\n\nSession Chair: Xiangyu Xu (Xi'an Jiaotong University)
URL:https://asia.siggraph.org/2023/full-program?id=papers_291&sess=sess170
END:VEVENT
END:VCALENDAR