BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070245Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T145000
DTEND;TZID=Australia/Melbourne:20231213T150500
UID:siggraphasia_SIGGRAPH Asia 2023_sess164_tog_108@linklings.com
SUMMARY:High-Resolution Volumetric Reconstruction for Clothed Humans
DESCRIPTION:Technical Papers\, TOG\n\nSicong Tang (Simon Fraser 
 University)\; Guangyuan Wang\, Qing Ran\, Lingzhi Li\, and Li Shen 
 (Alibaba)\; and Ping Tan (Simon Fraser University)\n\nWe present a 
 novel method for reconstructing clothed humans from a sparse set 
 of\, e.g.\, 1-6 RGB images. We revisit the volumetric approach and 
 demonstrate that better performance can be achieved with proper 
 system design. The volumetric representation offers significant 
 advantages in leveraging 3D spatial context through 3D 
 convolutions\, and the notorious quantization error is largely 
 negligible with a reasonably large yet affordable volume 
 resolution\, e.g.\, 512. Extensive experimental results show that 
 our method significantly reduces the mean point-to-surface (P2S) 
 precision of state-of-the-art methods by more than 50% to achieve 
 approximately 2mm accuracy with a 512 volume resolution. 
 Additionally\, images rendered from our textured model achieve a 
 higher peak signal-to-noise ratio (PSNR) compared to 
 state-of-the-art methods.\n\nRegistration Category: Full 
 Access\n\nSession Chair: Parag Chaudhuri (Indian Institute of 
 Technology Bombay)
URL:https://asia.siggraph.org/2023/full-program?id=tog_108&sess=sess164
END:VEVENT
END:VCALENDAR