BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070241Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_tog_108@linklings.com
SUMMARY:High-Resolution Volumetric Reconstruction for Clothed Humans
DESCRIPTION:Technical Papers\n\nSicong Tang (Simon Fraser University); Guangyuan Wang, Qing Ran, Lingzhi Li, and Li Shen (Alibaba); and Ping Tan (Simon Fraser University)\n\nWe present a novel method for reconstructing clothed humans from a sparse set of RGB images, e.g., 1-6 views. We revisit the volumetric approach and demonstrate that better performance can be achieved with proper system design. The volumetric representation offers significant advantages in leveraging 3D spatial context through 3D convolutions, and the notorious quantization error is largely negligible at a reasonably large yet affordable volume resolution, e.g., 512. Extensive experimental results show that our method reduces the mean point-to-surface (P2S) error of state-of-the-art methods by more than 50%, achieving approximately 2mm accuracy at a volume resolution of 512. Additionally, images rendered from our textured model achieve a higher peak signal-to-noise ratio (PSNR) compared to state-of-the-art methods.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=tog_108&sess=sess209
END:VEVENT
END:VCALENDAR