BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070245Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T164000
DTEND;TZID=Australia/Melbourne:20231213T165000
UID:siggraphasia_SIGGRAPH Asia 2023_sess153_papers_662@linklings.com
SUMMARY:LiveNVS: Neural View Synthesis on Live RGB-D Streams
DESCRIPTION:Technical Communications, Technical Papers\n\nLaura Fink (Friedrich-Alexander-Universität Erlangen-Nürnberg, Fraunhofer IIS); Darius Rückert and Linus Franke (Friedrich-Alexander-Universität Erlangen-Nürnberg); Joachim Keinert (Fraunhofer IIS); and Marc Stamminger (Friedrich-Alexander-Universität Erlangen-Nürnberg)\n\nExisting real-time RGB-D reconstruction approaches, such as KinectFusion, lack real-time photo-realistic visualization. This is due to noisy, oversmoothed, or incomplete geometry and blurry textures, which are fused from imperfect depth maps and camera poses. Recent neural rendering methods can overcome many such artifacts but are mostly optimized for offline use, hindering their integration into a live reconstruction pipeline.\n\nIn this paper, we present LiveNVS, a system that enables neural novel view synthesis on a live RGB-D input stream with very low latency and real-time rendering. Based on the RGB-D input stream, novel views are rendered by projecting neural features into the target view via a densely fused depth map and aggregating the features in image space into a target feature map. A generalizable neural network then translates the target feature map into a high-quality RGB image. LiveNVS achieves state-of-the-art neural rendering quality of unknown scenes during capture, allowing users to virtually explore the scene and assess reconstruction quality in real time.\n\nRegistration Category: Full Access\n\nSession Chair: Jonah Brucker-Cohen (Lehman College / CUNY, New Inc.)
URL:https://asia.siggraph.org/2023/full-program?id=papers_662&sess=sess153
END:VEVENT
END:VCALENDAR