BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070243Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T171000
DTEND;TZID=Australia/Melbourne:20231212T172000
UID:siggraphasia_SIGGRAPH Asia 2023_sess162_papers_672@linklings.com
SUMMARY:Inovis: Instant Novel-View Synthesis
DESCRIPTION:Technical Papers\n\nMathias Harrer and Linus Franke (Friedrich-Alexander-Universität Erlangen-Nürnberg); Laura Fink (Friedrich-Alexander-Universität Erlangen-Nürnberg, Fraunhofer IIS); and Marc Stamminger and Tim Weyrich (Friedrich-Alexander-Universität Erlangen-Nürnberg)\n\nNovel-view synthesis is an ill-posed problem in that it requires inference of previously unseen information. Recently, reviving the traditional field of image-based rendering, neural methods proved particularly suitable for this interpolation/extrapolation task; however, they often require a-priori scene-completeness or costly pre-processing steps and generally suffer from long (scene-specific) training times. Our work draws from recent progress in neural spatio-temporal supersampling to enhance a state-of-the-art neural renderer’s ability to infer novel-view information at inference time. We adapt a supersampling architecture [Xiao et al. 2020], which resamples previously rendered frames, to instead recombine nearby camera images in a multi-view dataset. These input frames are warped into a joint target frame, guided by the most recent (point-based) scene representation, followed by neural interpolation. The resulting architecture gains sufficient robustness to significantly improve transferability to previously unseen datasets. In particular, this enables novel applications for neural rendering where dynamically streamed content is directly incorporated in a (neural) image-based reconstruction of a scene. As we will show, our method reaches state-of-the-art performance when compared to previous works that rely on static and sufficiently densely sampled scenes; in addition, we demonstrate our system's particular suitability for dynamically streamed content, where our approach is able to produce high-fidelity novel-view synthesis even with significantly fewer available frames than competing neural methods.\n\nRegistration Category: Full Access\n\nSession Chair: Binh-Son Hua (Trinity College Dublin)
URL:https://asia.siggraph.org/2023/full-program?id=papers_672&sess=sess162
END:VEVENT
END:VCALENDAR
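
For reference, here is a minimal sketch of how the record above can be consumed programmatically. It assumes the data is saved as event.ics (a hypothetical filename) and uses only the Python standard library; it undoes RFC 5545 line folding and TEXT escaping by hand, but deliberately leaves VTIMEZONE/RRULE expansion to a dedicated library.

# Minimal reader for the event above -- a sketch, not a validating
# RFC 5545 parser. "event.ics" is an assumed filename; timezone and
# recurrence-rule expansion are out of scope here.
from pathlib import Path

def unfold(text: str) -> list[str]:
    # Undo RFC 5545 line folding: a line starting with a space or
    # horizontal tab continues the previous line.
    lines: list[str] = []
    for raw in text.splitlines():
        if raw[:1] in (" ", "\t") and lines:
            lines[-1] += raw[1:]
        else:
            lines.append(raw)
    return lines

def unescape(value: str) -> str:
    # Undo RFC 5545 TEXT escaping: \n or \N -> newline; \, \; \\ -> literal.
    out, i = [], 0
    while i < len(value):
        if value[i] == "\\" and i + 1 < len(value):
            nxt = value[i + 1]
            out.append("\n" if nxt in "nN" else nxt)
            i += 2
        else:
            out.append(value[i])
            i += 1
    return "".join(out)

event: dict[str, str] = {}
in_event = False
for line in unfold(Path("event.ics").read_text(encoding="utf-8")):
    name, _, value = line.partition(":")
    name = name.split(";", 1)[0]  # drop parameters such as TZID=...
    if (name, value) == ("BEGIN", "VEVENT"):
        in_event = True
    elif (name, value) == ("END", "VEVENT"):
        in_event = False
    elif in_event:
        event[name] = unescape(value)

print(event["SUMMARY"])    # Inovis: Instant Novel-View Synthesis
print(event["DTSTART"])    # 20231212T171000
print(event["LOCATION"])   # Meeting Room C4.9+C4.10, Level 4 (Convention Centre)

Note that the sketch keeps only each property's value: in the file itself, DTSTART and DTEND carry a TZID parameter, so those timestamps are local Melbourne time rather than UTC (UTC values end in Z, as in DTSTAMP).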