BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070245Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T165000
DTEND;TZID=Australia/Melbourne:20231213T170000
UID:siggraphasia_SIGGRAPH Asia 2023_sess153_papers_661@linklings.com
SUMMARY:VET: Visual Error Tomography for Point Cloud Completion and High-Quality Neural Rendering
DESCRIPTION:Technical Communications, Technical Papers\n\nLinus Franke, Darius Rückert, and Laura Fink (Friedrich-Alexander Universität Erlangen-Nürnberg); Matthias Innmann (NavVis GmbH); and Marc Stamminger (Friedrich-Alexander Universität Erlangen-Nürnberg)\n\nIn the last few years, deep neural networks have opened the door to major advances in novel view synthesis. Many of these approaches are based on a (coarse) proxy geometry obtained by structure-from-motion algorithms. Small deficiencies in this proxy can be fixed by neural rendering, but larger holes or missing parts, as commonly appear for thin structures or glossy regions, still lead to very distracting artifacts and temporal instability. In this paper, we present a novel neural-rendering-based approach to detect and fix such deficiencies. As a proxy, we use a point cloud, which allows us to easily remove outlier geometry and to fill in missing geometry without complicated topological operations. Keys to our approach are (i) a differentiable, blending point-based renderer that can blend out redundant points, as well as (ii) the concept of Visual Error Tomography (VET), which allows us to lift 2D error maps to identify 3D regions lacking geometry and to spawn novel points accordingly. Furthermore, (iii) by adding points as nested environment maps, our approach allows us to generate high-quality renderings of the surroundings in the same pipeline. In our results, we show that our approach can significantly improve the quality of a point cloud obtained by structure from motion and thus increase novel view synthesis quality. In contrast to point-growing techniques, the approach can also fix large-scale holes and missing thin structures effectively. Rendering quality outperforms state-of-the-art methods and temporal stability is significantly improved, while rendering is possible at real-time frame rates.\n\nRegistration Category: Full Access\n\nSession Chair: Jonah Brucker-Cohen (Lehman College / CUNY, New Inc.)
URL:https://asia.siggraph.org/2023/full-program?id=papers_661&sess=sess153
END:VEVENT
END:VCALENDAR