BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070242Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_661@linklings.com
SUMMARY:VET: Visual Error Tomography for Point Cloud Completion and High-Quality Neural Rendering
DESCRIPTION:Technical Papers\n\nLinus Franke\, Darius Rückert\, and Laura Fink (Friedrich-Alexander Universität Erlangen-Nürnberg); Matthias Innmann (NavVis GmbH); and Marc Stamminger (Friedrich-Alexander Universität Erlangen-Nürnberg)\n\nIn the last few years\, deep neural networks have opened the door for big advances in novel view synthesis. Many of these approaches are based on a (coarse) proxy geometry obtained by structure-from-motion algorithms. Small deficiencies in this proxy can be fixed by neural rendering\, but larger holes or missing parts\, as they commonly appear for thin structures or glossy regions\, still lead to very distracting artifacts and temporal instability. In this paper\, we present a novel neural-rendering-based approach to detect and fix such deficiencies. As a proxy\, we use a point cloud\, which allows us to easily remove outlier geometry and to fill in missing geometry without complicated topological operations. Keys to our approach are (i) a differentiable\, blending point-based renderer that can blend out redundant points\, as well as (ii) the concept of Visual Error Tomography (VET)\, which allows us to lift 2D error maps to identify 3D regions lacking geometry and to spawn novel points accordingly. Furthermore\, (iii) by adding points as nested environment maps\, our approach allows us to generate high-quality renderings of the surroundings in the same pipeline. In our results\, we show that our approach can significantly improve the quality of a point cloud obtained by structure from motion and thus increase novel view synthesis quality. In contrast to point-growing techniques\, the approach can also fix large-scale holes and missing thin structures effectively. Rendering quality outperforms state-of-the-art methods and temporal stability is significantly improved\, while rendering is possible at real-time frame rates.\n\nRegistration Category: Full Access\, Enhanced Access\, Trade Exhibitor\, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_661&sess=sess209
END:VEVENT
END:VCALENDAR