BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023309Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241203T151900
DTEND;TZID=Asia/Tokyo:20241203T153100
UID:siggraphasia_SIGGRAPH Asia 2024_sess107_papers_162@linklings.com
SUMMARY:Reflection-Aware Neural Radiance Fields
DESCRIPTION:Technical Papers\n\nChen Gao, Yipeng Wang, and Changil Kim (Meta); Jia-Bin Huang (University of Maryland, College Park); and Johannes Kopf (Meta)\n\nNeural Radiance Fields (NeRF) have demonstrated exceptional capabilities in reconstructing complex scenes with high fidelity. However, NeRF's view dependency can only handle low-frequency reflections. It falls short when handling complex planar reflections, often interpreting them as erroneous scene geometries and leading to duplicated and inaccurate scene representations. To address this challenge, we introduce a reflection-aware NeRF that jointly models planar reflectors, such as windows, and explicitly casts reflected rays to capture the source of the high-frequency reflections. We query a single radiance field to render the primary color and the source of the reflection. We propose a sparse edge regularization to help utilize the true sources of reflections for rendering planar reflections rather than creating a duplicate along the primary ray at the same depth. As a result, we obtain accurate scene geometry. Rendering along the primary ray results in a clean, reflection-free view, while explicitly rendering along the reflected ray allows us to reconstruct highly detailed reflections. Our extensive quantitative and qualitative evaluations of real-world datasets demonstrate our method's enhanced performance in accurately handling reflections.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Hongzhi Wu (Zhejiang University; State Key Laboratory of CAD&CG, Zhejiang University)
URL:https://asia.siggraph.org/2024/program/?id=papers_162&sess=sess107
END:VEVENT
END:VCALENDAR