BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023309Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241203T153100
DTEND;TZID=Asia/Tokyo:20241203T154300
UID:siggraphasia_SIGGRAPH Asia 2024_sess107_papers_296@linklings.com
SUMMARY:NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections
DESCRIPTION:Technical Papers\n\nDor Verbin, Pratul P. Srinivasan, Peter Hedman, and Ben Mildenhall (Google Research); Benjamin Attal (Carnegie Mellon University); and Richard Szeliski and Jonathan T. Barron (Google Research)\n\nNeural Radiance Fields (NeRFs) typically struggle to reconstruct and render highly specular objects, whose appearance varies quickly with changes in viewpoint. Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content. Moreover, these techniques rely on large computationally-expensive neural networks to model outgoing radiance, which severely limits optimization and rendering speed. We address these issues with an approach based on ray tracing: instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts reflection rays from these points and traces them through the NeRF representation to render feature vectors which are decoded into color using a small inexpensive network. We demonstrate that our model outperforms prior methods for view synthesis of scenes containing shiny objects, and that it is the only existing NeRF method that can synthesize photorealistic specular appearance and reflections in real-world scenes, while requiring comparable optimization time to current state-of-the-art view synthesis models.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Hongzhi Wu (Zhejiang University; State Key Laboratory of CAD&CG, Zhejiang University)
URL:https://asia.siggraph.org/2024/program/?id=papers_296&sess=sess107
END:VEVENT
END:VCALENDAR