BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070249Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231215T105500
DTEND;TZID=Australia/Melbourne:20231215T110500
UID:siggraphasia_SIGGRAPH Asia 2023_sess154_papers_729@linklings.com
SUMMARY:MCNeRF: Monte Carlo Rendering and Denoising for Real-Time NeRFs
DESCRIPTION:Technical Papers\n\nKunal Gupta (UC San Diego); Milos Hasan, Zexiang Xu, Fujun Luan, Kalyan Sunkavalli, and Xin Sun (Adobe Inc.); Manmohan Chandraker (UC San Diego); and Sai Bi (Adobe Inc.)\n\nThe volume rendering step used in Neural Radiance Fields (NeRFs) produces highly photorealistic results, but is inherently slow because it evaluates an MLP at a large number of sample points per ray. Previous work has addressed this by either proposing neural scene representations that are faster to evaluate or by pre-computing (and approximating) scene properties to reduce render times. In this work, we propose MCNeRF, a general Monte Carlo-based rendering algorithm that can speed up any NeRF representation. We show that the NeRF volume rendering integral can be efficiently computed via Monte Carlo integration using an importance sampling scheme based on ray transmittance distributions. This allows us to, at render time, vary the number of color samples evaluated per ray to trade off visual quality (noise variance) against performance. These noisy Monte Carlo estimates can be further denoised using an inexpensive image-space denoiser trained per-scene. We demonstrate that MCNeRF can be used to speed up NeRF representations like TensoRF and Instant-NGP by 7x while closely matching their visual quality and without making the scene approximations that real-time NeRF rendering methods usually make.\n\nRegistration Category: Full Access\n\nSession Chair: Yuchi Huo (Zhejiang University, Korea Advanced Institute of Science and Technology)
URL:https://asia.siggraph.org/2023/full-program?id=papers_729&sess=sess154
END:VEVENT
END:VCALENDAR