BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241205T091400
DTEND;TZID=Asia/Tokyo:20241205T092800
UID:siggraphasia_SIGGRAPH Asia 2024_sess125_papers_563@linklings.com
SUMMARY:Online Neural Denoising with Cross-Regression for Interactive Rendering
DESCRIPTION:Technical Papers\n\nHajin Choi (Gwangju Institute of Science and Technology); Seokpyo Hong (Samsung Advanced Institute of Technology); Inwoo Ha (Samsung Advanced Institute of Technology, KAIST); Nahyup Kang (Samsung Advanced Institute of Technology); and Bochang Moon (Gwangju Institute of Science and Technology)\n\nGenerating a rendered image sequence through Monte Carlo ray tracing is an appealing option when one aims to accurately simulate various lighting effects. Unfortunately, interactive rendering scenarios limit the allowable sample size for such sampling-based light transport algorithms, resulting in an unbiased but noisy image sequence. Image denoising has been widely adopted as a post-sampling process to convert such noisy image sequences into biased but temporally stable ones. The state-of-the-art strategy for interactive image denoising involves devising a deep neural network and training this network via supervised learning, i.e., optimizing the network parameters using training datasets that include an extensive set of image pairs (noisy and ground truth images). This paper adopts the prevalent approach for interactive image denoising, which relies on a neural network. However, instead of supervised learning, we propose a different learning strategy that trains our network parameters on the fly, i.e., updating them online using runtime image sequences. To achieve our denoising objective with online learning, we tailor local regression to a cross-regression form that can guide robust training of our denoising neural network. We demonstrate that our denoising framework effectively reduces noise in input image sequences while robustly preserving both geometric and non-geometric edges, without requiring the manual effort involved in preparing an external dataset.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Wenzel Jakob (École Polytechnique Fédérale de Lausanne)
URL:https://asia.siggraph.org/2024/program/?id=papers_563&sess=sess125
END:VEVENT
END:VCALENDAR