BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241203T172800
DTEND;TZID=Asia/Tokyo:20241203T174000
UID:siggraphasia_SIGGRAPH Asia 2024_sess110_papers_1007@linklings.com
SUMMARY:Neural Global Illumination via Superposed Deformable Feature Fields
DESCRIPTION:Technical Papers\n\nChuankun Zheng, Yuchi Huo, Hongxiang Huang, and Hongtao Sheng (State Key Laboratory of CAD&CG, Zhejiang University); Junrong Huang (City University of Hong Kong); Rui Tang and Hao Zhu (Manycore Inc.); and Rui Wang and Hujun Bao (State Key Laboratory of CAD&CG, Zhejiang University)\n\nInteractive rendering of dynamic scenes with complex global illumination has been a long-standing problem in computer graphics.\nRecent advances in neural rendering demonstrate new promising possibilities.\nHowever, while existing methods have achieved impressive results, complex rendering effects (e.g., caustics) remain challenging.\nThis paper presents a novel neural rendering method that is able to generate high-quality global illumination effects, including but not limited to caustics, soft shadows, and indirect highlights, for dynamic scenes with varying camera, lighting conditions, materials, and object transformations.\nInspired by object-oriented transfer field representations, we employ deformable neural feature fields to implicitly model the impacts of individual objects or light sources on global illumination.\nBy employing neural feature fields, our method gains the ability to represent high-frequency details, thus supporting complex rendering effects.\nWe superpose these feature fields in latent space and utilize a lightweight decoder to obtain global illumination estimates, which allows our neural representations to spontaneously adapt to the contribution of individual objects or light sources to global illumination in a data-driven manner, thus further improving the quality.\nOur experiments demonstrate the effectiveness of our method on a wide range of scenes with complex light paths, materials, and geometry.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Michael Wimmer (TU Wien)
URL:https://asia.siggraph.org/2024/program/?id=papers_1007&sess=sess110
END:VEVENT
END:VCALENDAR