BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023309Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241203T145600
DTEND;TZID=Asia/Tokyo:20241203T150800
UID:siggraphasia_SIGGRAPH Asia 2024_sess107_papers_238@linklings.com
SUMMARY:GS^3: Efficient Relighting with Triple Gaussian Splatting
DESCRIPTION:Technical Papers\n\nZoubin Bi, Yixin Zeng, Chong Zeng, Fan Pei, Xiang Feng, Kun Zhou, and Hongzhi Wu (State Key Laboratory of CAD&CG, Zhejiang University)\n\nWe present a spatial and angular Gaussian based representation and a triple splatting process, for real-time, high-quality novel lighting-and-view synthesis from multi-view point-lit input images. To describe complex appearance, we employ a Lambertian plus a mixture of angular Gaussians as an effective reflectance function for each spatial Gaussian. To generate self-shadow, we splat all spatial Gaussians towards the light source to obtain shadow values, which are further refined by a small multi-layer perceptron. To compensate for other effects like global illumination, another network is trained to compute and add a per-spatial-Gaussian RGB tuple. The effectiveness of our representation is demonstrated on 30 samples with a wide variation in geometry (from solid to fluffy) and appearance (from translucent to anisotropic), as well as using different forms of input data, including rendered images of synthetic/reconstructed objects, photographs captured with a handheld camera and a flash, or from a professional lightstage. We achieve a training time of 40-70 minutes and a rendering speed of 90 fps on a single commodity GPU. Our results compare favorably with state-of-the-art techniques in terms of quality/performance.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Hongzhi Wu (Zhejiang University; State Key Laboratory of CAD&CG, Zhejiang University)
URL:https://asia.siggraph.org/2024/program/?id=papers_238&sess=sess107
END:VEVENT
END:VCALENDAR