BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023309Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241203T150800
DTEND;TZID=Asia/Tokyo:20241203T151900
UID:siggraphasia_SIGGRAPH Asia 2024_sess107_papers_922@linklings.com
SUMMARY:OLAT Gaussians for Generic Relightable Appearance Acquisition
DESCRIPTION:Technical Papers\n\nZhiyi Kuang (State Key Laboratory of CAD&CG\, Zhejiang University)\; Yanchao Yang and Siyan Dong (University of Hong Kong)\; Jiayue Ma (State Key Laboratory of CAD&CG\, Zhejiang University)\; Hongbo Fu (Hong Kong University of Science and Technology)\; and Youyi Zheng (State Key Laboratory of CAD&CG\, Zhejiang University)\n\nOne-light-at-a-time (OLAT) images sample a broader range of object appearance changes than images captured under constant lighting\, and are therefore superior as input for object relighting. Although existing methods have produced reasonable relighting quality from OLAT images\, they rely on surface-like representations\, limiting their capacity to model volumetric objects such as fur. Moreover\, their rendering is time-consuming and far from real time. To address these issues\, we propose OLAT Gaussians\, which build relightable representations of objects from multi-view OLAT images. We build our pipeline on 3D Gaussian Splatting (3DGS)\, which achieves real-time\, high-quality rendering. To augment 3DGS with relighting capability\, we assign each Gaussian a learnable feature vector that serves as an index into the object’s appearance field. Specifically\, we decompose the appearance field into a light transport function and a scattering function. The former accounts for light transmittance and foreshortening effects\, while the latter represents the object’s material properties for scattering light. Rather than using an off-the-shelf physically based parametric rendering formulation\, we model both functions with multi-layer perceptrons (MLPs). This makes our method suitable for a wide range of objects\, e.g.\, opaque surfaces\, semi-transparent volumes\, fur\, and fabrics. Given a camera view and a point-light position\, we compute each Gaussian’s color as the product of the light transport value\, the scattering value\, and the light intensity\, and then render the target image through the 3DGS rasterizer. To enhance rendering quality\, we further use a proxy mesh that provides OLAT Gaussians with normals to improve highlights and with visibility cues to improve shadows. Extensive experiments demonstrate that our method produces state-of-the-art rendering quality\, with significantly more detail in texture-rich areas than previous methods. Our method also renders in real time\, allowing users to interactively modify views and lights and see immediate results\, which is not possible with the offline rendering of previous methods.\n\nRegistration Category: Full Access\, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Hongzhi Wu (Zhejiang University\; State Key Laboratory of CAD&CG\, Zhejiang University)
URL:https://asia.siggraph.org/2024/program/?id=papers_922&sess=sess107
END:VEVENT
END:VCALENDAR
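
The abstract above states a concrete shading rule: each Gaussian's color is the product of a learned light transport value, a learned scattering value, and the light intensity, with both functions realized as MLPs indexed by a per-Gaussian feature vector. Below is a minimal PyTorch sketch of that single step, reconstructed from the abstract alone rather than from the authors' code; the class name, network widths, activations, and choice of MLP inputs are all assumptions, and the 3DGS rasterization that would consume these per-Gaussian colors is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AppearanceField(nn.Module):
    # Hypothetical two-MLP appearance field, following the abstract's
    # decomposition: a light transport function (transmittance and
    # foreshortening) and a scattering function (material response),
    # both queried via a learnable per-Gaussian feature vector.
    def __init__(self, feat_dim: int = 32, hidden: int = 64):
        super().__init__()
        # Transport MLP: feature + light direction -> non-negative scalar.
        self.transport = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )
        # Scattering MLP: feature + view + light directions -> RGB response.
        self.scattering = nn.Sequential(
            nn.Linear(feat_dim + 6, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, feats, view_dirs, light_dirs, light_intensity):
        # feats: (N, feat_dim); view_dirs, light_dirs: (N, 3) unit vectors.
        T = self.transport(torch.cat([feats, light_dirs], dim=-1))         # (N, 1)
        S = self.scattering(torch.cat([feats, view_dirs, light_dirs], dim=-1))  # (N, 3)
        # Per-Gaussian color = transport * scattering * intensity; these
        # colors would then be splatted by the 3DGS rasterizer (not shown).
        return T * S * light_intensity

# Usage sketch with random inputs, to show the expected shapes only.
field = AppearanceField()
n = 5
feats = torch.randn(n, 32)
view = F.normalize(torch.randn(n, 3), dim=-1)
light = F.normalize(torch.randn(n, 3), dim=-1)
colors = field(feats, view, light, light_intensity=2.0)
print(colors.shape)  # torch.Size([5, 3])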