BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:20081005T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
DTSTART:20080406T030000
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070245Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T160000
DTEND;TZID=Australia/Melbourne:20231213T161500
UID:siggraphasia_SIGGRAPH Asia 2023_sess125_papers_779@linklings.com
SUMMARY:ReShader: View-Dependent Highlights for Single Image View-Synthesis
DESCRIPTION:Technical Papers\, TOG\n\nAvinash Paliwal and Brandon G. Nguyen (Texas A&M University)\, Andrii Tsarov (Leia Inc.)\, and Nima Khademi Kalantari (Texas A&M University)\n\nIn recent years\, novel view synthesis from a single image has seen significant progress thanks to the rapid advancements in 3D scene representation and image inpainting techniques. While the current approaches are able to synthesize geometrically consistent novel views\, they often do not handle the view-dependent effects properly. Specifically\, the highlights in their synthesized images usually appear to be glued to the surfaces\, making the novel views unrealistic. To address this major problem\, we make a key observation that the process of synthesizing novel views requires changing the shading of the pixels based on the novel camera\, and moving them to appropriate locations. Therefore\, we propose to split the view synthesis process into two independent tasks of pixel reshading and relocation. During the reshading process\, we take the single image as the input and adjust its shading based on the novel camera. This reshaded image is then used as the input to an existing view synthesis method to relocate the pixels and produce the final novel view image. We propose to use a neural network to perform reshading and generate a large set of synthetic input-reshaded pairs to train our network. We demonstrate that our approach produces plausible novel view images with realistic moving highlights on a variety of real-world scenes.\n\nRegistration Category: Full Access\n\nSession Chair: Michael Gharbi (Adobe\, MIT)
URL:https://asia.siggraph.org/2023/full-program?id=papers_779&sess=sess125
END:VEVENT
END:VCALENDAR
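
A note on the VTIMEZONE above: Australia/Melbourne switches to daylight time (AEDT, UTC+11) on the first Sunday of October and back to standard time (AEST, UTC+10) on the first Sunday of April, so the DAYLIGHT rule carries BYMONTH=10 and the STANDARD rule BYMONTH=4; the event date of 13 December 2023 therefore falls inside the daylight window. In practice, most calendar clients resolve TZID=Australia/Melbourne against the IANA tz database rather than the embedded VTIMEZONE. The following is a minimal sketch (not part of the calendar data) that confirms how DTSTART is interpreted, assuming only Python 3.9+ and its standard zoneinfo module; the variable names are illustrative.

from datetime import datetime
from zoneinfo import ZoneInfo

# DTSTART;TZID=Australia/Melbourne:20231213T160000 as a timezone-aware datetime
melbourne = ZoneInfo("Australia/Melbourne")
start = datetime(2023, 12, 13, 16, 0, tzinfo=melbourne)

print(start.tzname(), start.utcoffset())    # AEDT 11:00:00 (December is daylight time)
print(start.astimezone(ZoneInfo("UTC")))    # 2023-12-13 05:00:00+00:00

Running it shows the 16:00 local start corresponds to 05:00 UTC, consistent with the October-to-April DAYLIGHT rule.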