BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070241Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_779@linklings.com
SUMMARY:ReShader: View-Dependent Highlights for Single Image View-Synthesis
DESCRIPTION:Technical Papers\n\nAvinash Paliwal and Brandon G. Nguyen (Texas A&M University)\, Andrii Tsarov (Leia Inc.)\, and Nima Khademi Kalantari (Texas A&M University)\n\nIn recent years\, novel view synthesis from a single image has seen significant progress thanks to the rapid advancements in 3D scene representation and image inpainting techniques. While the current approaches are able to synthesize geometrically consistent novel views\, they often do not handle the view-dependent effects properly. Specifically\, the highlights in their synthesized images usually appear to be glued to the surfaces\, making the novel views unrealistic. To address this major problem\, we make a key observation that the process of synthesizing novel views requires changing the shading of the pixels based on the novel camera\, and moving them to appropriate locations. Therefore\, we propose to split the view synthesis process into two independent tasks of pixel reshading and relocation. During the reshading process\, we take the single image as the input and adjust its shading based on the novel camera. This reshaded image is then used as the input to an existing view synthesis method to relocate the pixels and produce the final novel view image. We propose to use a neural network to perform reshading and generate a large set of synthetic input-reshaded pairs to train our network. We demonstrate that our approach produces plausible novel view images with realistic moving highlights on a variety of real world scenes.\n\nRegistration Category: Full Access\, Enhanced Access\, Trade Exhibitor\, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_779&sess=sess209
END:VEVENT
END:VCALENDAR