BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070242Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_tog_109@linklings.com
SUMMARY:Spatiotemporally Consistent HDR Indoor Lighting Estimation
DESCRIPTION:Technical Papers\n\nZhengqin Li (Meta, University of California San Diego); Yu Li and Mikhail Okunev (Meta); Manmohan Chandraker (University of California San Diego); and Zhao Dong (Meta)\n\nWe propose a physically-motivated deep learning framework to solve a general version of the challenging indoor lighting estimation problem. Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position. Particularly, when the input is an LDR video sequence, our framework not only progressively refines the lighting prediction as it sees more regions, but also preserves temporal consistency by keeping the refinement smooth. Our framework reconstructs a spherical Gaussian lighting volume (SGLV) through a tailored 3D encoder-decoder, which enables spatially consistent lighting prediction through volume ray tracing, a hybrid blending network for detailed environment maps, an in-network Monte-Carlo rendering layer to enhance photorealism for virtual object insertion, and recurrent neural networks (RNN) to achieve temporally consistent lighting prediction with a video sequence as the input. For training, we significantly enhance the OpenRooms public dataset of photorealistic synthetic indoor scenes with around 360K HDR environment maps of much higher resolution and 38K video sequences, rendered with GPU-based path tracing. Experiments show that our framework achieves lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods, leading to photorealistic AR applications such as object insertion.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=tog_109&sess=sess209
END:VEVENT
END:VCALENDAR