BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Linklings LLC//NONSGML Linklings//EN
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163654Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T154500
DTEND;TZID=Australia/Melbourne:20231213T160000
UID:siggraphasia_SIGGRAPH Asia 2023_sess125_tog_109@linklings.com
SUMMARY:Spatiotemporally Consistent HDR Indoor Lighting Estimation
DESCRIPTION:Zhengqin Li (Meta, University of California San Diego); Yu Li 
 and Mikhail Okunev (Meta); Manmohan Chandraker (University of California S
 an Diego); and Zhao Dong (Meta)\n\nWe propose a physically-motivated deep 
 learning framework to solve a general version of the challenging indoor li
 ghting estimation problem. Given a single LDR image with a depth map, our 
 method predicts spatially consistent lighting at any given image position.
  Particularly, when the input is an LDR video sequence, our framework not 
 only progressively refines the lighting prediction as it sees more regions
 , but also preserves temporal consistency by keeping the refinement smooth
 . Our framework reconstructs a spherical Gaussian lighting volume (SGLV) t
 hrough a tailored 3D encoder-decoder, which enables spatially consistent l
 ighting prediction through volume ray tracing, a hybrid blending network
  for detailed environment maps, an in-network Monte-Carlo rendering layer
  to
  enhance photorealism for virtual object insertion, and recurrent neural n
 etworks (RNN) to achieve temporally consistent lighting prediction with a 
 video sequence as the input. For training, we significantly enhance the Op
 enRooms public dataset of photorealistic synthetic indoor scenes with arou
 nd 360K HDR environment maps of much higher resolution and 38K video seque
 nces, rendered with GPU-based path tracing. Experiments show that our fram
 ework achieves lighting prediction with higher quality compared to state-o
 f-the-art single-image or video-based methods, leading to photorealistic A
 R applications such as object insertion.\n\nRegistration Category: Full Ac
 cess\n\nSession Chair: Michael Gharbi (Reve AI, Massachusetts Institute of
  Technology (MIT))\n\n
URL:https://asia.siggraph.org/2023/full-program?id=tog_109&sess=sess125
END:VEVENT
END:VCALENDAR
