BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163642Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T164000
DTEND;TZID=Australia/Melbourne:20231213T165000
UID:siggraphasia_SIGGRAPH Asia 2023_sess153_papers_662@linklings.com
SUMMARY:LiveNVS: Neural View Synthesis on Live RGB-D Streams
DESCRIPTION:Laura Fink (Friedrich-Alexander-Universität Erlangen-Nürnberg\,
  Fraunhofer IIS)\; Darius Rückert and Linus Franke (Friedrich-Alexander-
 Universität Erlangen-Nürnberg)\; Joachim Keinert (Fraunhofer IIS)\; and
  Marc Stamminger (Friedrich-Alexander-Universität Erlangen-Nürnberg)\n\n
 Existing real-time RGB-D reconstruction approaches\, like Kinect Fusion\,
  lack real-time photo-realistic visualization. This is due to noisy\,
  oversmoothed\, or incomplete geometry and blurry textures\, which are
  fused from imperfect depth maps and camera poses. Recent neural rendering
  methods can overcome many of these artifacts but are mostly optimized
  for offline usage\, hindering their integration into a live reconstruction
  pipeline.\n\nIn this paper\, we present LiveNVS\, a system that allows for
  neural novel view synthesis on a live RGB-D input stream with very low
  latency and real-time rendering. Based on the RGB-D input stream\, novel
  views are rendered by projecting neural features into the target view via
  a densely fused depth map and aggregating the features in image space
  into a target feature map. A generalizable neural network then translates
  the target feature map into a high-quality RGB image. LiveNVS achieves
  state-of-the-art neural rendering quality for unknown scenes during
  capture\, allowing users to virtually explore the scene and assess
  reconstruction quality in real time.\n\nRegistration Category: Full
  Access\n\nSession Chair: Jonah Brucker-Cohen (Lehman College / CUNY\, New
  Inc.)\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_662&sess=sess153
END:VEVENT
END:VCALENDAR
