BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163633Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_480@linklings.com
SUMMARY:A Neural Implicit Representation for the Image Stack: Depth\, All in Focus\, and High Dynamic Range
DESCRIPTION:Chao Wang (Max-Planck-Institut für Informatik)\; Ana Serrano (Universidad de Zaragoza)\; and Xingang Pan\, Bin Chen\, Hans-Peter Seidel\, Karol Myszkowski\, Christian Theobalt\, Krzysztof Wolski\, and Thomas Leimkühler (Max-Planck-Institut für Informatik)\n\nIn everyday photography\, physical limitations of camera sensors and lenses frequently lead to a variety of degradations in captured images\, such as saturation or defocus blur. A common approach to overcoming these limitations is image stack fusion\, which involves capturing multiple images with different focal distances or exposures. For instance\, to obtain an all-in-focus image\, a set of multi-focus images is captured. Similarly\, capturing multiple exposures allows for the reconstruction of high dynamic range (HDR).\nIn this paper\, we present a novel approach that combines neural fields with an expressive camera model to achieve a unified reconstruction of an all-in-focus HDR image from an image stack.\nOur approach is composed of a set of specialized neural fields tailored to address specific sub-problems along our pipeline: we use fields to predict flow to overcome misalignments arising from lens breathing\, depth and all-in-focus images to account for depth of field\, and tonemapping to deal with sensor responses and saturation -- all trained using a physically inspired supervision structure with a differentiable thin lens model at its core.\nAn important benefit of our approach is its ability to handle these tasks simultaneously or independently\, providing flexible post-editing capabilities such as refocusing and exposure adjustment.\nBy sampling the three primary factors in photography within our framework (focal distance\, aperture\, and exposure time)\, we conduct a thorough exploration to gain valuable insights into their significance and impact on overall image quality.\nThrough extensive validation\, we demonstrate that our method outperforms existing approaches in both depth-from-defocus and all-in-focus image reconstruction tasks. Moreover\, our approach exhibits promising results in each of these three dimensions\, showcasing its potential to enhance captured image quality and provide greater control in post-processing.\n\nRegistration Category: Full Access\, Enhanced Access\, Trade Exhibitor\, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_480&sess=sess209
END:VEVENT
END:VCALENDAR