BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163643Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T122500
DTEND;TZID=Australia/Melbourne:20231214T124000
UID:siggraphasia_SIGGRAPH Asia 2023_sess170_papers_517@linklings.com
SUMMARY:Low-Light Image Enhancement with Wavelet-based Diffusion Models
DESCRIPTION:Hai Jiang (Sichuan University); Ao Luo and Haoqiang Fan (Megvi
 i); Songchen Han (Sichuan University); and Shuaicheng Liu (University of E
 lectronic Science and Technology of China, Megvii)\n\nDiffusion models hav
 e achieved promising results in image restoration tasks, yet they suffer f
 rom time-consuming inference, excessive computational resource consumption
 , and unstable restoration. To address these issues, we propose a robust a
 nd efficient Diffusion-based Low-Light image enhancement approach, dub
 bed DiffLL. Specifically, we present a wavelet-based conditional diffusi
 on model (WCDM) that leverages the generative power of diffusion models t
 o produce results with satisfactory perceptual fidelity. It also takes a
 dvantage of the strengths of wavelet transformation to greatly accelerat
 e inference and reduce computational resource usage without sacrificing i
 nformation. To avoid chaotic content and diversity, we perform both forwa
 rd diffusion and denoising in the training phase of WCDM, enabling the mo
 del to achieve stable denoising and reduced randomness during inference. M
 oreover, we design a high-frequency restoration module (HFRM) that utiliz
 es the vertical and horizontal details of the image to complement the dia
 gonal information for better fine-grained restoration. Extensive experime
 nts on publicly available real-world benchmarks demonstrate that our meth
 od outperforms existing state-of-the-art methods both quantitatively and v
 isually, and that it achieves remarkable improvements in efficiency over p
 revious diffusion-based methods. In addition, we empirically show that app
 lying our method to low-light face detection reveals its latent practical v
 alue. Code is available at https://github.com/JianghaiSCU/Diffusion-Low-Li
 ght.\n\nRegistration Category: Full Access\n\nSession Chair: Xiangyu Xu (X
 i'an Jiaotong University)\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_517&sess=sess170
END:VEVENT
END:VCALENDAR
