BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070247Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T122500
DTEND;TZID=Australia/Melbourne:20231214T124000
UID:siggraphasia_SIGGRAPH Asia 2023_sess170_papers_517@linklings.com
SUMMARY:Low-Light Image Enhancement with Wavelet-based Diffusion Models
DESCRIPTION:Technical Papers\n\nHai Jiang (Sichuan University); Ao Luo
  and Haoqiang Fan (Megvii); Songchen Han (Sichuan University); and
  Shuaicheng Liu (University of Electronic Science and Technology of
  China, Megvii)\n\nDiffusion models have achieved promising results in
  image restoration tasks, yet they suffer from time-consuming inference,
  excessive computational resource consumption, and unstable restoration.
  To address these issues, we propose a robust and efficient
  Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
  Specifically, we present a wavelet-based conditional diffusion model
  (WCDM) that leverages the generative power of diffusion models to
  produce results with satisfactory perceptual fidelity. It also takes
  advantage of the strengths of wavelet transformation to greatly
  accelerate inference and reduce computational resource usage without
  sacrificing information. To avoid chaotic content and diversity, we
  perform both forward diffusion and denoising in the training phase of
  WCDM, enabling the model to achieve stable denoising and reduced
  randomness during inference. Moreover, we design a high-frequency
  restoration module (HFRM) that utilizes the vertical and horizontal
  details of the image to complement the diagonal information for better
  fine-grained restoration. Extensive experiments on publicly available
  real-world benchmarks demonstrate that our method outperforms existing
  state-of-the-art methods both quantitatively and visually, and it
  achieves remarkable improvements in efficiency compared to previous
  diffusion-based methods. In addition, we empirically show that applying
  our method to low-light face detection reveals its latent practical
  value. Code is available at
  https://github.com/JianghaiSCU/Diffusion-Low-Light.\n\nRegistration
  Category: Full Access\n\nSession Chair: Xiangyu Xu (Xi'an Jiaotong
  University)
URL:https://asia.siggraph.org/2023/full-program?id=papers_517&sess=sess170
END:VEVENT
END:VCALENDAR