BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070242Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T170000
DTEND;TZID=Australia/Melbourne:20231212T171000
UID:siggraphasia_SIGGRAPH Asia 2023_sess142_papers_644@linklings.com
SUMMARY:FuseSR: Super Resolution for Real-time Rendering through Efficient Multi-resolution Fusion
DESCRIPTION:Technical Communications, Technical Papers\n\nZhihua Zhong (State Key Lab of CAD&CG, Zhejiang University; Zhejiang University City College); Jingsen Zhu (State Key Lab of CAD&CG, Zhejiang University); Yuxin Dai (Zhejiang A&F University); Chuankun Zheng (State Key Lab of CAD&CG, Zhejiang University); Guanlin Chen (Zhejiang University City College); Yuchi Huo (Zhejiang Lab; State Key Lab of CAD&CG, Zhejiang University); and Hujun Bao and Rui Wang (State Key Lab of CAD&CG, Zhejiang University)\n\nThe workload of real-time rendering is steeply increasing as the demand for high resolution, high refresh rates, and high realism rises, overwhelming most graphics cards. To mitigate this problem, one of the most popular solutions is to render images at a low resolution to reduce rendering overhead, and then accurately upsample the low-resolution rendered image to the target resolution, a.k.a. super-resolution techniques. Most existing methods focus on exploiting information from low-resolution inputs, such as historical frames. The absence of high-frequency details in those LR inputs makes it hard for them to recover fine details in their high-resolution predictions. In this paper, we propose an efficient and effective super-resolution method that predicts high-quality upsampled reconstructions utilizing low-cost high-resolution auxiliary G-buffers as additional input. With LR images and HR G-buffers as input, the network needs to align and fuse features at multiple resolution levels. We introduce an efficient and effective H-Net architecture to solve this problem and significantly reduce rendering overhead without noticeable quality deterioration. Experiments show that our method is able to produce temporally consistent reconstructions in $4 \times 4$ and even challenging $8 \times 8$ upsampling cases at 4K resolution with real-time performance, with substantially improved quality and a significant performance boost compared to existing works.\n\nRegistration Category: Full Access\n\nSession Chair: Michael Gharbi (Adobe, MIT)
URL:https://asia.siggraph.org/2023/full-program?id=papers_644&sess=sess142
END:VEVENT
END:VCALENDAR