BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241205T154300
DTEND;TZID=Asia/Tokyo:20241205T155400
UID:siggraphasia_SIGGRAPH Asia 2024_sess135_papers_446@linklings.com
SUMMARY:GFFE: G-buffer Free Frame Extrapolation for Low-latency Real-time Rendering
DESCRIPTION:Technical Papers\n\nSongyin Wu (University of California Santa Barbara); Deepak Vembar, Anton Sochenov, and Selvakumar Panneer (Intel Corporation); Sungye Kim (Intel (now AMD)); Anton Kaplanyan (Intel Corporation); and Ling-Qi Yan (University of California Santa Barbara)\n\nReal-time rendering has been embracing ever-demanding effects, such as ray tracing. However, rendering such effects at high resolution and high frame rate remains challenging. Frame extrapolation methods, which do not introduce additional latency as opposed to frame interpolation methods such as DLSS 3 and FSR 3, boost the frame rate by generating future frames based on previous frames. However, extrapolation is a more challenging task because of the lack of information in disocclusion regions and complex future motions, and recent methods also have a high engine integration cost because they require G-buffers as input. We propose a G-buffer free frame extrapolation method, GFFE, with a novel heuristic framework and an efficient neural network, to plausibly generate new frames in real time without introducing additional latency. We analyze the motion of dynamic fragments and different types of disocclusion, and design the corresponding modules of the extrapolation block to handle them. After that, a lightweight shading correction network is used to correct shading and improve overall quality. GFFE achieves comparable or better results than previous interpolation and G-buffer-dependent extrapolation methods, with more efficient performance and easier integration.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Changjian Li (University of Edinburgh)
URL:https://asia.siggraph.org/2024/program/?id=papers_446&sess=sess135
END:VEVENT
END:VCALENDAR