Adaptive Recurrent Frame Prediction with Learnable Motion Vectors
Session: Rendering
Description: The advent of dedicated ray tracing graphics cards has enabled stunning visual effects in real-time rendering. However, the demand for high frame rates and high resolutions remains a challenge. A crucial technique for increasing both frame rate and resolution is the pixel warping approach, which exploits spatio-temporal coherence.
To this end, existing super-resolution and frame prediction methods rely heavily on motion vectors from rendering engine pipelines to track object movements.
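The pixel warping described above reuses shading from a previous frame by following per-pixel motion vectors back to their source locations. As a minimal sketch of that idea (not the paper's method), the following assumes NumPy, a dense motion-vector field from the engine, and simple nearest-neighbor resampling; the function name `warp_backward` is hypothetical:

```python
import numpy as np

def warp_backward(prev_frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Warp prev_frame toward the current frame using per-pixel motion vectors.

    prev_frame: (H, W, C) float array of shaded colors.
    motion: (H, W, 2) array; motion[y, x] = (dx, dy) points from the current
    pixel back to its source position in prev_frame (engine convention assumed).
    Nearest-neighbor sampling; a production warp would use bilinear filtering
    and handle disocclusions, which simple reuse cannot fill.
    """
    h, w = prev_frame.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Source coordinates in the previous frame, clamped to the image bounds.
    src_x = np.clip(np.round(xs + motion[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + motion[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]
```

Geometry-only motion vectors of this kind track surface movement but not shading, reflections, or transparency, which is the gap the learnable motion vectors below are meant to address.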
This work builds upon state-of-the-art heuristic approaches by exploring a novel adaptive recurrent frame prediction framework that integrates learnable motion vectors. Our framework supports the prediction of transparency, particles, and texture animations, with improved motion vectors that capture shading, reflections, and occlusions, in addition to geometry movements.
We also introduce a feature streaming neural network, dubbed FSNet, that allows for the adaptive prediction of one or multiple sequential frames. Extensive experiments against state-of-the-art methods demonstrate that FSNet operates at lower latency with significant visual enhancements and can increase frame rates by at least a factor of two. This approach offers a flexible pipeline for improving the rendering frame rates of various graphics applications and devices.
Event Type
Technical Communications
Technical Papers
Time: Tuesday, 12 December 2023, 5:20pm - 5:30pm
Location: Meeting Room C4.8, Level 4 (Convention Centre)