BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070243Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T172000
DTEND;TZID=Australia/Melbourne:20231212T173000
UID:siggraphasia_SIGGRAPH Asia 2023_sess142_papers_652@linklings.com
SUMMARY:Adaptive Recurrent Frame Prediction with Learnable Motion Vectors
DESCRIPTION:Technical Communications\, Technical Papers\n\nZhizhen Wu (State Key Lab of CAD&CG\, Zhejiang University)\; Chenyu Zuo (State Key Lab of CAD&CG\, Zhejiang University)\; Yuchi Huo (State Key Lab of CAD&CG\, Zhejiang University\; Zhejiang Lab)\; Yazhen Yuan (Tencent)\; Yifan Peng (The University of Hong Kong (HKU))\; Guiyang Pu (China Mobile (Hangzhou) Information Technology Co.\, Ltd)\; and Rui Wang and Hujun Bao (State Key Lab of CAD&CG\, Zhejiang University)\n\nThe utilization of dedicated ray tracing graphics cards has contributed to the production of stunning visual effects in real-time rendering. However\, the demand for high frame rates and high resolutions remains a challenge to be addressed. A crucial technique for increasing frame rate and resolution is the pixel warping approach\, which exploits spatio-temporal coherence.\nTo this end\, existing super-resolution and frame prediction methods rely heavily on motion vectors from rendering engine pipelines to track object movements.\nThis work builds upon state-of-the-art heuristic approaches by exploring a novel adaptive recurrent frame prediction framework that integrates learnable motion vectors. Our framework supports the prediction of transparency\, particles\, and texture animations\, with improved motion vectors that capture shading\, reflections\, and occlusions\, in addition to geometry movements.\nWe also introduce a feature streaming neural network\, dubbed FSNet\, that allows for the adaptive prediction of one or multiple sequential frames. Extensive experiments against state-of-the-art methods demonstrate that FSNet can operate at lower latency with significant visual enhancements and can upscale frame rates by at least two times. This approach offers a flexible pipeline to improve the rendering frame rates of various graphics applications and devices.\n\nRegistration Category: Full Access\n\nSession Chair: Michael Gharbi (Adobe\, MIT)
URL:https://asia.siggraph.org/2023/full-program?id=papers_652&sess=sess142
END:VEVENT
END:VCALENDAR