BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070240Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_652@linklings.com
SUMMARY:Adaptive Recurrent Frame Prediction with Learnable Motion Vectors
DESCRIPTION:Technical Papers\n\nZhizhen Wu (State Key Lab of CAD&CG\, Zhejiang University)\; Chenyu Zuo (State Key Lab of CAD&CG\, Zhejiang University)\; Yuchi Huo (State Key Lab of CAD&CG\, Zhejiang University\; Zhejiang Lab)\; Yazhen Yuan (Tencent)\; Yifan Peng (The University of Hong Kong (HKU))\; Guiyang Pu (China Mobile (Hangzhou) Information Technology Co.\, Ltd)\; and Rui Wang and Hujun Bao (State Key Lab of CAD&CG\, Zhejiang University)\n\nThe utilization of dedicated ray tracing graphics cards has contributed to the production of stunning visual effects in real-time rendering. However\, the demand for high frame rates and high resolutions remains a challenge to be addressed. A crucial technique for increasing frame rate and resolution is the pixel warping approach\, which exploits spatio-temporal coherence.\nTo this end\, existing super-resolution and frame prediction methods rely heavily on motion vectors from rendering engine pipelines to track object movements.\nThis work builds upon state-of-the-art heuristic approaches by exploring a novel adaptive recurrent frame prediction framework that integrates learnable motion vectors. Our framework supports the prediction of transparency\, particles\, and texture animations\, with improved motion vectors that capture shading\, reflections\, and occlusions\, in addition to geometry movements.\nWe also introduce a feature streaming neural network\, dubbed FSNet\, that allows for the adaptive prediction of one or multiple sequential frames. Extensive experiments against state-of-the-art methods demonstrate that FSNet can operate at lower latency with significant visual enhancements and can upscale frame rates by at least two times. This approach offers a flexible pipeline to improve the rendering frame rates of various graphics applications and devices.\n\nRegistration Category: Full Access\, Enhanced Access\, Trade Exhibitor\, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_652&sess=sess209
END:VEVENT
END:VCALENDAR