BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241205T144500
DTEND;TZID=Asia/Tokyo:20241205T145600
UID:siggraphasia_SIGGRAPH Asia 2024_sess135_papers_156@linklings.com
SUMMARY:ToonCrafter: Generative Cartoon Interpolation
DESCRIPTION:Technical Papers\n\nJinbo Xing (Chinese University of Hong Kon
 g); Hanyuan Liu (City University of Hong Kong); Menghan Xia, Yong Zhang, X
 intao Wang, and Ying Shan (Tencent); and Tien-Tsin Wong (Monash University
 , Chinese University of Hong Kong)\n\nWe introduce ToonCrafter, a novel ap
 proach that transcends traditional correspondence-based cartoon video inte
 rpolation, paving the way for generative interpolation. Traditional metho
 ds, which implicitly assume linear motion and the absence of complicated p
 henomena like dis-occlusion, often struggle with the exaggerated non-linear 
 and large motions with occlusion commonly found in cartoons, resulting in 
 implausible or even failed interpolation results. To overcome these limita
 tions, we explore the potential of adapting live-action video priors to be
 tter suit cartoon interpolation within a generative framework. ToonCrafter
  effectively addresses the challenges faced when applying live-action vide
 o motion priors to generative cartoon interpolation. First, we design a to
 on rectification learning strategy that seamlessly adapts live-action vide
 o priors to the cartoon domain, resolving the domain gap and content leak
 age issues. Next, we introduce a dual-reference-based 3D decoder to compen
 sate for lost details due to the highly compressed latent prior spaces, en
 suring the preservation of fine details in interpolation results. Finally,
  we design a flexible sketch encoder that empowers users with interactive 
 control over the interpolation results. Experimental results demonstrate t
 hat our proposed method not only produces visually convincing and more nat
 ural dynamics, but also effectively handles dis-occlusion. The comparative
  evaluation demonstrates the notable superiority of our approach over exis
 ting competitors.\n\nRegistration Category: Full Access, Full Access Suppo
 rter\n\nLanguage Format: English Language\n\nSession Chair: Changjian Li (
 University of Edinburgh)
URL:https://asia.siggraph.org/2024/program/?id=papers_156&sess=sess135
END:VEVENT
END:VCALENDAR
