BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070242Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T160000
DTEND;TZID=Australia/Melbourne:20231212T161000
UID:siggraphasia_SIGGRAPH Asia 2023_sess160_papers_998@linklings.com
SUMMARY:MOCHA: Real-Time Motion Characterization via Context Matching
DESCRIPTION:Technical Papers, TOG\n\nDeok-Kyeong Jang (KAIST, MOVIN Inc.); Yuting Ye (Meta); Jungdam Won (Seoul National University); and Sung-Hee Lee (KAIST)\n\nTransforming neutral, characterless input motions to embody the distinct style of a notable character in real time is highly compelling for character animation. This paper introduces MOCHA, a novel online motion characterization framework that transfers both motion styles and body proportions from a target character to an input source motion. MOCHA begins by encoding the input motion into a motion feature that structures the body part topology and captures motion dependencies for effective characterization. Central to our framework is the Neural Context Matcher, which generates a motion feature for the target character with the most similar context to the input motion feature. The conditioned autoregressive model of the Neural Context Matcher can produce temporally coherent character features in each time frame. To generate the final characterized pose, our Characterizer network incorporates the characteristic aspects of the target motion feature into the input motion feature while preserving its context. This is achieved through a transformer model that introduces adaptive instance normalization and context mapping-based cross-attention, effectively injecting the character feature into the source feature. We validate the performance of our framework through comparisons with prior work and an ablation study. Our framework can easily accommodate various applications, including characterization with only sparse input and real-time characterization. Additionally, we contribute a high-quality motion dataset comprising six different characters performing a range of motions, which can serve as a valuable resource for future research.\n\nRegistration Category: Full Access\n\nSession Chair: Ioannis Karamouzas (Clemson University, University of California Riverside)
URL:https://asia.siggraph.org/2023/full-program?id=papers_998&sess=sess160
END:VEVENT
END:VCALENDAR