BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163631Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_998@linklings.com
SUMMARY:MOCHA: Real-Time Motion Characterization via Context Matching
DESCRIPTION:Deok-Kyeong Jang (KAIST, MOVIN Inc.); Yuting Ye (Meta); Jungdam Won (Seoul National University); and Sung-Hee Lee (KAIST)\n\nTransforming neutral, characterless input motions to embody the distinct style of a notable character in real time is highly compelling for character animation. This paper introduces MOCHA, a novel online motion characterization framework that transfers both motion styles and body proportions from a target character to an input source motion. MOCHA begins by encoding the input motion into a motion feature that structures the body part topology and captures motion dependencies for effective characterization. Central to our framework is the Neural Context Matcher, which generates a motion feature for the target character with the most similar context to the input motion feature. The conditioned autoregressive model of the Neural Context Matcher can produce temporally coherent character features in each time frame. To generate the final characterized pose, our Characterizer network incorporates the characteristic aspects of the target motion feature into the input motion feature while preserving its context. This is achieved through a transformer model that introduces adaptive instance normalization and context mapping-based cross-attention, effectively injecting the character feature into the source feature. We validate the performance of our framework through comparisons with prior work and an ablation study. Our framework can easily accommodate various applications, including characterization with only sparse input and real-time characterization. Additionally, we contribute a high-quality motion dataset comprising six different characters performing a range of motions, which can serve as a valuable resource for future research.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_998&sess=sess209
END:VEVENT
END:VCALENDAR