BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070241Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_998@linklings.com
SUMMARY:MOCHA: Real-Time Motion Characterization via Context Matching
DESCRIPTION:Technical Papers\n\nDeok-Kyeong Jang (KAIST, MOVIN Inc.); Yuting Ye (Meta); Jungdam Won (Seoul National University); and Sung-Hee Lee (KAIST)\n\nTransforming neutral, characterless input motions to embody the distinct style of a notable character in real time is highly compelling for character animation. This paper introduces MOCHA, a novel online motion characterization framework that transfers both motion styles and body proportions from a target character to an input source motion. MOCHA begins by encoding the input motion into a motion feature that structures the body part topology and captures motion dependencies for effective characterization. Central to our framework is the Neural Context Matcher, which generates a motion feature for the target character with the most similar context to the input motion feature. The conditioned autoregressive model of the Neural Context Matcher can produce temporally coherent character features in each time frame. To generate the final characterized pose, our Characterizer network incorporates the characteristic aspects of the target motion feature into the input motion feature while preserving its context. This is achieved through a transformer model that introduces the adaptive instance normalization and context mapping-based cross-attention, effectively injecting the character feature into the source feature. We validate the performance of our framework through comparisons with prior work and an ablation study. Our framework can easily accommodate various applications, including characterization with only sparse input and real-time characterization. Additionally, we contribute a high-quality motion dataset comprising six different characters performing a range of motions, which can serve as a valuable resource for future research.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_998&sess=sess209
END:VEVENT
END:VCALENDAR