BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241205T135600
DTEND;TZID=Asia/Tokyo:20241205T141000
UID:siggraphasia_SIGGRAPH Asia 2024_sess132_papers_725@linklings.com
SUMMARY:Body Gesture Generation for Multimodal Conversational Agents
DESCRIPTION:Technical Papers\n\nSunwoo Kim\, Minwook Chang\, and Yoonhee Kim (NCSOFT) and Jehee Lee (Seoul National University)\n\nCreating intelligent virtual agents with realistic conversational abilities necessitates a multimodal communication approach extending beyond text. Body gestures\, in particular\, play a pivotal role in delivering a lifelike user experience by providing additional context\, such as agreement\, confusion\, and emotional states. This paper introduces an integration of a motion matching framework with a learning-based approach for generating gestures\, suitable for multimodal\, real-time\, interactive conversational agents mimicking natural human discourse. Our gesture generation framework enables accurate synchronization with both the rhythm and semantics of spoken language accompanied by multimodal perceptual cues. It also incorporates gesture phasing theory from social studies to maintain critical gesture features while ensuring agile responses to unexpected interruptions and barging-in situations. Our system demonstrates responsiveness\, fluidity\, and quality beyond traditional turn-based gesture-generation methods.\n\nRegistration Category: Full Access\, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Yi Zhou (Adobe)
URL:https://asia.siggraph.org/2024/program/?id=papers_725&sess=sess132
END:VEVENT
END:VCALENDAR