BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241205T130000
DTEND;TZID=Asia/Tokyo:20241205T141000
UID:siggraphasia_SIGGRAPH Asia 2024_sess132@linklings.com
SUMMARY:Characters and Crowds
DESCRIPTION:Technical Papers\n\nEach paper gives a 10-minute presentation.\n\nPDP: Physics-Based Character Animation via Diffusion Policy\n\nGenerating diverse and realistic human motion that can physically interact with an environment remains a challenging research area in character animation. Meanwhile, diffusion-based methods, as proposed by the robotics community, have demonstrated the ability to capture highly diverse and multi-moda...\n\n\nTakara Truong, Michael Piseno, Zhaoming Xie, and Karen Liu (Stanford University)\n---------------------\nCBIL: Collective Behavior Imitation Learning for Fish from Real Videos\n\nReproducing realistic collective behaviors presents a captivating yet formidable challenge. Traditional rule-based methods rely on hand-crafted principles, limiting motion diversity and realism in generated collective behaviors. Recent imitation learning methods learn from data but often require gro...\n\n\nYifan Wu (University of Hong Kong); Zhiyang Dou (University of Hong Kong, University of Pennsylvania); Yuko Ishiwaka and Shun Ogawa (SoftBank); Yuke Lou (University of Hong Kong); Wenping Wang (Texas A&M University); Lingjie Liu (University of Pennsylvania); and Taku Komura (University of Hong Kong)\n---------------------\nResolving Collisions in Dense 3D Crowd Animations\n\nWe propose a contact-aware method for synthesizing dense 3D crowds of animated characters. Unlike existing methods, our approach prevents character intersections by modeling contacts using physics-based techniques. This results in real-time, collision-free animations with enhanced realism and geomet...\n\n\nGonzalo Gomez-Nogales, Melania Prieto-Martin, Cristian Romero, Marc Comino-Trinidad, and Pablo Ramon-Prieto (Universidad Rey Juan Carlos); Anne-Hélène Olivier (INRIA, Université de Rennes, CNRS, IRISA, M2S Centre de Rennes); Ludovic Hoyet (Institut national de recherche en informatique et en automatique (INRIA)); Miguel Otaduy (Universidad Rey Juan Carlos); Julien Pettre (Institut national de recherche en informatique et en automatique (INRIA)); and Dan Casas (Universidad Rey Juan Carlos)\n---------------------\nBody Gesture Generation for Multimodal Conversational Agents\n\nCreating intelligent virtual agents with realistic conversational abilities necessitates a multimodal communication approach extending beyond text. Body gestures, in particular, play a pivotal role in delivering a lifelike user experience by providing additional context, such as agreement, confusion...\n\n\nSunwoo Kim, Minwook Chang, and Yoonhee Kim (NCSOFT) and Jehee Lee (Seoul National University)\n---------------------\nMonkey See, Monkey Do: Harnessing Self-attention in Motion Diffusion for Zero-shot Motion Transfer\n\nGiven the remarkable results of motion synthesis with diffusion models, a natural question arises: how can we effectively leverage these models for motion editing? Existing diffusion-based motion editing methods overlook the profound potential of the prior embedded within the weights of pre-trained ...\n\n\nSigal Raab, Inbar Gat, Nathan Sala, Guy Tevet, and Rotem Shalev-Arkushin (Tel Aviv University); Ohad Fried (Reichman University); and Amit Haim Bermano and Daniel Cohen-Or (Tel Aviv University)\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Yi Zhou (Adobe)
END:VEVENT
END:VCALENDAR