BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070242Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T142500
DTEND;TZID=Australia/Melbourne:20231212T144000
UID:siggraphasia_SIGGRAPH Asia 2023_sess120_papers_841@linklings.com
SUMMARY:Neural Categorical Priors for Physics-Based Character Control
DESCRIPTION:Technical Papers\n\nQingxu Zhu, He Zhang, Mengting Lan, and Lei Han (Tencent)\n\nRecent advances in learning reusable motion priors have demonstrated their effectiveness in generating naturalistic behaviors. In this paper, we propose a new learning framework in this paradigm for controlling physics-based characters with significantly improved motion quality and diversity over existing state-of-the-art methods. The proposed method uses reinforcement learning (RL) to initially track and imitate life-like movements from unstructured motion clips using a discrete information bottleneck, as adopted in the Vector Quantized Variational AutoEncoder (VQ-VAE). This structure compresses the most relevant information from the motion clips into a compact yet informative latent space, i.e., a discrete space over vector-quantized codes. By sampling codes in this space from a trained categorical prior distribution, high-quality life-like behaviors can be generated, similar to the usage of VQ-VAE in computer vision. Although this prior distribution can be trained with the supervision of the encoder's output, it follows the original motion clip distribution in the dataset and could lead to imbalanced behaviors in our setting. To address this issue, we further propose a technique named prior shifting to adjust the prior distribution using curiosity-driven RL. The resulting distribution is demonstrated to offer sufficient behavioral diversity and to significantly facilitate upper-level policy learning for downstream tasks. We conduct comprehensive experiments using humanoid characters on two challenging downstream tasks: sword-and-shield striking and a two-player boxing game. Our results demonstrate that the proposed framework is capable of controlling the character to perform high-quality movements in terms of behavioral strategies, diversity, and realism.\n\nRegistration Category: Full Access\n\nSession Chair: Jungdam Won (Seoul National University)
URL:https://asia.siggraph.org/2023/full-program?id=papers_841&sess=sess120
END:VEVENT
END:VCALENDAR