BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241205T105600
DTEND;TZID=Asia/Tokyo:20241205T110800
UID:siggraphasia_SIGGRAPH Asia 2024_sess129_papers_497@linklings.com
SUMMARY:RoMo: A Robust Solver for Full-body Unlabeled Optical Motion Capture
DESCRIPTION:Technical Papers\n\nXiaoyu Pan and Bowen Zheng (State Key Laboratory of CAD&CG, Zhejiang University); Xinwei Jiang, Zijiao Zeng, and Qilong Kou (Tencent Games Digital Content Technology Center); He Wang (Department of Computer Science and UCL Centre for Artificial Intelligence, University College London); and Xiaogang Jin (State Key Laboratory of CAD&CG, Zhejiang University)\n\nOptical motion capture (MoCap) is the "gold standard" for accurately capturing full-body motions. To make use of raw MoCap point data, the system labels the points with corresponding body part locations and solves the full-body motions. However, MoCap data often contains mislabeling, occlusion and positional errors, requiring extensive manual correction. To alleviate this burden, we introduce RoMo, an automatic learning-based framework for robustly labeling and solving raw optical motion capture data. In the labeling stage, RoMo employs a divide-and-conquer strategy to break down the complex full-body labeling challenge into manageable subtasks: full-body segmentation and part-specific labeling. To utilize the temporal continuity of markers, RoMo generates marker tracklets using a K-partite graph-based clustering algorithm, where markers serve as nodes and edges are formed based on positional and feature similarities. For motion solving, to prevent error accumulation along the kinematic chain, we introduce a hybrid inverse kinematic solver that utilizes joint positions as intermediate representations and adjusts the template skeleton to match estimated joint rotations. We demonstrate that RoMo achieves high labeling and solving accuracy across multiple metrics and various datasets. Extensive comparisons show that our method outperforms state-of-the-art research methods. On a real dataset, RoMo improves the F1 score of hand labeling from 0.94 to 0.98, and reduces the position error of body motion solving by 25%. Furthermore, RoMo can be applied in scenarios where commercial systems are inadequate.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Yuting Ye (Reality Labs Research, Meta)
URL:https://asia.siggraph.org/2024/program/?id=papers_497&sess=sess129
END:VEVENT
END:VCALENDAR