BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241205T114300
DTEND;TZID=Asia/Tokyo:20241205T115400
UID:siggraphasia_SIGGRAPH Asia 2024_sess129_papers_1174@linklings.com
SUMMARY:ELMO: Enhanced Real-time LiDAR Motion Capture through Upsampling
DESCRIPTION:Technical Papers\n\nDeok-Kyeong Jang (MOVIN Inc.); Dongseok Yang (MOVIN Inc., KAIST); Deok-Yun Jang (MOVIN Inc., GIST); Byeoli Choi (MOVIN Inc., KAIST); Donghoon Shin (MOVIN Inc.); and Sung-Hee Lee (KAIST)\n\nThis paper introduces ELMO, a real-time upsampling motion capture framework designed for a single LiDAR sensor. Modeled as a conditional autoregressive transformer-based upsampling motion generator, ELMO achieves 60 fps motion capture from a 20 fps LiDAR point cloud sequence. The key feature of ELMO is the coupling of the self-attention mechanism with thoughtfully designed embedding modules for motion and point clouds, significantly elevating the motion quality.\nTo facilitate accurate motion capture, we develop a one-time skeleton calibration model capable of predicting user skeleton offsets from a single-frame point cloud. Additionally, we introduce a novel data augmentation technique utilizing a LiDAR simulator, which enhances global root tracking to improve environmental understanding.\nTo demonstrate the effectiveness of our method, we compare ELMO with state-of-the-art methods in both image-based and point cloud-based motion capture. We further conduct an ablation study to validate our design principles.\nELMO's fast inference time makes it well-suited for real-time applications, exemplified in our demo video featuring live streaming and interactive gaming scenarios.\nFurthermore, we contribute a high-quality LiDAR-mocap synchronized dataset comprising 20 different subjects performing a range of motions, which can serve as a valuable resource for future research.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Yuting Ye (Reality Labs Research, Meta)
URL:https://asia.siggraph.org/2024/program/?id=papers_1174&sess=sess129
END:VEVENT
END:VCALENDAR