BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241205T104500
DTEND;TZID=Asia/Tokyo:20241205T115500
UID:siggraphasia_SIGGRAPH Asia 2024_sess129@linklings.com
SUMMARY:Capture Me If You Can
DESCRIPTION:Technical Papers\n\nEach Paper gives a 10 minute presentation.
 \n\nEgoHDM: An Online Egocentric-Inertial Human Motion Capture, Localizati
 on, and Dense Mapping System\n\nWe present EgoHDM, an online egocentric-in
 ertial human motion capture (mocap), localization, and dense mapping syste
 m. Our system uses 6 inertial measurement units (IMUs) and a commodity hea
 d-mounted RGB camera. EgoHDM is the first human mocap system that offers d
 ense scene mapping in near real-time...\n\n\nHandi Yin and Bonan Liu (Hong
  Kong University of Science and Technology, Guangzhou); Manuel Kaufmann (E
 TH Zürich); Jinhao He (Hong Kong University of Science and Technology, Gua
 ngzhou); Sammy Christen (ETH Zürich); and Jie Song and Pan Hui (Hong Kong 
 University of Science and Technology, Guangzhou; Hong Kong University of S
 cience and Technology)\n---------------------\nFürElise: Capturing and Phy
 sically Synthesizing Hand Motion of Piano Performance\n\nPiano playing req
 uires agile, precise, and coordinated hand control that stretches the limi
 ts of dexterity. Hand motion models with the sophistication to accurately 
 recreate piano playing have a wide range of applications in character anim
 ation, embodied AI, biomechanics, and VR/AR. In this paper, w...\n\n\nRuoc
 heng Wang, Pei Xu, Haochen Shi, Elizabeth Schumann, and C. Karen Liu (Stan
 ford University)\n---------------------\nELMO: Enhanced Real-time LiDAR Mo
 tion Capture through Upsampling\n\nThis paper introduces ELMO, a real-time
  upsampling motion capture framework designed for a single LiDAR sensor. M
 odeled as a conditional autoregressive transformer-based upsampling motion
  generator, ELMO achieves 60 fps motion capture from a 20 fps LiDAR point 
 cloud sequence. The key feature of ELMO...\n\n\nDeok-Kyeong Jang (MOVIN In
 c.); Dongseok Yang (MOVIN Inc., KAIST); Deok-Yun Jang (MOVIN Inc., GIST); 
 Byeoli Choi (MOVIN Inc., KAIST); Donghoon Shin (MOVIN Inc.); and Sung-Hee 
 Lee (KAIST)\n---------------------\nLook Ma, no markers: holistic performa
 nce capture without the hassle\n\nWe tackle the problem of highly-accurate
 , holistic performance capture for the face, body and hands simultaneously
 . Motion-capture technologies used in film and game production typically f
 ocus only on face, body or hand capture independently, involve complex and
  expensive hardware and a high degree ...\n\n\nCharlie Hewitt, Fatemeh Sal
 eh, Sadegh Aliakbarian, Lohit Petikam, Shideh Rezaeifar, Louis Florentin, 
 Zafiirah Hosenie, Thomas J. Cashman, and Julien Valentin (Microsoft); Darr
 en Cosker (Microsoft, University of Bath); and Tadas Baltrusaitis (Microso
 ft)\n---------------------\nMillimetric Human Surface Capture in Minutes\n
 \nDetailed human surface capture from multiple images is an essential comp
 onent for many 3D production, analysis and transmission tasks. Yet produci
 ng millimetric precision 3D models in practical time, and actually verifyi
 ng their 3D accuracy in a real-world capture context, remain key challenge
 s due ...\n\n\nBriac Toussaint and Laurence Boissieux (Centre Inria de l’U
 niversité Grenoble Alpes); Diego Thomas (Kyushu University); Edmond Boyer 
 (Meta Reality Labs Research); and Jean-Sébastien Franco (LJK, CNRS, Grenob
 le INP, Université Grenoble Alpes; Centre Inria de l’Université Grenoble A
 lpes)\n---------------------\nRoMo: A Robust Solver for Full-body Unlabele
 d Optical Motion Capture\n\nOptical motion capture (MoCap) is the "gold st
 andard" for accurately capturing full-body motions. To make use of raw MoC
 ap point data, the system labels the points with corresponding body part l
 ocations and solves the full-body motions. However, MoCap data often conta
 ins mislabeling, occlusion and p...\n\n\nXiaoyu Pan and Bowen Zheng (State
  Key Laboratory of CAD&CG, Zhejiang University); Xinwei Jiang, Zijiao Zeng
 , and Qilong Kou (Tencent Games Digital Content Technology Center); He Wan
 g (Department of Computer Science and UCL Centre for Artificial Intelligen
 ce, University College London); and Xiaogang Jin (State Key Laboratory of 
 CAD&CG, Zhejiang University)\n\nRegistration Category: Full Access, Full A
 ccess Supporter\n\nLanguage Format: English Language\n\nSession Chair: Yut
 ing Ye (Reality Labs Research, Meta)
END:VEVENT
END:VCALENDAR
