BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241205T111900
DTEND;TZID=Asia/Tokyo:20241205T113100
UID:siggraphasia_SIGGRAPH Asia 2024_sess129_papers_231@linklings.com
SUMMARY:Look Ma\, no markers: holistic performance capture without the ha
 ssle
DESCRIPTION:Technical Papers\n\nCharlie Hewitt\, Fatemeh Saleh\, Sadeg
 h Aliakbarian\, Lohit Petikam\, Shideh Rezaeifar\, Louis Florent
 in\, Zafiirah Hosenie\, Thomas J. Cashman\, and Julien Valentin (Micr
 osoft); Darren Cosker (Microsoft\, University of Bath); and Tadas Bal
 trusaitis (Microsoft)\n\nWe tackle the problem of highly accurate\, h
 olistic performance capture for the face\, body and hands simultaneou
 sly. Motion-capture technologies used in film and game production typ
 ically focus only on face\, body or hand capture independently\, invo
 lve complex and expensive hardware\, and require a high degree of man
 ual intervention from skilled operators. While machine-learning-base
 d approaches exist to overcome these problems\, they usually suppor
 t only a single camera\, often operate on a single part of the bod
 y\, do not produce precise world-space results\, and rarely generaliz
 e outside specific contexts. In this work\, we introduce the first te
 chnique for marker-free\, high-quality reconstruction of the complet
 e human body\, including eyes and tongue\, without requiring any cali
 bration\, manual intervention or custom hardware. Our approach produc
 es stable world-space results from arbitrary camera rigs and suppor
 ts varied capture environments and clothing. We achieve this throug
 h a hybrid approach that leverages machine-learning models trained ex
 clusively on synthetic data and powerful parametric models of human s
 hape and motion. We evaluate our method on a number of body\, face an
 d hand reconstruction benchmarks and demonstrate state-of-the-art res
 ults that generalize across diverse datasets.\n\nRegistration Categor
 y: Full Access\, Full Access Supporter\n\nLanguage Format: English La
 nguage\n\nSession Chair: Yuting Ye (Reality Labs Research\, Meta)
URL:https://asia.siggraph.org/2024/program/?id=papers_231&sess=sess129
END:VEVENT
END:VCALENDAR
