BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023303Z
LOCATION:G510\, G Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241204T152000
DTEND;TZID=Asia/Tokyo:20241204T162000
UID:siggraphasia_SIGGRAPH Asia 2024_sess285@linklings.com
SUMMARY:Human Reconstruction & Modeling
DESCRIPTION:Technical Communications\n\nThe Technical Communications program at SIGGRAPH Asia serves as an invaluable platform for presenting cutting-edge work that may not neatly align with the Technical Papers session. Attendees can expect to explore fresh and thought-provoking ideas, glean practical insights from real-world production work, and discover innovative applications spanning various disciplines, from graphics and vision to AI and VR.\n\nDuring these sessions, leading experts from academia and industry will present their latest findings, offering a glimpse into cutting-edge research and development. From geometry and animation to virtual reality and machine learning, attendees can expect to explore a diverse array of topics at the intersection of graphics and other fields.\n\nUnder the overarching theme of Curious Minds, attendees can delve into discussions surrounding innovation, interdisciplinary discovery, and the role of education in shaping the future of technology. Whether you’re a seasoned researcher or a curious enthusiast, the Technical Communications program promises to offer insights that spark curiosity and inspire new perspectives.\n\nFacialX: A Robust Facial Expression Tracking System based on Multifaceted Expression Embedding\n\nFacialX is a facial tracking tool that generates high-quality animations from monocular video, capturing diverse expressions and angles. It outperforms FACEGOOD and Apple ARKit, making it well suited to the VFX industry.\n\n\nDa Eun Kim, Geon Kim, and Joonho Park (Giantstep) and Joo-Haeng Lee (Pebblous Inc.)\n---------------------\nEyelid Fold Consistency in Facial Modeling\n\nWe model diverse human eyelids, including hooded eyes and epicanthal folds, in a consistent unified topology. Models trained with this diverse data demonstrate improved accuracy and fairness on face-related tasks.\n\n\nLohit Petikam, Charlie Hewitt, Fatemeh Saleh, and Tadas Baltrusaitis (Microsoft)\n---------------------\nIntrinsic Morphological Relationship Guided 3D Craniofacial Reconstruction Using Siamese Cycle Attention GAN\n\nWe propose a novel approach for 3D craniofacial reconstruction using a Siamese cycle attention mechanism within Generative Adversarial Networks, enhancing high-frequency features and preserving the identity consistency of the reconstructed face.\n\n\nJunli Zhao and Chengyuan Wang (Qingdao University), Yu-Hui Wen (Beijing Jiaotong University), Fuqing Duan (Beijing Normal University), Ran Yi (Shanghai Jiao Tong University), Yong-Jin Liu (Tsinghua University), Qingdong Long and Zhenkuan Pan (Qingdao University), and Xianfeng Gu (Stony Brook University)\n---------------------\nA Theory of Stabilization by Skull Carving\n\nIntroducing the stable hull: the surface of the boolean intersection of stabilized head scans. Our skull carving algorithm simultaneously optimizes the stable hull shape and rigid transforms, outperforming existing methods.\n\n\nMathieu Lamarre, Patrick Anderson, and Étienne Danvoye (SEED, Electronic Arts)\n---------------------\nReal-time 3D Human Reconstruction and Rendering System from a Single RGB Camera\n\nWe present a real-time 3D human reconstruction and rendering system using a single RGB camera at 28+ FPS, operating on a standard USB webcam and a consumer-level GPU.\n\n\nYuanwang Yang and Qiao Feng (Tianjin University), Yu-Kun Lai (Cardiff University), and Kun Li (Tianjin University)\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Yuting Ye (Reality Labs Research, Meta)
END:VEVENT
END:VCALENDAR