BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B5 (1)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241205T165300
DTEND;TZID=Asia/Tokyo:20241205T170500
UID:siggraphasia_SIGGRAPH Asia 2024_sess136_tog_103@linklings.com
SUMMARY:TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis
DESCRIPTION:Technical Papers\n\nHeming Zhu (Max Planck Institute for Informatics\, Saarland Informatics Campus)\; Fangneng Zhan (Max Planck Institute for Informatics)\; and Christian Theobalt and Marc Habermann (Max Planck Institute for Informatics\; Saarbrücken Research Center for Visual Computing\, Interaction and AI)\n\nCreating controllable\, photorealistic\, and geometrically detailed digital doubles of real humans solely from video data is a key challenge in Computer Graphics and Vision\, especially when real-time performance is required. Recent methods attach a neural radiance field (NeRF) to an articulated structure\, e.g.\, a body model or a skeleton\, to map points into a pose canonical space while conditioning the NeRF on the skeletal pose. These approaches typically parameterize the neural field with a multi-layer perceptron (MLP)\, leading to a slow runtime. To address this drawback\, we propose TriHuman\, a novel human-tailored\, deformable\, and efficient tri-plane representation\, which achieves real-time performance\, state-of-the-art pose-controllable geometry synthesis\, and photorealistic rendering quality. At its core\, we non-rigidly warp global ray samples into our undeformed tri-plane texture space\, which effectively addresses the problem of global points being mapped to the same tri-plane locations. We then show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes. Our results demonstrate a clear step towards higher-quality geometry and appearance modeling of humans\, as well as improved runtime performance.\n\nRegistration Category: Full Access\, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Manolis Savva (Simon Fraser University)
URL:https://asia.siggraph.org/2024/program/?id=tog_103&sess=sess136
END:VEVENT
END:VCALENDAR