BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023313Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241206T111900
DTEND;TZID=Asia/Tokyo:20241206T113100
UID:siggraphasia_SIGGRAPH Asia 2024_sess143_papers_454@linklings.com
SUMMARY:Robust Dual Gaussian Splatting for Immersive Human-centric Volumet
 ric Videos
DESCRIPTION:Technical Papers\n\nYuheng Jiang, Zhehao Shen, Yu Hong, Chengc
 heng Guo, and Yize Wu (ShanghaiTech University); Yingliang Zhang (DGene In
 c.); and Jingyi Yu and Lan Xu (ShanghaiTech University)\n\nVolumetric vide
 o represents a transformative advancement in visual media, enabling users 
 to freely navigate immersive virtual experiences and narrowing the gap bet
 ween digital and real worlds. However, the need for extensive manual inter
 vention to stabilize mesh sequences and the generation of excessively larg
 e assets in existing workflows impede broader adoption.\nIn this paper, w
 e present a novel Gaussian-based approach, dubbed DualGS, for real-time an
 d high-fidelity playback of complex human performance with excellent compr
 ession ratios. Our key idea in DualGS is to separately represent motion an
 d appearance using the corresponding skin and joint Gaussians. Such an exp
 licit disentanglement can significantly reduce motion redundancy and enhan
 ce temporal coherence. We begin by initializing the DualGS and anchoring s
 kin Gaussians to joint Gaussians at the first frame. Subsequently, we empl
 oy a coarse-to-fine training strategy for frame-by-frame human performance
  modeling. It includes a coarse alignment phase for overall motion predict
 ion as well as a fine-grained optimization for robust tracking and high-fi
 delity rendering. To integrate volumetric video seamlessly into VR environ
 ments, we efficiently compress motion using entropy encoding and appearanc
 e using codec compression coupled with a persistent codebook. Our approach
  achieves a compression ratio of up to 120 times, only requiring approxima
 tely 350KB of storage per frame. We demonstrate the efficacy of our repres
 entation through photo-realistic, free-view experiences on VR headsets, en
 abling users to immersively watch musicians in performance and feel the rh
 ythm of the notes at the performers' fingertips.\n\nRegistration Category:
  Full Access, Full Access Supporter\n\nLanguage Format: English Language\n
 \nSession Chair: Iain Matthews (Epic Games, Carnegie Mellon University)
URL:https://asia.siggraph.org/2024/program/?id=papers_454&sess=sess143
END:VEVENT
END:VCALENDAR
