BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241204T111900
DTEND;TZID=Asia/Tokyo:20241204T113100
UID:siggraphasia_SIGGRAPH Asia 2024_sess113_papers_1022@linklings.com
SUMMARY:Dynamic Gaussian Marbles for Novel View Synthesis of Casual Monocu
 lar Videos
DESCRIPTION:Technical Papers\n\nColton Stearns, Adam Harley, and Mikaela U
 y (Stanford University); Florian Dubost and Federico Tombari (Google Resea
 rch); and Gordon Wetzstein and Leonidas Guibas (Stanford University)\n\nGa
 ussian splatting has become a popular representation for novel-view synthe
 sis, exhibiting clear strengths in efficiency, photometric quality, and co
 mpositional editability. Following its success, many works have extended Gau
 ssians to 4D, showing that dynamic Gaussians maintain these benefits while
  also tracking scene geometry far better than alternative representations.
  Yet, these methods assume dense multi-view videos as supervision, constra
 ining their use to controlled capture settings. In this work, we are inter
 ested in extending the capability of Gaussian scene representations to cas
 ually captured monocular videos. We show that existing 4D Gaussian methods
  dramatically fail in this setup because the monocular setting is undercon
 strained. Building off this finding, we propose a method we call Dynamic G
 aussian Marbles, which consist of three core modifications that target the
  difficulties of the monocular setting. First, we use isotropic Gaussian "
 marbles", reducing the degrees of freedom of each Gaussian, and constraini
 ng the optimization to focus on motion and appearance over local shape. Se
 cond, we employ a hierarchical divide-and-conquer learning strategy to eff
 iciently guide the optimization towards solutions with globally coherent m
 otion. Finally, we add image-level and geometry-level priors into the opti
 mization, including a tracking loss that takes advantage of recent progres
 s in point tracking. By constraining the optimization in these ways, Dynam
 ic Gaussian Marbles learns Gaussian trajectories that enable novel-view re
 ndering and accurately capture the 3D motion of the scene elements. We eva
 luate on the (monocular) Nvidia Dynamic Scenes dataset and the Dycheck iPh
 one dataset, and show that Gaussian Marbles significantly outperforms othe
 r Gaussian baselines in quality, and is on par with non-Gaussian represent
 ations, all while maintaining the efficiency, compositionality, editabilit
 y, and tracking benefits of Gaussians.\n\nRegistration Category: Full Acce
 ss, Full Access Supporter\n\nLanguage Format: English Language\n\nSession 
 Chair: Forrester Cole (Google)
URL:https://asia.siggraph.org/2024/program/?id=papers_1022&sess=sess113
END:VEVENT
END:VCALENDAR
