BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B5 (1)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241204T111900
DTEND;TZID=Asia/Tokyo:20241204T113100
UID:siggraphasia_SIGGRAPH Asia 2024_sess112_papers_241@linklings.com
SUMMARY:PVP-Recon: Progressive View Planning via Warping Consistency for Sparse-View Surface Reconstruction
DESCRIPTION:Technical Papers\n\nSheng Ye, Yuze He, Matthieu Lin, Jenny Sheng, and Ruoyu Fan (Tsinghua University); Yiheng Han (Beijing University of Technology); Yubin Hu (Tsinghua University); Ran Yi (Shanghai Jiao Tong University); Yu-Hui Wen (Beijing Jiaotong University); Yong-Jin Liu (Tsinghua University); and Wenping Wang (Texas A&M University)\n\nNeural implicit representations have revolutionized dense multi-view surface reconstruction, yet their performance significantly diminishes with sparse input views. A few pioneering works have sought to tackle the challenge of sparse-view reconstruction by leveraging additional geometric priors or multi-scene generalizability. However, they are still hindered by the imperfect choice of input views, using images under empirically determined viewpoints to provide considerable overlap. We propose PVP-Recon, a novel and effective sparse-view surface reconstruction method that progressively plans the next best views to form an optimal set of sparse viewpoints for image capturing. PVP-Recon starts initial surface reconstruction with as few as 3 views and progressively adds new views, which are determined based on a novel warping score that reflects the information gain of each newly added view. This progressive view planning process is interleaved with a neural SDF-based reconstruction module that utilizes multi-resolution hash features, enhanced by a progressive training scheme and a directional Hessian loss. Quantitative and qualitative experiments on three benchmark datasets show that our framework achieves high-quality reconstruction with a constrained input budget and outperforms existing baselines.\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Michael Wimmer (TU Wien)
URL:https://asia.siggraph.org/2024/program/?id=papers_241&sess=sess112
END:VEVENT
END:VCALENDAR