BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070310Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T170000
DTEND;TZID=Australia/Melbourne:20231212T175000
UID:siggraphasia_SIGGRAPH Asia 2023_sess162@linklings.com
SUMMARY:View Synthesis
DESCRIPTION:Technical Papers\n\nSinMPI: Novel View Synthesis from a Single Image with Expanded Multiplane Images\n\nSingle-image novel view synthesis is a challenging and ongoing problem that aims to generate an infinite number of consistent views from a single input image. Although significant efforts have been made to advance the quality of generated novel views, less attention has been paid to the expansion of...\n\n\nGuo Pu, Peng-Shuai Wang, and Zhouhui Lian (Wangxuan Institute of Computer Technology, Peking University)\n---------------------\n
 VMesh: Hybrid Volume-Mesh Representation for Efficient View Synthesis\n\nWith the emergence of neural radiance fields (NeRFs), view synthesis quality has reached an unprecedented level. Compared to traditional mesh-based assets, this volumetric representation is more powerful in expressing scene geometry but inevitably suffers from high rendering costs and can hardly be ...\n\n\nYuan-Chen Guo (Tsinghua University, Tencent); Yan-Pei Cao (Tencent); Chen Wang (Tsinghua University); Yu He (Chinese Academy of Sciences); Ying Shan (Tencent); and Song-Hai Zhang (Tsinghua University)\n---------------------\n
 High-Fidelity and Real-Time Novel View Synthesis for Dynamic Scenes\n\nThis paper aims to tackle the challenge of dynamic view synthesis from multi-view videos. The key observation is that while previous grid-based methods offer consistent rendering, they fall short in capturing appearance details on a complex dynamic scene, a domain where multi-view image-based method...\n\n\nHaotong Lin (State Key Laboratory of CAD & CG, Zhejiang University); Sida Peng (Zhejiang University); and Zhen Xu, Tao Xie, Xingyi He, Hujun Bao, and Xiaowei Zhou (State Key Laboratory of CAD & CG, Zhejiang University)\n---------------------\n
 Inovis: Instant Novel-View Synthesis\n\nNovel-view synthesis is an ill-posed problem in that it requires inference of previously unseen information. Recently, reviving the traditional field of image-based rendering, neural methods proved particularly suitable for this interpolation/extrapolation task; however, they often require a-priori ...\n\n\nMathias Harrer and Linus Franke (Friedrich-Alexander-Universität Erlangen-Nürnberg); Laura Fink (Friedrich-Alexander-Universität Erlangen-Nürnberg, Fraunhofer IIS); and Marc Stamminger and Tim Weyrich (Friedrich-Alexander-Universität Erlangen-Nürnberg)\n---------------------\n
 Repurposing Diffusion Inpainters for Novel View Synthesis\n\nIn this paper, we present a method for generating consistent novel views from a single source image. Our approach focuses on maximizing the reuse of visible pixels from the source view.
 To achieve this, we use a monocular depth estimator that transfers visible pixels from the source view to the targ...\n\n\nYash Kant (University of Toronto, Snap Inc.); Aliaksandr Siarohin, Michael Vasilkovsky, Riza Alp Guler, Jian Ren, and Sergey Tulyakov (Snap Inc.); and Igor Gilitschenski (University of Toronto)\n\nRegistration Category: Full Access\n\nSession Chair: Binh-Son Hua (Trinity College Dublin)
END:VEVENT
END:VCALENDAR