BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241205T144500
DTEND;TZID=Asia/Tokyo:20241205T155500
UID:siggraphasia_SIGGRAPH Asia 2024_sess134@linklings.com
SUMMARY:Diffusing Your Videos
DESCRIPTION:Technical Papers\n\nEach paper receives a 10-minute presentation.
 \n\nLumiere: A Space-Time Diffusion Model for Video Generation\n\nWe intro
 duce Lumiere -- a text-to-video diffusion model designed for synthesizing 
 videos that portray realistic, diverse and coherent motion -- a pivotal ch
 allenge in video synthesis. To this end, we introduce a Space-Time U-Net a
 rchitecture that generates the entire temporal duration of the video a...\
 n\n\nOmer Bar-Tal (Google Research, Weizmann Institute of Science); Hila C
 hefer (Google Research, Tel Aviv University); Omer Tov, Charles Herrmann, 
 Roni Paiss, Shiran Zada, Ariel Ephrat, Junhwa Hur, Guanghui Liu, Amit Raj,
  Yuanzhen Li, and Michael Rubinstein (Google Research); Tomer Michaeli (Go
 ogle Research, Technion – Israel Institute of Technology); Oliver Wang and
  Deqing Sun (Google Research); Tali Dekel (Google Research, Weizmann Insti
 tute of Science); and Inbar Mosseri (Google Research)\n-------------------
 --\nTrailBlazer: Trajectory Control for Diffusion-Based Video Generation\n
 \nLarge text-to-video (T2V) models such as Sora have the potential to revo
 lutionize visual effects and the creation of some types of movies. Current
  T2V models require tedious trial-and-error experimentation to achieve des
 ired results, however. This motivates the search for methods to directly c
 ontrol...\n\n\nWan-Duo Kurt Ma (Victoria University of Wellington), J. P. 
 Lewis (NVIDIA Research), and W. Bastiaan Kleijn (Victoria University of We
 llington)\n---------------------\nI2VEdit: First-Frame-Guided Video Editin
 g via Image-to-Video Diffusion Models\n\nThe remarkable generative capabil
 ities of diffusion models have motivated extensive research in both image 
 and video editing. Compared to video editing which faces additional challe
 nges in the time dimension, image editing has witnessed the development of
  more diverse, high-quality approaches and mo...\n\n\nWenqi Ouyang (S-Lab 
 for Advanced Intelligence, Nanyang Technological University Singapore); Yi
  Dong (Nanyang Technological University (NTU)); Lei Yang and Jianlou Si (S
 enseTime); and Xingang Pan (S-Lab for Advanced Intelligence, Nanyang Techn
 ological University Singapore)\n---------------------\nStill-Moving: Custo
 mized Video Generation without Customized Video Data\n\nCustomizing text-t
 o-image (T2I) models has seen tremendous progress recently, particularly i
 n areas such as personalization, stylization, and conditional generation. 
 However, expanding this progress to video generation is still in its infan
 cy, primarily due to the lack of customized video data. \nIn ...\n\n\nHila
  Chefer (Google Research, Tel Aviv University); Shiran Zada, Roni Paiss, A
 riel Ephrat, Omer Tov, and Michael Rubinstein (Google Research); Lior Wolf
  (Tel Aviv University); Tali Dekel (Google Research, Weizmann Institute of
  Science); Tomer Michaeli (Google Research, Technion – Israel Institute of
  Technology); and Inbar Mosseri (Google Research)\n---------------------\n
 VidPanos: Generative Panoramic Videos from Casual Panning Videos\n\nStitch
 ing frames of a panning video into a panoramic photograph is a well-unders
 tood problem for stationary scenes. When objects are moving, however, a st
 ill panorama is not enough to capture the scene. \nWe present a method for
  synthesizing a panoramic video from a casually-captured panning video, a.
 ..\n\n\nJingwei Ma (University of Washington); Erika Lu, Roni Paiss, and S
 hiran Zada (Google DeepMind); Aleksander Holynski (University of Californi
 a Berkeley, Google DeepMind); Tali Dekel (Weizmann Institute of Science, G
 oogle DeepMind); Brian Curless (University of Washington, Google DeepMind)
 ; and Michael Rubinstein and Forrester Cole (Google DeepMind)\n-----------
 ----------\nFashion-VDM: Video Diffusion Model for Virtual Try-On\n\nWe pr
 esent Fashion-VDM, a video diffusion model (VDM) for generating virtual tr
 y-on videos. Given an input garment image and person video, our method aim
 s to generate a high-quality try-on video of the person wearing the given 
 garment, while preserving the person's identity and motion. Image-based v.
 ..\n\n\nJohanna Karras (Google Research, University of Washington); Yingwe
 i Li and Nan Liu (Google Research); Luyang Zhu (Google Research, Universit
 y of Washington); Innfarn Yoo, Andreas Lugmayr, and Chris Lee (Google Rese
 arch); and Ira Kemelmacher-Shlizerman (Google Research, University of Wash
 ington)\n\nRegistration Category: Full Access, Full Access Supporter\n\nLa
 nguage Format: English Language\n\nSession Chair: Nanxuan Zhao (Adobe Rese
 arch)
END:VEVENT
END:VCALENDAR
