BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163707Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231215T101500
DTEND;TZID=Australia/Melbourne:20231215T111500
UID:siggraphasia_SIGGRAPH Asia 2023_sess135@linklings.com
SUMMARY:Text To Anything
DESCRIPTION:Text-Guided Synthesis of Eulerian Cinemagraphs\n\nWe introduce
  Text2Cinemagraph, a fully automated method for creating cinemagraphs
  from text descriptions - an especially challenging task when prompts
  feature imaginary elements and artistic styles, given the complexity of
  interpreting the semantics and motions of these images. We focus on
  cinemagr...\n\n\nAniruddha Mahapatra (Carnegie Mellon University);
  Aliaksandr Siarohin, Hsin-Ying Lee, and Sergey Tulyakov (Snap Inc.); and
  Jun-Yan Zhu (Carnegie Mellon
  University)\n---------------------\nBreak-A-Scene: Extracting Multiple
  Concepts from a Single Image\n\nText-to-image model personalization aims
  to introduce a user-provided concept to the model, allowing its
  synthesis in diverse contexts. However, current methods primarily focus
  on the case of learning a single concept from multiple images with
  variations in backgrounds and poses, and struggle when a...\n\n\nOmri
  Avrahami (The Hebrew University of Jerusalem), Kfir Aberman (Google
  Research), Ohad Fried (Reichman University), Daniel Cohen-Or (Tel Aviv
  University), and Dani Lischinski (The Hebrew University of
  Jerusalem)\n---------------------\nCLIP-Guided StyleGAN Inversion for
  Text-Driven Real Image Editing\n\nResearchers have recently begun
  exploring the use of StyleGAN-based models for real image editing. One
  particularly interesting application is using natural language
  descriptions to guide the editing process. Existing approaches for
  editing images using language either resort to instance-level
  laten...\n\n\nAbdul Basit Anees and Ahmet Canberk Baykal (Koç
  University), Duygu Ceylan (Adobe Research), Erkut Erdem (Hacettepe
  University), and Aykut Erdem and Deniz Yuret (Koç
  University)\n---------------------\nRerender A Video: Zero-Shot
  Text-Guided Video-to-Video Translation\n\nLarge text-to-image diffusion
  models have exhibited impressive proficiency in generating high-quality
  images. However, when applying these models to the video domain,
  ensuring temporal consistency across video frames remains a formidable
  challenge.\nThis paper proposes a novel zero-shot text-guided
  video...\n\n\nShuai Yang, Yifan Zhou, Ziwei Liu, and Chen Change Loy
  (Nanyang Technological University,
  Singapore)\n---------------------\nFace0: Instantaneously Conditioning a
  Text-to-Image Model on a Face\n\nWe present Face0, a novel way to
  instantaneously condition a text-to-image generation model on a face, in
  sample time, without any optimization procedures such as fine-tuning or
  inversions. We augment a dataset of annotated images with embeddings of
  the included faces and train an image generation m...\n\n\nDani
  Valevski, Danny Lumen, Yossi Matias, and Yaniv Leviathan (Google
  Research)\n\nRegistration Category: Full Access\n\nSession Chair:
  Chongyang Ma (ByteDance)
END:VEVENT
END:VCALENDAR