BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070312Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231215T101500
DTEND;TZID=Australia/Melbourne:20231215T111500
UID:siggraphasia_SIGGRAPH Asia 2023_sess135@linklings.com
SUMMARY:Text To Anything
DESCRIPTION:Technical Papers\, TOG\n\nCLIP-Guided StyleGAN Inversion
  for Text-Driven Real Image Editing\n\nResearchers have recently
  begun exploring the use of StyleGAN-based models for real image
  editing. One particularly interesting application is using natural
  language descriptions to guide the editing process. Existing
  approaches for editing images using language either resort to
  instance-level laten...\n\n\nAbdul Basit Anees and Ahmet Canberk
  Baykal (Koç University)\, Duygu Ceylan (Adobe Research)\, Erkut
  Erdem (Hacettepe University)\, and Aykut Erdem and Deniz Yuret (Koç
  University)\n---------------------\nRerender A Video: Zero-Shot
  Text-Guided Video-to-Video Translation\n\nLarge text-to-image
  diffusion models have exhibited impressive proficiency in
  generating high-quality images. However\, when applying these
  models to the video domain\, ensuring temporal consistency across
  video frames remains a formidable challenge.\nThis paper proposes a
  novel zero-shot text-guided video...\n\n\nShuai Yang\, Yifan Zhou\,
  Ziwei Liu\, and Chen Change Loy (Nanyang Technological University\,
  Singapore)\n---------------------\nText-Guided Synthesis of
  Eulerian Cinemagraphs\n\nWe introduce Text2Cinemagraph\, a fully
  automated method for creating cinemagraphs from text descriptions -
  an especially challenging task when prompts feature imaginary
  elements and artistic styles\, given the complexity of interpreting
  the semantics and motions of these images. We focus on
  cinemagr...\n\n\nAniruddha Mahapatra (Carnegie Mellon University)\;
  Aliaksandr Siarohin\, Hsin-Ying Lee\, and Sergey Tulyakov (Snap
  Inc.)\; and Jun-Yan Zhu (Carnegie Mellon
  University)\n---------------------\nFace0: Instantaneously
  Conditioning a Text-to-Image Model on a Face\n\nWe present Face0\,
  a novel way to instantaneously condition a text-to-image generation
  model on a face\, in sample time\, without any optimization
  procedures such as fine-tuning or inversions. We augment a dataset
  of annotated images with embeddings of the included faces and train
  an image generation m...\n\n\nDani Valevski\, Danny Lumen\, Yossi
  Matias\, and Yaniv Leviathan (Google
  Research)\n---------------------\nBreak-A-Scene: Extracting
  Multiple Concepts from a Single Image\n\nText-to-image model
  personalization aims to introduce a user-provided concept to the
  model\, allowing its synthesis in diverse contexts. However\,
  current methods primarily focus on the case of learning a single
  concept from multiple images with variations in backgrounds and
  poses\, and struggle when a...\n\n\nOmri Avrahami (The Hebrew
  University of Jerusalem)\, Kfir Aberman (Google Research)\, Ohad
  Fried (Reichman University)\, Daniel Cohen-Or (Tel Aviv
  University)\, and Dani Lischinski (The Hebrew University of
  Jerusalem)\n\nRegistration Category: Full Access\n\nSession Chair:
  Chongyang Ma (ByteDance)
END:VEVENT
END:VCALENDAR
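
A minimal sketch of reading this event programmatically, assuming the
third-party Python package icalendar is installed (pip install
icalendar); the file name session.ics is hypothetical:

    from icalendar import Calendar

    # Parse the .ics stream; from_ical() unfolds continuation lines
    # and unescapes \, and \n sequences in TEXT values.
    with open("session.ics", "rb") as f:
        cal = Calendar.from_ical(f.read())

    for event in cal.walk("VEVENT"):
        summary = str(event.get("SUMMARY"))
        location = str(event.get("LOCATION"))
        # decoded() resolves the TZID parameter, so both datetimes
        # are timezone-aware (Australia/Melbourne).
        start = event.decoded("DTSTART")
        end = event.decoded("DTEND")
        print(f"{summary} @ {location}")
        print(f"{start:%Y-%m-%d %H:%M %Z} - {end:%H:%M %Z}")

Because 2023-12-15 falls inside the daylight-saving window defined by
the VTIMEZONE rules, the times print in AEDT (UTC+11): 10:15-11:15.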