BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070249Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231215T102500
DTEND;TZID=Australia/Melbourne:20231215T104000
UID:siggraphasia_SIGGRAPH Asia 2023_sess135_papers_248@linklings.com
SUMMARY:Text-Guided Synthesis of Eulerian Cinemagraphs
DESCRIPTION:Technical Papers, TOG\n\nAniruddha Mahapatra (Carnegie Mellon University); Aliaksandr Siarohin, Hsin-Ying Lee, and Sergey Tulyakov (Snap Inc.); and Jun-Yan Zhu (Carnegie Mellon University)\n\nWe introduce Text2Cinemagraph, a fully automated method for creating cinemagraphs from text descriptions - an especially challenging task when prompts feature imaginary elements and artistic styles, given the complexity of interpreting the semantics and motions of these images. We focus on cinemagraphs of fluid elements, such as flowing rivers and drifting clouds, which exhibit continuous motion and repetitive textures. Existing single-image animation methods fall short on artistic inputs, and recent text-based video methods frequently introduce temporal inconsistencies, struggling to keep certain regions static. To address these challenges, we propose the idea of synthesizing image twins from a single text prompt - a pair consisting of an artistic image and its pixel-aligned, natural-looking twin. While the artistic image depicts the style and appearance detailed in our text prompt, the realistic counterpart greatly simplifies layout and motion analysis. Leveraging existing natural image and video datasets, we can accurately segment the realistic image and predict plausible motion given the semantic information. The predicted motion can then be transferred to the artistic image to create the final cinemagraph. Our method outperforms existing approaches in creating cinemagraphs for natural landscapes as well as artistic and other-worldly scenes, as validated by automated metrics and user studies. Finally, we demonstrate two extensions: animating existing paintings and controlling motion directions using text.\n\nRegistration Category: Full Access\n\nSession Chair: Chongyang Ma (ByteDance)
URL:https://asia.siggraph.org/2023/full-program?id=papers_248&sess=sess135
END:VEVENT
END:VCALENDAR