BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070242Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_248@linklings.com
SUMMARY:Text-Guided Synthesis of Eulerian Cinemagraphs
DESCRIPTION:Technical Papers\n\nAniruddha Mahapatra (Carnegie Mellon University); Aliaksandr Siarohin, Hsin-Ying Lee, and Sergey Tulyakov (Snap Inc.); and Jun-Yan Zhu (Carnegie Mellon University)\n\nWe introduce Text2Cinemagraph, a fully automated method for creating cinemagraphs from text descriptions - an especially challenging task when prompts feature imaginary elements and artistic styles, given the complexity of interpreting the semantics and motions of these images. We focus on cinemagraphs of fluid elements, such as flowing rivers and drifting clouds, which exhibit continuous motion and repetitive textures. Existing single-image animation methods fall short on artistic inputs, and recent text-based video methods frequently introduce temporal inconsistencies, struggling to keep certain regions static. To address these challenges, we propose the idea of synthesizing image twins from a single text prompt - a pair comprising an artistic image and its pixel-aligned, natural-looking counterpart. While the artistic image depicts the style and appearance detailed in our text prompt, the realistic counterpart greatly simplifies layout and motion analysis. Leveraging existing natural image and video datasets, we can accurately segment the realistic image and predict plausible motion given the semantic information. The predicted motion can then be transferred to the artistic image to create the final cinemagraph. Our method outperforms existing approaches in creating cinemagraphs for natural landscapes as well as artistic and otherworldly scenes, as validated by automated metrics and user studies. Finally, we demonstrate two extensions: animating existing paintings and controlling motion directions using text.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_248&sess=sess209
END:VEVENT
END:VCALENDAR