BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070311Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T140000
DTEND;TZID=Australia/Melbourne:20231214T150000
UID:siggraphasia_SIGGRAPH Asia 2023_sess132@linklings.com
SUMMARY:Personalized Generative Models
DESCRIPTION:Technical Papers\n\nDomain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models\n\nText-to-image (T2I) personalization allows users to guide the creative image generation process by combining their own visual concepts in natural language prompts.\nRecently, encoder-based techniques have emerged as a new effective approach for T2I personalization, reducing the need for multiple ima...\n\n\nMoab Arar (Tel-Aviv University); Rinon Gal (Tel Aviv University, NVIDIA Research); Yuval Atzmon (NVIDIA Research); Gal Chechik (NVIDIA Research, Bar-Ilan University); Daniel Cohen-Or (Tel Aviv University); Ariel Shamir (Reichman University (IDC)); and Amit H. Bermano (Tel Aviv University)\n---------------------\nProSpect: Prompt Spectrum for Attribute-Aware Personalization of Diffusion Models\n\nPersonalizing generative models offers a way to guide image generation with user-provided references. Current personalization methods can invert an object or concept into the textual conditioning space and compose new natural sentences for text-to-image diffusion models. However, representing and ed...\n\n\nYuxin Zhang (MAIS, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences); Weiming Dong (MAIS, Institute of Automation, Chinese Academy of Sciences; School of AI, University of Chinese Academy of Sciences); Fan Tang (Institute of Computing Technology, Chinese Academy of Sciences); Nisha Huang (School of AI, University of Chinese Academy of Sciences; MAIS, Institute of Automation, Chinese Academy of Sciences); Haibin Huang and Chongyang Ma (Kuaishou Technology); Tong-Yee Lee (National Cheng-Kung University); Oliver Deussen (University of Konstanz); and Changsheng Xu (MAIS, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences)\n---------------------\nContent-based Search for Deep Generative Models\n\nThe growing proliferation of customized and pretrained generative models has made it infeasible for a user to be fully cognizant of every model in existence. To address this need, we introduce the task of content-based model search: given a query and a large set of generative models, finding the mod...\n\n\nDaohan Lu, Sheng-Yu Wang, Nupur Kumari, Rohan Agarwal, and Mia Tang (Carnegie Mellon University); David Bau (Northeastern University); and Jun-Yan Zhu (Carnegie Mellon University)\n---------------------\nA Neural Space-Time Representation for Text-to-Image Personalization\n\nA key aspect of text-to-image personalization methods is the manner in which the target concept is represented within the generative process. This choice greatly affects the visual fidelity, downstream editability, and disk space needed to store the learned concept. In this paper, we explore a new t...\n\n\nYuval Alaluf, Elad Richardson, Gal Metzer, and Daniel Cohen-Or (Tel Aviv University)\n---------------------\nMyStyle++: A Controllable Personalized Generative Prior\n\nIn this paper, we propose an approach to obtain a personalized generative prior with explicit control over a set of attributes. We build upon MyStyle, a recently introduced method, that tunes the weights of a pre-trained StyleGAN face generator on a few images of an individual. This system allows sy...\n\n\nLibing Zeng (Texas A&M University), Lele Chen and Yi Xu (OPPO US Research Center), and Nima Kalantari (Texas A&M University)\n\nRegistration Category: Full Access\n\nSession Chair: Jun-Yan Zhu (Carnegie Mellon University)
END:VEVENT
END:VCALENDAR