BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070247Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T114000
DTEND;TZID=Australia/Melbourne:20231214T115000
UID:siggraphasia_SIGGRAPH Asia 2023_sess150_papers_216@linklings.com
SUMMARY:CLIPXPlore: Coupled CLIP and Shape Spaces for 3D Shape Exploration
DESCRIPTION:Technical Papers\n\nJingyu Hu, Ka-Hei Hui, and Zhengzhe Liu (The Chinese University of Hong Kong); Hao (Richard) Zhang (Simon Fraser University); and Chi-Wing Fu (The Chinese University of Hong Kong)\n\nThis paper presents CLIPXPlore, a new framework that leverages a vision-language model to guide the exploration of the 3D shape space. Many recent methods have been developed to encode 3D shapes into a learned latent shape space to enable generative design and modeling. Yet, existing methods lack effective exploration mechanisms, despite the rich information. To this end, we propose to leverage CLIP, a powerful pre-trained vision-language model, to aid the shape space exploration. Our idea is threefold. First, we couple the CLIP and shape spaces by generating paired CLIP and shape codes through sketch images and training a mapper network to connect the two spaces. Second, to explore the space around a given shape, we formulate a co-optimization strategy to search for the CLIP code that better matches the geometry of the shape. Third, we design three exploration scenarios, binary-attribute-guided, text-guided, and sketch-guided, to locate suitable exploration trajectories in shape space and induce meaningful changes to the shape. We perform a series of experiments to quantitatively and visually compare CLIPXPlore with different baselines in each of the three scenarios, showing that CLIPXPlore can produce many meaningful exploration results that cannot be achieved by the existing solutions.\n\nRegistration Category: Full Access\n\nSession Chair: Peng-Shuai Wang (Peking University)
URL:https://asia.siggraph.org/2023/full-program?id=papers_216&sess=sess150
END:VEVENT
END:VCALENDAR