BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163648Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T114000
DTEND;TZID=Australia/Melbourne:20231214T115000
UID:siggraphasia_SIGGRAPH Asia 2023_sess150_papers_216@linklings.com
SUMMARY:CLIPXPlore: Coupled CLIP and Shape Spaces for 3D Shape Exploration
DESCRIPTION:Jingyu Hu, Ka-Hei Hui, and Zhengzhe Liu (The Chinese Universit
 y of Hong Kong); Hao (Richard) Zhang (Simon Fraser University); and Chi-Wi
 ng Fu (The Chinese University of Hong Kong)\n\nThis paper presents CLIPXPl
 ore, a new framework that leverages a vision-language model to guide the e
 xploration of the 3D shape space. Many recent methods have been developed 
 to encode 3D shapes into a learned latent shape space to enable generative
  design and modeling. Yet, existing methods lack effective exploration mec
 hanisms, despite the rich information. To this end, we propose to leverage
  CLIP, a powerful pre-trained vision-language model, to aid the shape spac
 e exploration. Our idea is threefold. First, we couple the CLIP and shape 
 spaces by generating paired CLIP and shape codes through sketch images and
  training a mapper network to connect the two spaces. Second, to explore t
 he space around a given shape, we formulate a co-optimization strategy to 
 search for the CLIP code that better matches the geometry of the shape. T
 hird, we design three exploration scenarios, binary-attribute-guided, text
 -guided, and sketch-guided, to locate suitable exploration trajectories in
  shape space and induce meaningful changes to the shape. We perform a seri
 es of experiments to quantitatively and visually compare CLIPXPlore with 
 different baselines in each of the three scenarios, showing that CLIPXPlor
 e can produce many meaningful exploration results that cannot be achieved 
 by the existing solutions.\n\nRegistration Category: Full Access\n\nSessio
 n Chair: Peng-Shuai Wang (Peking University)\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_216&sess=sess150
END:VEVENT
END:VCALENDAR
