BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163715Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T103500
DTEND;TZID=Australia/Melbourne:20231214T105000
UID:siggraphasia_SIGGRAPH Asia 2023_sess152_papers_452@linklings.com
SUMMARY:IconShop: Text-Guided Vector Icon Synthesis with Autoregressive Tr
 ansformers
DESCRIPTION:Ronghuan Wu, Wanchao Su, Kede Ma, and Jing Liao (City Universi
 ty of Hong Kong)\n\nScalable Vector Graphics (SVG) is a popular vector ima
 ge format that offers good support for interactivity and animation. Despit
 e its appealing characteristics, creating custom SVG content can be challe
 nging for users due to the steep learning curve required to understand SVG
  grammars or get familiar with professional editing software. Recent advan
 cements in text-to-image generation have inspired researchers to explore v
 ector graphics synthesis using either image-based methods (i.e., text → ra
 ster image → vector graphics) combining text-to-image generation models wi
 th image vectorization, or language-based methods (i.e., text → vector gra
 phics script) through pretrained large language models. Nevertheless, thes
 e methods suffer from limitations in terms of generation quality, diversit
 y, and flexibility.\nIn this paper, we introduce IconShop, a text-guided v
 ector icon synthesis method using autoregressive transformers. The key to 
 the success of our approach is to sequentialize and tokenize SVG paths (and
  textual descriptions as guidance) into a uniquely decodable token sequence.
  With that, we are able to exploit the sequence learning power of autoregre
 ssive transformers, while enabling both unconditional and text-conditioned
  icon synthesis. Through standard training to predict the next token on a 
 large-scale vector icon dataset accompanied by textual descriptions, the
  proposed IconShop consistently exhibits better icon synthesis capability t
 han existing image-based and language-based methods both quantitatively (u
 sing the FID and CLIP scores) and qualitatively (through formal subjective
  user studies). Meanwhile, we observe a dramatic improvement in generation
  diversity, which is validated by the objective Uniqueness and Novelty mea
 sures. More importantly, we demonstrate the flexibility of IconShop with m
 ultiple novel icon synthesis tasks, including icon editing, icon interpola
 tion, icon semantic combination, and icon design auto-suggestion.\n\nRegis
 tration Category: Full Access\n\nSession Chair: Haisen Zhao (Shandong Univ
 ersity)\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_452&sess=sess152
END:VEVENT
END:VCALENDAR
