BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070247Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231214T110000
DTEND;TZID=Australia/Melbourne:20231214T111000
UID:siggraphasia_SIGGRAPH Asia 2023_sess152_papers_639@linklings.com
SUMMARY:Anything to Glyph: Artistic Font Synthesis via Text-to-Image Diffusion Model
DESCRIPTION:Technical Papers\n\nChangShuo Wang, Lei Wu, XiaoLe Liu, and Xiang Li (Shandong University); Lei Meng (Shandong University, Shandong Research Institute of Industrial Technology); and Xiangxu Meng (Shandong University)\n\nThe automatic generation of artistic fonts is a challenging task that attracts much research interest. Previous methods focus specifically on glyph or texture style transfer. However, we often come across creative fonts composed of objects in posters or logos, and these fonts have proven to be a challenge for existing methods, which struggle to generate similar designs. This paper proposes a novel method for generating creative artistic fonts using a pre-trained text-to-image diffusion model. Our model takes a shape image and a prompt describing an object as input and generates an artistic glyph image composed of such objects. Specifically, we introduce a novel heatmap-based weak position constraint method to guide the positioning of objects in the generated image, and we also propose a Latent Space Semantic Augmentation Module that improves other semantic information while constraining object position. Our approach is unique in that it can preserve the object's original shape while constraining its position, and our training method requires only a small quantity of generated data, making it an efficient unsupervised learning approach. Experimental results demonstrate that our method can generate various glyphs, including Chinese, English, Japanese, and symbols, using different objects. We also conducted qualitative and quantitative comparisons with various position control methods for the diffusion model. The results indicate that our approach outperforms other methods in terms of visual quality, innovation, and user evaluation.\n\nRegistration Category: Full Access\n\nSession Chair: Haisen Zhao (Shandong University)
URL:https://asia.siggraph.org/2023/full-program?id=papers_639&sess=sess152
END:VEVENT
END:VCALENDAR