BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023311Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241203T144500
DTEND;TZID=Asia/Tokyo:20241203T155500
UID:siggraphasia_SIGGRAPH Asia 2024_sess108@linklings.com
SUMMARY:Design It All: Font\, Paint\, and Colors
DESCRIPTION:Technical Papers\n\nEach paper gives a 10-minute presentation.\n\nProcessPainter: Learning to draw from sequence data\n\nThe painting process of artists is inherently stepwise and varies significantly among different painters and styles. Generating detailed, step-by-step painting processes is essential for art education and research, yet remains largely underexplored. Traditional stroke-based rendering methods break d...\n\n\nYiren Song (National University of Singapore, Show Lab); Shijie Huang, Chen Yao, and Hai Ci (National University of Singapore); Xiaojun Ye (Zhejiang University); Jiaming Liu (Tiamat); Yuxuan Zhang (Shanghai Jiao Tong University); and Mike Zheng Shou (National University of Singapore)\n---------------------\nHFH-Font: Few-shot Chinese Font Synthesis with Higher Quality, Faster Speed, and Higher Resolution\n\nThe challenge of automatically synthesizing high-quality vector fonts, particularly for writing systems (e.g., Chinese) consisting of huge amounts of complex glyphs, remains unsolved. Existing font synthesis techniques fall into two categories: 1) methods that directly generate vector glyphs, and 2)...\n\n\nHua Li (Wangxuan Institute of Computer Technology, Peking University) and Zhouhui Lian (Wangxuan Institute of Computer Technology, Peking University; State Key Laboratory of General Artificial Intelligence, Peking University)\n---------------------\nInverse Painting: Reconstructing The Painting Process\n\nGiven an input painting, we reconstruct a time-lapse video of how it may be painted. We formulate this as an autoregressive image generation problem, in which an initially blank "canvas" is iteratively updated. The model learns from real artists by training on many painting videos.\nOur approach in...\n\n\nBowei Chen, Yifan Wang, Brian Curless, Ira Kemelmacher-Shlizerman, and Steven M. Seitz (University of Washington)\n---------------------\nColorful Diffuse Intrinsic Image Decomposition in the Wild\n\nIntrinsic image decomposition aims to separate the surface reflectance and the effects from the illumination given a single photograph. Due to the complexity of the problem, most prior works assume a single-color illumination and a Lambertian world, which limits their use in illumination-aware image...\n\n\nChris Careaga and Yağız Aksoy (Simon Fraser University)\n---------------------\nLVCD: Reference-based Lineart Video Colorization with Diffusion Models\n\nWe propose the first video diffusion framework for reference-based lineart video colorization. Unlike previous works that rely solely on image generative models to colorize lineart frame by frame, our approach leverages a large-scale pretrained video diffusion model to generate colorized animation v...\n\n\nZhitong Huang (City University of Hong Kong); Mohan Zhang (WeChat, Tencent Inc.); and Jing Liao (City University of Hong Kong)\n---------------------\nSD-𝜋XL: Generating Low-Resolution Quantized Imagery via Score Distillation\n\nLow-resolution quantized imagery, such as pixel art, is seeing a revival in modern applications ranging from video game graphics to digital design and fabrication, where creativity is often bound by a limited palette of elemental units. Despite their growing popularity, the automated generation of q...\n\n\nAlexandre Binninger and Olga Sorkine-Hornung (ETH Zürich)\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: I-Chao Shen (The University of Tokyo)
END:VEVENT
END:VCALENDAR