BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B5 (2)\, B Block\, Level 5
DTSTART;TZID=Asia/Tokyo:20241204T163000
DTEND;TZID=Asia/Tokyo:20241204T174000
UID:siggraphasia_SIGGRAPH Asia 2024_sess122@linklings.com
SUMMARY:Text\, Texturing\, and Stylization
DESCRIPTION:Technical Papers\n\nEach paper has a 10-minute presentation.
 \n\nTEXGen: a Generative Diffusion Model for Mesh Textures\n\nWhile high-q
 uality texture maps are essential for realistic 3D asset rendering, few st
 udies have explored learning directly in the texture space, especially on 
 large-scale datasets. In this work, we depart from the conventional approa
 ch of relying on pre-trained 2D diffusion models for test-time opt...\n\n\
 nXin Yu (University of Hong Kong); Ze Yuan (Beihang University); Yuan-Chen
  Guo (VAST); Ying-Tian Liu (Tsinghua University); Jianhui Liu (University 
 of Hong Kong); Yangguang Li, Yan-Pei Cao, and Ding Liang (VAST); and Xiaoj
 uan Qi (University of Hong Kong)\n---------------------\nStyleTex: Style I
 mage-Guided Texture Generation for 3D Models\n\nStyle-guided texture gener
 ation aims to generate a texture that is harmonious with both the style of
  the reference image and the geometry of the input mesh, given a reference
  style image and a 3D mesh with its text description.  \nAlthough diffusio
 n-based 3D texture generation methods, such as distil...\n\n\nZhiyu Xie, Y
 uqing Zhang, Xiangjun Tang, Yiqian Wu, and Dehan Chen (State Key Laborator
 y of CAD&CG, Zhejiang University); Gongsheng Li (Zhejiang University); and
  Xiaogang Jin (State Key Laboratory of CAD&CG, Zhejiang University)\n-----
 ----------------\nInstanceTex: Instance-level Controllable Texture Synthes
 is for 3D Scenes via Diffusion Priors\n\nAutomatically generating high-fid
 elity texture for a complex scene remains an open problem in computer grap
 hics. While pioneering text-to-texture works based on 2D diffusion models 
 have achieved fascinating results on single objects, they either suffer fr
 om style inconsistency and semantic misalignm...\n\n\nMingxin Yang (Shenzh
 en Institute of Advanced Technology, Chinese Academy of Sciences); Jianwei
  Guo (Institute of Automation, Chinese Academy of Sciences); Yuzhi Chen (S
 chool of Artificial Intelligence, University of Chinese Academy of Science
 s); Lan Chen (Institute of Automation, Chinese Academy of Sciences); Pu Li
  (Institute of Automation, Chinese Academy of Sciences); Zhanglin Cheng (S
 henzhen Institute of Advanced Technology, Chinese Academy of Sciences); Xi
 aopeng Zhang (Institute of Automation, Chinese Academy of Sciences); and H
 ui Huang (Shenzhen University (SZU))\n---------------------\nCompositional
  Neural Textures\n\nTexture plays a vital role in enhancing visual richnes
 s in both real photographs and computer-generated imagery. However, the pr
 ocess of editing textures often involves laborious and repetitive manual a
 djustments of textons, which are the recurring local patterns that charact
 erize textures. This wor...\n\n\nPeihan Tu (University of Maryland, Colleg
 e Park); Li-Yi Wei (Adobe Research); and Matthias Zwicker (University of M
 aryland, College Park)\n---------------------\nText-Guided Texturing by Sy
 nchronized Multi-View Diffusion\n\nThis paper introduces a novel approach 
 to synthesize texture to dress up a given 3D object, given a text prompt. 
 \nBased on the pretrained text-to-image (T2I) diffusion model, existing me
 thods usually employ a project-and-inpaint approach, in which a view of th
 e given object is first generated and wa...\n\n\nYuxin Liu and Minshan Xie
  (Chinese University of Hong Kong); Hanyuan Liu (City University of Hong K
 ong); and Tien-Tsin Wong (Monash University, Chinese University of Hong Ko
 ng)\n---------------------\nCamera Settings as Tokens: Modeling Photograph
 y on Latent Diffusion Models\n\nText-to-image models have revolutionized c
 ontent creation, enabling users to generate images from natural language p
 rompts. While recent advancements in conditioning these models offer more 
 control over the generated results, photography—a significant artistic dom
 ain—remains inadequately...\n\n\nI-Sheng Fang, Yue-Hua Han, and Jun-Cheng 
 Chen (Academia Sinica)\n\nRegistration Category: Full Access, Full Access 
 Supporter\n\nLanguage Format: English Language\n\nSession Chair: Minhyuk S
 ung (Korea Advanced Institute of Science and Technology (KAIST))
END:VEVENT
END:VCALENDAR
