BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023312Z
LOCATION:Hall B7 (1)\, B Block\, Level 7
DTSTART;TZID=Asia/Tokyo:20241205T163000
DTEND;TZID=Asia/Tokyo:20241205T174000
UID:siggraphasia_SIGGRAPH Asia 2024_sess138@linklings.com
SUMMARY:Talking Heads and Moving Faces
DESCRIPTION:Technical Papers\n\nEach paper is given a 10-minute presentation.\n\nPersonaTalk: Bring Attention to Your Persona in Visual Dubbing\n\nFor audio-driven visual dubbing, it remains a considerable challenge to uphold and highlight the speaker's persona while synthesizing accurate lip synchronization. Existing methods fall short of capturing the speaker's unique speaking style or preserving facial details. In this paper, we present PersonaTalk ...\n\n\nLonghao Zhang, Shuang Liang, Zhipeng Ge, and Tianshu Hu (Bytedance)\n---------------------\nTALK-Act: Enhance Textural-Awareness for 2D Speaking Avatar Reenactment with Diffusion Model\n\nRecently, 2D speaking avatars have increasingly participated in everyday scenarios due to the fast development of facial animation techniques. However, most existing works neglect the explicit control of human bodies. In this paper, we propose to drive not only the faces but also the torso and gestu...\n\n\nJiazhi Guan (Tsinghua University); Quanwei Yang (University of Science and Technology of China); Kaisiyuan Wang, Hang Zhou, Shengyi He, Zhiliang Xu, Haocheng Feng, Errui Ding, and Jingdong Wang (Baidu); Hongtao Xie (University of Science and Technology of China); Youjian Zhao (Tsinghua University); and Ziwei Liu (Nanyang Technological University (NTU))\n---------------------\nTextToon: Real-Time Text Toonify Head Avatar from Single Video\n\nWe propose TextToon, a method to generate a drivable toonified avatar. Given a short monocular video sequence and a written instruction about the avatar style, our model can generate a high-fidelity toonified avatar that can be driven in real time by another video with arbitrary identities. Existing...\n\n\nLuchuan Song and Lele Chen (University of Rochester), Celong Liu (Bytedance), Pinxin Liu (University of Rochester), and Chenliang Xu (University of Rochester)\n---------------------\nFollow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation\n\nWe present Follow-Your-Emoji, a diffusion-based framework for portrait animation, which animates a reference portrait with target landmark sequences. The main challenge of portrait animation is to preserve the identity of the reference portrait and transfer the target expression to this portrait whi...\n\n\nYue Ma and Hongyu Liu (Hong Kong University of Science and Technology); Hongfa Wang and Heng Pan (Tencent); Yingqing He (Hong Kong University of Science and Technology); Junkun Yuan, Ailing Zeng, and Chengfei Cai (Tencent); Heung-Yeung Shum (Tsinghua University); Wei Liu (Tencent); and Qifeng Chen (Hong Kong University of Science and Technology)\n---------------------\nVOODOO XP: Expressive One-Shot Head Reenactment for VR Telepresence\n\nWe introduce VOODOO XP: a 3D-aware one-shot head reenactment method that can generate highly expressive facial expressions from any input driver video and a single 2D portrait. Our solution is real-time, view-consistent, and can be instantly used without calibration or fine-tuning. We demonstrate ou...\n\n\nPhong Tran (MBZUAI); Egor Zakharov (ETH Zurich); Long-Nhat Ho, Adilbek Karmanov, and Ariana Bermudez Venegas (MBZUAI); McLean Goldwhite, Aviral Agarwal, and Liwen Hu (Pinscreen); Anh Tran (VinAI Research); and Hao Li (MBZUAI, Pinscreen)\n---------------------\nFabrig: A Cloth-Simulated Transferable 3D Face Parameterization\n\nExisting 3D face parameterization methods are limited to human faces and/or require a large amount of manual work to prepare face-specific blendshapes. Unfortunately, many of the automated parameterization methods do not provide local controls for the different facial regions and methods that allow ...\n\n\nChangAn Zhu and Chris Joslin (Carleton University)\n\nRegistration Category: Full Access, Full Access Supporter\n\nLanguage Format: English Language\n\nSession Chair: Hongbo Fu (Hong Kong University of Science and Technology)
END:VEVENT
END:VCALENDAR