BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070312Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231215T111500
DTEND;TZID=Australia/Melbourne:20231215T121300
UID:siggraphasia_SIGGRAPH Asia 2023_sess156@linklings.com
SUMMARY:Humans & Characters
DESCRIPTION:Technical Communications, Technical Papers\n\nStory-to-Motion: Synthesizing Infinite and Controllable Character Animation from Long Text\n\nWe introduce Story-to-Motion, a text-driven, controllable system that generates motion and trajectory from long text by combining a large language model, motion matching, and neural blending for diverse and realistic motion generation.\n\n\nZhongfei Qing, Zhongang Cai, Zhitao Yang, and Lei Yang (SenseTime Research)\n---------------------\nInteractive Story Visualization with Multiple Characters\n\nAccurate story visualization requires several elements, such as identity consistency across frames, alignment between plain text and visual content, and a reasonable layout of objects in images. Most previous works endeavor to meet these requirements by fitting a text-to-image (T2I) mo...\n\n\nYuan Gong (Tsinghua University); Youxin Pang (MAIS & NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences); Xiaodong Cun and Menghan Xia (Tencent); Yingqing He (Hong Kong University of Science and Technology); Haoxin Chen, Longyue Wang, Yong Zhang, Xintao Wang, and Ying Shan (Tencent); and Yujiu Yang (Tsinghua University)\n---------------------\nDecaf: Monocular Deformation Capture for Face and Hand Interactions\n\nExisting methods for 3D tracking from monocular RGB videos predominantly consider articulated and rigid objects (e.g., two hands or humans interacting with rigid environments). Modelling dense non-rigid object deformations in this setting (e.g., when hands are interacting with a face) remained larg...\n\n\nSoshi Shimada (Max-Planck-Institut für Informatik; Saarbrücken Research Center for Visual Computing, Interaction and Artificial Intelligence); Vladislav Golyanik (Max-Planck-Institut für Informatik); Patrick Pérez (Valeo); and Christian Theobalt (Max-Planck-Institut für Informatik; Saarbrücken Research Center for Visual Computing, Interaction and Artificial Intelligence)\n---------------------\nIntrinsic Harmonization for Illumination-Aware Image Compositing\n\nDespite significant advancements in network-based image harmonization techniques, there still exists a domain gap between training pairs and real-world composites encountered during inference. Most existing methods are trained to reverse global edits made on segmented image regions, which fail to ac...\n\n\nChris Careaga, S. Mahdi H. Miangoleh, and Yağız Aksoy (Simon Fraser University)\n---------------------\nEfficient Hybrid Zoom using Camera Fusion on Mobile Phones\n\nDSLR cameras can achieve multiple zoom levels by shifting lens distances or swapping lens types. However, these techniques are not possible on smartphone devices due to space constraints. Most smartphone manufacturers adopt a hybrid zoom system: commonly a Wide (W) camera at a low zoom level and a ...\n\n\nXiaotong Wu, Wei-Sheng Lai, and Yichang Shih (Google Inc.); Charles Herrmann and Michael Krainin (Google Research); Deqing Sun (Google); and Chia-Kai Liang (Google Inc.)\n\nRegistration Category: Full Access\n\nSession Chair: Sergi Pujades (National Institute for Research in Computer Science and Automation (INRIA), Université Grenoble Alpes)
END:VEVENT
END:VCALENDAR