BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023302Z
LOCATION:Hall C\, C Block\, Level 4
DTSTART;TZID=Asia/Tokyo:20241206T160000
DTEND;TZID=Asia/Tokyo:20241206T180000
UID:siggraphasia_SIGGRAPH Asia 2024_sess208@linklings.com
SUMMARY:Real-Time Live!
DESCRIPTION:Key Events, Real-Time Live!\n\nReal-Time Live! is an unmissabl
 e showcase of the latest in real-time graphics and interactive techniques.
  Held at SIGGRAPH Asia 2024, this event promises a captivating experience 
 as creators unveil groundbreaking live demos on stage. From interactive mu
 sic playgrounds to cutting-edge medical imaging, generative AI to rapid ro
 bot design, Real-Time Live! presents a dynamic mix of innovation.\n\nJoin 
 us in Tokyo, Japan for an exclusive glimpse into the heart of astonishing 
 creations and breathtaking results. Witness the year’s most innovative gra
 phics and interactive techniques presented live by their creators. It’s a 
 spectacle you won’t want to miss!\n\nDigital Salon: An AI and Physics-Driv
 en Tool for 3D Hair Grooming and Simulation\n\nWe introduce Digital Salon,
  a novel approach to 3D hair grooming and simulation by integrating advanc
 ed AI and physics-based algorithms. This tool enables users to create deta
 iled hairstyles through natural language descriptions, seamlessly blending
  text-driven hair generation, interactive editing, ...\n\n\nChengan He (Ya
 le University); Jorge Alejandro Amador Herrera (KAUST); Yi Zhou, Zhixin Sh
 u, and Xin Sun (Adobe Research); Yao Feng (Max Planck Institute for Intell
 igent Systems, ETH Zürich); Sören Pirk (Kiel University); Dominik L. Miche
 ls (KAUST); Meng Zhang (Nanjing University of Science and Technology); Yan
 gtuanfeng Wang (Adobe Research); and Holly Rushmeier (Yale University)\n--
 -------------------\nOnline Chat with Living Neuronal Cultures\n\nWe demon
 strate a combination of a neuroscience experiment and visual interaction
  over the internet. Activity of cultured neurons is obtained by a
  measurement instrument, transmitted to a WebGL visualization in real
  time, and broadcast online as light and sound effects. The system
  reveals the beauty of t...\
 n\n\nTeruki Mayama, Dai Akita, Wataru Kawakami, and Hirokazu Takahashi (Un
 iversity of Tokyo)\n---------------------\nTripo Doodle: The Next-Gen AI 3
 D Creative Tool\n\nCreating 3D digital content has been a tough challenge,
  especially when dealing with scenes packed with objects or characters per
 forming complex motions. With Tripo Doodle, we can now rapidly prototype e
 ntire scenes and fully animatable characters, with nothing more than simpl
 e doodles and text prom...\n\n\nSienna Hwang, Muqing Jia, Yan-Pei Cao, Yua
 n-Chen Guo, Yangguang Li, and Ding Liang (VAST)\n---------------------\nDe
 bate Generation System in Japanese Rap Battle Format\n\nWe propose a "Deba
 te System in Japanese Rap Battle Format." Our goal is to elevate discussio
 ns into entertainment, encouraging people of all generations to form their
  own opinions.\n\nThis system generates two distinct opinions (lyrics) whe
 n presented with a discussion topic, and based on these opinio...\n\n\nRyo
 ta Mibayashi (University of Hyogo); Toru Urakawa (Asahi Shimbun Company); 
 Dai Takanashi (Dentsu inc); Kanata Yamagishi and Tomoya Morohoshi (Dentsu)
 ; Ryuho Sekikawa and Yasuhiko Nishimura (Think & Craft, Dentsu Creative X)
 ; Yuta Takeuchi (Dentsu); Mina Shibasaki (Tokyo Metropolitan University); 
 Hideaki TAMORI (Asahi Shimbun Company); and Takehiro Yamamoto and Hiroaki 
 Ohshima (University of Hyogo)\n---------------------\nWhat’s New in Pixar’
 s Presto: ML Posing, Invertible Rigs, and Interactive Motion Blur\n\nThis 
 presentation features three distinct technologies that together will demon
 strate a unique animation experience to the SIGGRAPH Asia audience. Our
  ML Posing technology is the realization of the SIGGRAPH Asia 2023 paper
  “Po
 se and Skeleton-aware Neural IK for Pose and Motion Editing” ...\n\n\nPaul
  Kanyuk, Arnold Moon, Haldean Brown, and Matthias Goerner (Pixar Animation
  Studios)\n---------------------\nViewtify: Next-Generation Medical Image 
 Viewer with Stereoscopic Display\n\nViewtify® fully leverages a game engin
 e and GPU to generate and render high-quality 3DCG from CT and MRI images 
 in real time. By using stereoscopic displays, it presents 3D visuals with 
 depth information. Its high-speed processing also enables the real-time di
 splay of 4D CT and 4D MRI images as anim...\n\n\nHirofumi Seo (SCIEMENT, I
 nc.)\n---------------------\nPrototyping Game Character Animations with AI
  in Unity\n\nMost 3D games leverage animations to breathe life into charac
 ters. Even at the early stages of game development, prototyping with worki
 ng animations helps developers quickly understand and adapt the game desig
 n. Controllable characters require animations that are constrained in mult
 iple ways in orde...\n\n\nFlorent Bocquelet, Félix Harvey, and Pierre-Luc 
 Loyer (Unity Technologies)\n---------------------\nBeam: Interactive Music
  Effects Playground\n\nBEAM brings 3D graphics and interactivity to the wo
 rld of music-making. BEAM is a real-time audio effects plugin made by Luna
 cy Inc, which runs inside any digital audio workstation, like Ableton, Log
 ic, and Pro Tools. It lets music producers craft complex chains of audio e
 ffects by manipulating a ri...\n\n\nBrandon Montell and Casey Kolb (Lunacy
  Inc)\n---------------------\nReal-Time Virtual Try-On Using Generative AI
 \n\nWe introduce a novel real-time virtual try-on system powered by genera
 tive AI. Our demonstration highlights key features, including real-time vi
 rtual try-on, realistic wrinkle generation, and human-garment interaction.
  We showcase the system’s ability to produce highly plausible results acro
 ss...\n\n\nZaiqiang Wu and I-Chao Shen (University of Tokyo); Yuki Shibata
  (SoftBank); Takayuki Hori (SoftBank, Waseda University); Mengjia Jin and 
 Wataru Kubo (SoftBank); and Takeo Igarashi (University of Tokyo)\n--------
 -------------\nRobotSketch: A Real-Time Live Showcase of Superfast Design 
 of Legged Robots\n\nSoon, many robots equipped with AI capable of providin
 g valuable services will appear in people’s lives, much like the Cambrian 
 explosion, but for robots. However, until now, robots have been developed 
 primarily from a technical perspective, such as sensing, locomotion, and m
 anipulation, but t...\n\n\nJoon Hyub Lee, Hyunsik Oh, Junwoo Yoon, Seung-J
 un Lee, Taegyu Jin, Jemin Hwangbo, and Seok-Hyung Bae (Korea Advanced Inst
 itute of Science and Technology (KAIST))\n---------------------\nAuthentic
  Self XR: A live dancer interacting with 3D volumetric captures in XR\n\nT
 his live performance piece explores a dancer's interaction with her 3D cap
 tured virtual self. The arts-led work investigates notions of authenticity
  by combining multiple pre-captured 3D volumetric videos and live performa
 nce. Using an interactive XR rig and movement-generated sound, the dancer
  eventu
 all...\n\n\nJohn McGhee and Conan Bourke (University of New South Wales 3D
 XLab), Robert Lawther (3DXLab UNSW), Oliver Bown (University of New South 
 Wales), and Charlie Wrublewski (Charlie Wrublewski)\n\nRegistration Catego
 ry: Enhanced Access, Full Access, Full Access Supporter\n\nLanguage Format
 : English Language\n\nSession Chair: Takahito Tejima (Polyphony Digital In
 c.)
END:VEVENT
END:VCALENDAR
