BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023305Z
LOCATION:Lobby Gallery (1) & (2)\, G Block\, Level B1
DTSTART;TZID=Asia/Tokyo:20241205T090000
DTEND;TZID=Asia/Tokyo:20241205T180000
UID:siggraphasia_SIGGRAPH Asia 2024_sess197@linklings.com
SUMMARY:Posters Gallery
DESCRIPTION:Poster\n\nAt SIGGRAPH Asia 2024, delve into the world of compu
 ter graphics through the captivating Posters Program. This interactive for
 um offers a glimpse into the latest ideas and practical contributions from
  animators, developers, educators, and researchers worldwide.\nEach poster
  displayed provides valuable insights into various aspects of computer gra
 phics, from animation and rendering to machine learning and virtual realit
 y. With a focus on innovation and discovery, attendees can expect to encou
 nter a diverse range of topics that showcase the evolving landscape of CG.
 \n\nUnder the theme of Curious Minds, this year’s program invites explorat
 ion and encourages attendees to engage with emerging trends and technologi
 es. Whether you’re seeking behind-the-scenes views of new commercial work 
 or exploring solutions to challenging problems, the Posters Program offers
  something for everyone.\n\nJoin us in Tokyo for SIGGRAPH Asia 2024 and wi
 tness the latest advancements in computer graphics at the Posters Program.
 \n\nThese ideas are put together into simple, visually attractive posters 
 showcased at SIGGRAPH Asia. Posters’ authors will also be present to expla
 in their findings, discuss their work, receive feedback and network with a
 ll attendees.\n\nA Sentient Space Using Light Sensing with Particle Life\n
 \nA light-based sensing system enabling dynamic spatial interaction throug
 h particle grid projections. The system captures environmental changes, al
 lowing real-time light adjustments, enhancing interactivity and responsive
 ness within the space.\n\n\nPan-Pan Shiung and June-Hao Hou (Graduate Inst
 itute of Architecture, National Yang Ming Chiao Tung University)\n--------
 -------------\nHeterogeneous Architecture for Asynchronous Seamless Image 
 Stitching\n\nThis work proposes a seamless image stitching method on a het
 erogeneous CPU-GPU system that achieves 60fps at 4K resolution without gho
 sting in real-time embedded environments.\n\n\nHyerin Cho, Jin-Woo Kim, Ji
 nhong Park, and Jeong-Ho Woo (VisioNexT)\n---------------------\nFlying Yo
 ur Imagination: Integrating AI in VR for Kite Heritage\n\n"Flying Your Im
 agination" is a project that integrates AI in VR for Kite Heritage; it inv
 estigates the innovative integration of VR, AI technology, and embodied in
 teraction design.\n\n\nKexin Nie (University of Sydney) and Mengyao Guo (S
 henzhen International School of Design, Harbin Institute of Technology; Un
 iversity of Macau)\n---------------------\nA Relighting Method for Single 
 Terrain Image based on Two-stage Albedo Estimation Model\n\nThis paper pro
 poses a method for relighting a single terrain image to match user-specifi
 ed times of day or weather conditions by estimating albedo and depth using
  deep learning.\n\n\nShun Tatsukawa (Hosei University) and Syuhei Sato (Ho
 sei University, Prometech CG Research)\n---------------------\nRethinking 
 motion keyframe extraction: a greedy procedural approach using a neural c
 ontrol rig\n\nCurrent keyframe extraction methods are unsuitable for 3D an
 imators. Our novel approach, closer to their workflows, uses a neural cont
 rol-rig and an algorithm to optimize keyframe placement on MoCap animation
 s.\n\n\nThéo Cheynel (Centre National de la Recherche Scientifique - Labor
 atoire d'informatique de l'École Polytechnique (LIX), Kinetix); Omar El Kh
 alifi and Baptiste Bellot-Gurlet (Kinetix); and Damien Rohmer and Marie-Pa
 ule Cani (Centre National de la Recherche Scientifique - Laboratoire d'inf
 ormatique de l'École Polytechnique (LIX))\n---------------------\nOverallN
 et: Scale-Arbitrary Lightweight SR Model for handling 360° Panoramic Image
 s\n\nWe propose a lightweight, scale-independent SR model incorporating va
 rious techniques named OverallNet. Further, we incorporate quantization to
  maximize efficiency during user inference, making it well-suited for proc
 essing high-quality panoramics.\n\n\nDongSik Yoon, Jongeun Kim, Seonggeun 
 Song, Yejin Lee, and Gunhee Lee (HDC LABS)\n---------------------\nTingle 
 Tennis: Menstrual Experience Sensory Simulation Sport Device\n\nTingle Ten
 nis, a menstrual period sensory simulation game, leverages VR technology a
 nd haptic feedback for an immersive experience highlighting the physical a
 nd psychological challenges female athletes face during their periods.\n\n
 \nShun-Han Chang (National Tsing Hua University); Chen-Chun Wu (National T
 sing Hua University, National Chengchi University); Zi-Yun Lai, Tsung-Yen 
 Lee, and Cheng-En Ho (National Tsing Hua University); and Min-Chun Hu (Nat
 ional Tsing Hua University, Taiwan Institute of Sports Science)\n---------
 ------------\nChoreoSurf: Scalable Surface System with 8-DOF SMA Actuators
 \n\nThe ChoreoSurf system is a scalable surface system with a shape-memory
  alloy actuator that can bend in eight directions. This system can mount a
 ctuators on the surface layers of various three-dimensional shapes. Applica
 tions include a tabletop system, an interactive wall, a tentacle tower, a kineti
 c dress,...\n\n\nAkira Nakayasu (Tokyo Metropolitan University)\n---------
 ------------\nSegmentation of 3D Gaussians using Masked Gradients\n\nA nov
 el 3D segmentation algorithm for Gaussian splatting that utilizes 2D masks
  and inference-time gradient backpropagation, significantly enhancing down
 stream applications like AR, VR, 3DGS editing, asset generation, and more.
 \n\n\nJoji Joseph, Bharadwaj Amrutur, and Shalabh Bhatnagar (Indian Instit
 ute of Science)\n---------------------\nEfficient visualization of appeara
 nce space of translucent objects using differential rendering\n\nAn effici
 ent visualization method that allows users to interactively explore the su
 bsurface scattering parameter space is presented.\n\n\nRiel Suzuki (Hokkai
 do University) and Yoshinori Dobashi (Hokkaido University, Prometech CG Re
 search)\n---------------------\nTowards Accelerating Physics Informed Grap
 h Neural Network for Fluid Simulation\n\nWe introduce a pioneering Multi-G
 NN Processor Physics-Informed Graph Neural Network (PIGNN) approach which 
 reduces the training time of PIGNN to a quarter while maintaining the error ra
 te.\n\n\nYidi Wang (NVIDIA, Singapore Institute of Technology); Frank Guan
 , Malcolm Yoke Hean Low, and Daniel Wang (Singapore Institute of Technolog
 y); and Aik Beng Ng and Simon See (NVIDIA)\n---------------------\nAuditor
 y AR System to Induce Pseudo-Haptic Force Feedback for Lateral Hand Moveme
 nts Using Spatially Localized Sound Stimuli\n\nThis proposal presents a ps
 eudo-force feedback design based on spatially localized sound, tailored fo
 r the visually impaired. Sound location is adjusted to create an auditory 
 conflict between the perceived hand position in virtual space and its actu
 al position in the real world, thus inducing a forc...\n\n\nDaniel Oswaldo
  Lopez Tassara, Naoto Wakatsuki, and Keiichi Zempo (University of Tsukuba)
 \n---------------------\nSLAM-Based Illegal Parking Detection System\n\nTh
 e paper proposes a SLAM-based system for real-time illegal parking detecti
 on, improving efficiency by utilizing unmanned patrol vehicles for automat
 ed enforcement in urban areas.\n\n\nJiho Bae, Minjae Lee, Ungsik Kim, and 
 Suwon Lee (Gyeongsang National University)\n---------------------\nPianoKe
 ystroke-EMG: Piano Hand Muscle Electromyography Estimation from Easily Acc
 essible Piano Keystroke\n\nElectromyography is essential in skill acquisit
 ion despite its resource-intensive access. We focused on small hand muscle
  activities in piano performance and proposed an approach to estimate elec
 tromyography from cost-effective keystrokes.\n\n\nRuofan Liu (Tokyo Instit
 ute of Technology, Sony Computer Science Laboratories); Yichen Peng (Tokyo
  Institute of Technology); Takanori Oku (Shibaura Institute of Technology,
  NeuroPiano Institute); Erwin Wu (Huawei Japan, Tokyo Institute of Technol
 ogy); Shinichi Furuya (Sony Computer Science Laboratories); and Hideki Koi
 ke (Tokyo Institute of Technology)\n---------------------\nLatent Bias Cor
 rection in Outpainting Artworks\n\nThis paper describes research on outpai
 nting artworks. Our purpose is to eliminate unnecessary tendencies that fr
 equently occur when outpainting an artwork, and we propose a novel latent 
 correction method.\n\n\nJung-Jae Yu and Dae-Young Song (Electronics and Te
 lecommunications Research Institute (ETRI))\n---------------------\nCockta
 il-Party Communication from a Display to a Synchronized Camera\n\nWe propo
 se a Cocktail-Party Communication (CPC) system using a display and a camera. U
 tilizing Optical Camera Communication (OCC) technology, we successfully tr
 ansmitted audio data. Future challenges include distortion correction and 
 speed enhancement.\n\n\nAsuka Fukubayashi (Sony Semiconductor Solutions Co
 rporation), Mayu Ishii and Yu Nakayama (Tokyo University of Agriculture an
 d Technology), and Shun Kaizu (Sony Semiconductor Solutions Corporation)\n
 ---------------------\nA Study of 3D Character Control Methods: Keyboard, S
 peech, Hand Gesture, and Mixed Interfaces\n\nThis poster presents a pilot 
 study on optimal usability of desktop interfaces (keyboard, speech, hand g
 estures) for avatar control in MR-based military training, finding mixed i
 nterfaces provide the best usability.\n\n\nJunSeo Park, Hanseob Kim, and G
 erard Jounghyun Kim (Korea University)\n---------------------\nThermiapt: 
 Sensory Perception of Quantitative Thermodynamics Concepts in Education\n\
 nThis study introduces "Thermiapt," a multi-sensory device that enhances t
 hermodynamic learning by integrating visual and haptic experiences, signif
 icantly improving comprehension and retention through immersive interactio
 n.\n\n\nAnji Fujiwara (National Institute of Technology, Nara College; Nar
 a Institute of Science and Technology (NAIST)) and Kodai Iwasaki, Tamami W
 atanabe, and Hideaki Uchiyama (Nara Institute of Science and Technology (N
 AIST))\n---------------------\nA Method for Generating Tactile Sensations 
 from Textual Descriptions Using Generative AI\n\nThis study presents a nov
 el approach to generate tactile sensations from text using AI. It combines
  fingernail sensor data, AudioLDM processing, and ChatGPT-generated onomat
 opoeia to create diverse haptic feedback experiences.\n\n\nMomoka Nakayama
 , Risako Kawashima, Shintaro Murakami, Yuta Takeuchi, Tatsuya Mori, and Da
 i Takanashi (Dentsu Lab Tokyo)\n---------------------\nReborn of the White
  Bone Demon: Role-Playing Game Design Using Generative AI in XR\n\nThis pa
 per presents "Reborn of the White Bone Demon," an XR RPG using GenAI for r
 eal-time storyline generation, enhancing player immersion and personalizat
 ion through AI-driven NPC interactions.\n\n\nXiaozhan Liang, Yu Wang, and 
 Fengyi Yan (Beihang University); Zehong Ouyang and Yong Hu (Beihang Univer
 sity; State Key Laboratory of Virtual Reality Technology and Systems, Beih
 ang University); and Siyu Luo (Tsinghua University)\n---------------------
 \nShortest Path Speed-up Through Binary Image Downsampling\n\nWe propose a
  novel approach to achieve huge speed-ups for shortest path computations o
 n 2D binary images at the cost of slight inaccuracies through image downsa
 mpling techniques.\n\n\nChia-Chia Chen and Chi-Han Peng (National Yang Min
 g Chiao Tung University)\n---------------------\nAn immersive interface fo
 r remote collaboration with multiple telepresence robots through digi
 tal twin spaces\n\nDevelopment of a smart robot for introduction into nurs
 ing care settings. A human distributes tasks to the robot from a remote en
 vironment, and the robot operates according to the instructions.\n\n\nSawa
  Yoshioka, Shinichi Fukushige, Mizuki Kawakami, and Kohta Seki (Waseda Uni
 versity)\n---------------------\nDynamically Reconfigurable Paper\n\nOur p
 roposed dynamic paper redefines traditional static paper by transforming i
 t into an interactive medium, showcasing its potential for creating highly
  responsive interfaces and innovative applications with enhanced user inte
 ractivity.\n\n\nRyuhei Furuta, Hikari Kawaguchi, Kazuki Miyasaka, and Mika
  Sai (University of Electro-Communications) and Toshiki Sato (Japan Advanc
 ed Institute of Science and Technology (JAIST))\n---------------------\nLa
 ndscape Cinemagraph Synthesis with Sketch Guidance\n\nWe propose a sketch
 -guided approach for generating landscape cinemagraphs from freehand sketc
 hes. The proposed approach can generate visually pleasing landscape cinema
 graphs from the provided structural and motion sketches.\n\n\nHao Jin, Zhe
 ngyang Wang, Xusheng Du, Xiaoxuan Xie, and Haoran Xie (Japan Advanced Inst
 itute of Science and Technology (JAIST))\n---------------------\nOut-Of-Co
 re Diffraction for Terascale Holography\n\nDisplaying large-scale hologram
 s with a wide field of view (FoV) requires ultra-high-resolution data, oft
 en reaching tera-scale sizes. We propose an out-of-core diffraction method
  that utilizes multiple SSDs simultaneously to manage tera-scale holograph
 y within limited memory constraints. To enhance...\n\n\nJaehong Lee and Du
 ksu Kim (Korea University of Technology and Education (KOREATECH))\n------
 ---------------\nLocally Editing Steady Fluid Flow via Controlling Repulsi
 ve Forces from Terrain\n\nThis paper presents a novel control method for s
 teady fluid flows, such as rivers and waterfalls, simulated using SPH.\n\n
 \nYuki Kimura and Yoshinori Dobashi (Hokkaido University, Prometech CG Res
 earch) and Syuhei Sato (Hosei University, Prometech CG Research)\n--------
 -------------\nDeep Learning based Stereo Vision Camera System\n\nWe prese
 nt a compact, low-power stereo vision camera system. The system is based o
 n deep learning, operates in real-time, is occlusion-free, and is robust t
 o a variety of conditions.\n\n\nSunho Ki, Jinhong Park, Jin-Woo Kim, Rayun
  Boo, Hyerin Cho, and Hanjun Choi (VisioNexT); Sungmin Woo (Korea Universi
 ty of Technology and Education); and Jeong-Ho Woo (VisioNexT)\n-----------
 ----------\n3D-to-2D Animation Smear Effect Technique Based on Japanese Ha
 nd-Drawn Animation Style\n\nOur method achieves the smear effect in Japane
 se animation by combining skeletal animation with vertex displacement and 
 bending mechanism. It reduces choppiness by creating jagged outlines on fa
 st-moving objects.\n\n\nShu-Ting Lin (Test Research, Inc.; National Chengc
 hi University) and Ming-Te Chi (National Chengchi University)\n-----------
 ----------\nGaussians in the City: Enhancing 3D Scene Reconstruction under
  distractors with Text-guided Segmentation and Inpainting\n\nA novel metho
 d for 3D scene reconstruction from images with both static and dynamic dis
 tractors captured in busy areas. It utilizes text-guided segmentation and i
 npainting for heavily masked regions.\n\n\nNaoki Shitanda and Jun Rekimoto
  (Sony CSL Kyoto, University of Tokyo)\n---------------------\nFluid Highl
 ights: Stylized Highlights for Anime-Style Food Rendering by Fluid Simulat
 ion\n\nWe propose a stylization method for highlights in anime-style rende
 ring, mainly for food. We use fluid simulation to represent the highlights
  of the squashed shapes that are unique to anime.\n\n\nAtsuki Haruyama and
  Yuki Morimoto (Kyushu University)\n---------------------\nPnRInfo : Inter
 active Tactical Information Visualization for Pick and Roll Event\n\nPnRIn
 fo detects and visualizes pick-and-roll plays in 3D, enhancing basketball 
 team performance through in-depth tactical analysis and interactive discus
 sion.\n\n\nLi-Huan Shen and Joyce Sun (International Bilingual School at H
 sinchu Science Park) and Jan-Yue Lin, Yi-Hsuan Chiu, Ssu-Hsuan Wu, Tai-Che
 n Tsai, Shun-Han Chang, Hung-Kuo Chu, and Min-Chun Hu (National Tsing Hua 
 University)\n---------------------\nTransparent 360-Degree Display for Hig
 h-Resolution Naked-Eye Stereoscopic Aerial Images\n\nThis study proposes a
  thin directional display that presents high-resolution stereoscopic image
 s floating in mid-air, viewable from all directions with a curved transpar
 ent reflector.\n\n\nMari Shiina and Naoki Hashimoto (University of Electro
 -Communications)\n---------------------\nV-Wire: A Single-Wire System for 
 Simplified Hardware Prototyping and Enhanced Fault Detection in Education\
 n\nWe introduce V-Wire, which provides communication together with power fo
 r small sensor/display modules on one loop wire circuit, like light bulbs c
 onnected in series.\n\n\nHideaki Nii (Keio University Graduate Sch
 ool of Media Design), Kazutoshi Kashimoto (Ristmik llc), and Shozaburo Shi
 mada (VIVIWARE Japan Inc.)\n---------------------\nAnime line art coloriza
 tion by region matching using region shape\n\nWe propose a colorization met
 hod for anime line art using reference images. The method is designed wit
 h copyright in mind, and we aim to introduce it to anime production sites.\n
 \n\nDaisuke Nanya and Kouki Yonezawa (Meijo University)\n-----------------
 ----\nStyle Transfer with Gesture Style Generator\n\nWe propose a new styl
 e transfer method with Gesture Style generator. It transfers style to the 
 output motion in the conventional style transfer manner while also incorpo
 rating generated Gesture Style.\n\n\nDaYeon Lee and Seungkyu Lee (Kyunghee
  University)\n---------------------\nEmpathy Engine: Using Game Design and
  Real-time Technology to Cultivate Social Connection\n\nThis VR game simul
 ates the experiences of takeaway riders, using real-time data to create sc
 enarios that foster empathy between consumers and riders, highlighting the
  challenges faced by delivery workers.\n\n\nYuanlinxi Li (Shenzhen Interna
 tional School of Design, Harbin Institute of Technology); Mengyao Guo (She
 nzhen International School of Design, Harbin Institute of Technology; Univ
 ersity of Macau); and Ze Gao (Hong Kong University of Science and Technolo
 gy)\n---------------------\nControlling Cross-Content Motion Style Transfe
 r via Statistical Style Difference\n\nThis study demonstrates style transf
 er improvement by a straightforward method to adjust style information obt
 ained by ΔStyle, which effectively replaces the original style of motion con
 tent with another target style.\n\n\nUsfita Kiftiyani and Seungkyu Lee (Kyun
 g Hee University)\n---------------------\nNot Just a Gimmick: A Preliminar
 y Study on Designing Interactive Media Art to Empower Embedded Culture’s P
 ractitioner\n\nThis paper focuses on cultural practitioners, explores new 
 design approach for Interactive Media Art (IMA), and the potential of IMA 
 to enhance traditional art creation and sustainability.\n\n\nYihao He (Sch
 ool of Arts and Media, Tongji University)\n---------------------\nAlive Yi
 : Interactive Preservation of Yi Minority Embroidery Patterns through Digi
 tal Innovation\n\nAlive Yi is an interactive project that uses TouchDesign
 er and Leap Motion, to preserve and revitalize the traditional embroidery 
 patterns of the Yi minority (cultural heritage) in China.\n\n\nZhiwei Wang
  and Yuzhe Xia (Southwest Minzu University); Kexin Nie (University of Sydn
 ey); and Mengyao Guo (Shenzhen International School of Design, Harbin Inst
 itute of Technology; University of Macau)\n---------------------\nIndividu
 al Diffusion Auralize Display Using an Array of Audio Source Position Trac
 king Ultrasonic Speakers\n\nWe developed a prototype system called the "In
 dividual Diffusion Auralize Display," which independently generates each i
 nstrument’s sound using multiple parametric array loudspeakers (PAL) and m
 onitor speakers. The system adjusts the reflection points of the sounds ba
 sed on the players’ ...\n\n\nHyuma Auchi, Akito Fukuda, Yuta Yamauchi, Hom
 ura Kawamura, and Keiichi Zempo (University of Tsukuba)\n-----------------
 ----\nGeneralizing Human Motion Style Transfer Method Based on Metadata-in
 dependent Learning\n\nThis study aims to extend the applicability of motio
 n style transfer methods to be robust for diverse and complex motions akin
  to those found in real-world data.\n\n\nYuki Era, Ren Togo, Keisuke Maeda
 , Takahiro Ogawa, and Miki Haseyama (Hokkaido University)\n---------------
 ------\nSignal2Hand: Sensor Modality Translation from Body-Worn Sensor Sig
 nals to Hand-Depth Images\n\nSignal2Hand is a hand reconstruction method t
 hat directly reconstructs hand-depth images from body-worn sensor signals.
 \n\n\nYuki Kubo (NTT Corporation) and Buntarou Shizuki (University of Tsuk
 uba)\n---------------------\nMedia Bus: XR-Based Immersive Cultural Herita
 ge Tourism\n\nThis study introduces the Media Bus prototype for digital st
 orytelling in Seoul using XR, HMDs, and TOLEDs. It integrates VPS/GPS, AR,
  and TOLED displays, showing potential for enhancing urban tourism.\n\n\nJ
 ieon Du and Heewon Lee (Art Center Nabi), Jeongmin Lee (Deep.Fine), and Ge
 won Kim (Seoul Institute of the Arts)\n---------------------\nNeural Clust
 ering for Prefractured Mesh Generation in Real-time Object Destruction\n\n
 The prefracture method is a practical implementation for real-time object dest
 ruction that is hardly achievable within performance constraints, but can 
 produce unrealistic results due to its heuristic nature. We approach the c
 lustering of prefractured mesh generation as an unordered segmentation on 
 poin...\n\n\nSeunghwan Kim, Sunha Park, and Seungkyu Lee (Kyung Hee Univer
 sity)\n---------------------\nGradient Traversal: Accelerating Real-Time R
 endering of Unstructured Volumetric Data\n\nNovel volume rendering algorit
 hm for real-time rendering of unstructured datasets. Two-pass approach wit
 h gradient estimation and gradient traversal, leveraging modern GPGPU capa
 bilities for interactive exploration of complex, dense volumetric data.\n\
 n\nMehmet Oguz Derin (Morgenrot, Inc.) and Takahiro Harada (Advanced Micro
  Devices, Inc.; Morgenrot, Inc.)\n---------------------\n3D Reconstruction
  of a Soft Object Surface and Contact Areas in Hand-Object Interactions\n\
 nIn Hand-Object Interactions (HOIs), contact information between the hand a
 nd the object is crucial. We present a preliminary attempt to reconstruct t
 he surface of a soft o
 bject and identify the contact area on that surface.\n\n\nKohei Miura (Osa
 ka University; Kyoto Research, Sony Computer Science Laboratories, Inc.) an
 d Daisuke Iwai and Kosuke Sato (Osaka University)\n---------------------\n
 Phantom Audition: Using the Visualization of Electromyography and Vocal Me
 trics as Tools in Singing Training\n\nOur approach aims to use EMG and voc
 al metrics to enhance vocal training with multi-modality feedback, compari
 ng a professional singer and students to analyze muscle control and qualit
 y.\n\n\nKanyu Chen (Keio University Graduate School of Media Design, Insti
 tute of Science Tokyo); Emiko Kamiyama (Keio University Graduate School of
  Media Design); Ruiteng Li (Waseda University); Yichen Peng and Daichi Sai
 to (Institute of Science Tokyo); Erwin Wu (Institute of Science Tokyo, Hua
 wei); Hideki Koike (Institute of Science Tokyo); and Akira Kato (Keio Univ
 ersity Graduate School of Media Design)\n---------------------\nLi Bai the
  Youth: An LLM-Powered Virtual Agent for Children’s Chinese Poetry Educati
 on\n\nLi Bai the Youth is an interactive installation featuring a virtual 
 agent powered by LLMs, offering real-time poetic dialogue and enhancing ch
 ildren's engagement with cultural heritage through immersive learning.\n\n
 \nYurun Chen, Xin Lyu, Tianzhao Li, and Zihan Gao (Communication Universit
 y of China)\n---------------------\nMaterial and Colored Illumination Sepa
 ration from Single Real Image via Semi-Supervised Domain Adaptation\n\nWe 
 propose a training strategy for intrinsic decomposition networks that brid
 ges the domain gap between synthetic and real images, enabling even simple
  CNN to achieve excellent material and illumination separation.\n\n\nHao S
 ha, Tongtai Cao, and Yue Liu (Engineering Research Center of Mixed Reality
  and Advanced Display, School of Optics and Photonics, Beijing Institute o
 f Technology)\n---------------------\nXR Avatar Prototype for Art Performa
 nce Supporting the Inclusion of Neurodiverse Artists\n\nOur prototype uses
  volumetric video and AR to create an interactive Noh performance, enablin
 g neurodiverse artists and a Noh singer to transcend cross-cultural and cr
 oss-ability barriers for inclusive art making.\n\n\nShigenori Mochizuki (C
 ollege of Image Arts and Sciences, Ritsumeikan University); Jonathan Duckw
 orth and Ross Eldridge (School of Design, College of Design and Social Con
 text, RMIT University); and James Hullick (Jolt Sonic and Visual Arts Inc)
 \n---------------------\nEchoes of Antiquity: An Interactive Installation 
 for Guqin Culture Heritage Using Mid-Air Interaction and Generative AI\n\
 n"Echoes of Antiquity" is an interactive installation that utilizes Leap Mo
 tion for gesture recognition and generative AI for image processing to i
 llustrate the symbolic elements of Guqin culture.\n\n\nYuyao Heng, Yingman
  Chen, and Zihan Gao (Communication University of China)\n----------------
 -----\nReal-time Holographic Media System Utilizing HBM-based Holography P
 rocessor\n\nThis paper introduces a real-time holographic media system tha
 t converts 2D or RGBD videos into 3D holograms. The core of this system in
 cludes a Linux host that extracts depth information from 2D images and tra
 nsmits it via packets, and a holography processor leveraging high-bandwidt
 h memory (HBM) t...\n\n\nWonok Kwon, Sanghoon Cheon, Kihong Choi, and Keeh
 oon Hong (Electronics and Telecommunications Research Institute (ETRI))\n--
 -------------------\nSelf-attention Handwriting Generative Model\n\n
 This study introduces a GAN-based model, zi2zi self-attention, which incor
 porates residual blocks and Self-Attention Layers in the encoder and decod
 er. These enhancements capture handwriting font details, mimicking the wri
 ter’s style.\n\n\nYu-Chiao Wang, Tung-Ju Hsieh, and Pei-Ying Chiang (Natio
 nal Taipei University of Technology)\n---------------------\nPerceptually 
 Uniform Hue Adjustment: Hue Distortion Cage\n\nA method for making percept
 ually linear hue adjustments leveraging the OKLab color space to shift col
 ors in the L,a,b model rather than in L,c,h, as is common in other softwar
 e.\n\n\nDeinyon Lachlan Davies and Chris Cook (Canva)\n-------------------
 --\nFinger-Pointing Interface for Human Gesture Recognition Based on Real-
 Time Geometric Comprehension\n\nThis study introduces an interface using st
 ereo cameras to recognize finger-pointing gestures and estimate 3D coordin
 ates, enhancing Human-Computer Interaction and intuitive user-robot commun
 ication. Future improvements target higher accuracy and reliability.\n\n\n
 Minjae Lee, Jiho Bae, Sang-Min Choi, and Suwon Lee (Gyeongsang National Un
 iversity)\n---------------------\nMultidirectional Superimposed Projection
  for Delay-free Shadow Suppression on 3D Objects\n\nIntroducing our innova
 tive multidirectional superimposed projection system designed to eliminate
  shadows on 3D objects without any delay. This breakthrough ensures seamle
 ss user experiences, even with dynamic occlusions.\n\n\nTakahiro Okamoto, 
 Daisuke Iwai, and Kosuke Sato (Osaka University)\n---------------------\nN
 atureBlendVR: A Hybrid Space Experience for Enhancing Emotional Regulation
  and Cognitive Performance\n\nNatureBlendVR is an interactive experience de
 signed to enhance emotional regulation and cognitive function by merging X
 R technology with bio-responsive physical elements.\n\n\nKinga Skiers, Pen
 g Danyang, and Giulia Barbareschi (Keio University Graduate School of Medi
 a Design); Pai Yun Suen (Empathic Computing Lab, The University of Aucklan
 d; Keio University Graduate School of Media Design); and Kouta Minamizawa 
 (Keio University Graduate School of Media Design)\n---------------------\n
 Incremental Gaussian Splatting: Gradual 3D Reconstruction from a Monocular
  Camera Following Physical World Changes\n\nIncremental Gaussian Splatting
  enables real-time 3D reconstruction in dynamic environments using a monoc
 ular camera. I-GS outperforms conventional methods, providing accurate rec
 onstructions resilient to moving objects, significantly enhancing remote p
 hysical collaboration.\n\n\nKeigo Minamida (University of Tokyo) and Jun R
 ekimoto (University of Tokyo, Sony CSL Kyoto)\n---------------------\nAffe
 ctive Wings: Exploring Affectionate Behaviors in Close-Proximity Interacti
 ons with Soft Floating Robots\n\nThis study presents “Affective Wings,” a 
 concept involving a soft floating robot designed to enable proximal intera
 ctions and physical contact with humans to support emotional connection.\n
 \n\nMingyang Xu and Yulan Ju (Keio University Graduate School of Media Des
 ign); Yunkai Qi (Beihang University); Xiaru Meng (Keio University Graduate
  School of Media Design); Qing Zhang (University of Tokyo); and Matthias H
 oppe, Kouta Minamizawa, Giulia Barbareschi, and Kai Kunze (Keio University
  Graduate School of Media Design)\n---------------------\n'Colorblind Game
 ' Can Enhance Awareness of Color Blindness\n\nThis study explores whether a
  digital game can boost color blindness awareness through a user study with
  'color blind' variations of Puyo Puyo. The results suggest positive effec
 ts on awareness.\n\n\nTaiju Kimura (Kochi University of Technology) and Hi
 roki Nishino (University of Bedfordshire)\n---------------------\nEngaging
  Racing Fans through Offline E-racing Spectator Experience in AR\n\nA new s
 pectator experience for engaging race fans on non-racing days. Spectators 
 can watch an e-racer's virtual game car race against the pre-recorded car 
 data from an actual race.\n\n\nHsueh Han Wu (Rakuten Mobile, Inc.); Kelvin
  Cheng (Rakuten Institute of Technology, Rakuten Group, Inc.; Rakuten Mobi
 le, Inc.); and Jorge Luis Chávez Herrera and Koji Nishina (Rakuten Mobile,
  Inc.)\n---------------------\nSemantics-guided 3D Indoor Scene Reconstruc
 tion from a Single RGB Image with Implicit Representation\n\nWe enhance si
 ngle-view 3D scene reconstruction by integrating semantic segmentation wit
 h implicit functions, using a semantic-guided image encoder and categorica
 l attention module, achieving improved feature extraction and reconstructi
 on quality.\n\n\nYi-Ju Pan, Pei-Chun Tsai, and Kuan-Wen Chen (National Yan
 g Ming Chiao Tung University)\n---------------------\nStrainer GAN: Filter
 ing out Impurity Samples in GAN Training\n\nStrainer GAN: A method refinin
 g impure datasets to enhance GAN training. Uses automatic filtering to imp
 rove image quality and stability across various architectures. Effective f
 or real-world applications with impure data.\n\n\nJiho Shin and Seungkyu L
 ee (Kyunghee University)\n---------------------\nControlling Diversity in 
 Single-shot Motion Synthesis\n\nWe propose a VAE-GAN model for the task of
  controllable and diverse motion synthesis from a single motion sample as 
 an alternative to the data-dependent modality-to-motion methods.\n\n\nElen
 i Tselepi (University of Thessaly, Moverse) and Spyridon Thermos, Georgios
  Albanis, and Anargyros Chatzitofis (Moverse)\n---------------------\nDisk
 Play: Dynamic Projection Mapping on Rotating Platforms for Extended Hologr
 aphic Display\n\nDiskPlay is a holographic display that uses dynamic proj
 ection onto rotating disks. This system provides special visual expression
 s, such as stereoscopic images, and interactions through manual disk repla
 cement and rotation.\n\n\nHidetaka Katsuyama, Shio Miyafuji, and Hideki Ko
 ike (Institute of Science Tokyo)\n---------------------\nTracery Designer:
  A Metaball-Based Interactive Design Tool for Gothic Ornaments\n\nThis stu
 dy proposes a design support system for interactively designing Gothic orn
 aments. This system is capable of not only designing Gothic ornaments, but
  also generating shape-shifting animations of Gothic patterns.\n\n\nJoe Ta
 kayama (Musashino Art University)\n---------------------\nSensory Cravings
 : A Mixed Reality Installation Enhancing Psychological Experiences through
  Multisensory Interactions\n\nSensory Cravings utilizes mixed reality to c
 reate multisensory experiences simulating the emotional effects of consuma
 bles like coffee, alcohol, and desserts, aiming to alleviate stress and en
 hance well-being.\n\n\nShuyi Li, Yifan Ding, and Zihan Gao (Communication 
 University of China)\n---------------------\nAn Exploratory Study on Fabri
 cation of Unobtrusive Edible Tags\n\nThis paper explores alternative fabri
 cation techniques for embedding unobtrusive tags inside foods. We present 
 two techniques that do not require food 3D printing, including molding and
  stamping.\n\n\nYamato Miyatake and Parinya Punpongsanon (Saitama Universi
 ty)\n---------------------\nGentlePoles: Designing Wooden Pole Actuators
  for Guiding People\n\nGentlePoles are wooden pole-like actuators that gen
 tly rotate to guide people without relying on text signs or staff. By arra
 nging poles with individually controllable rotation direction and speed, t
 he system gently and subtly directs people. For example, changes in rotati
 on can convey messages suc...\n\n\nMasaya Shimizu, Berend te Linde, Takato
 shi Yoshida, Arata Horie, Nobuhisa Hanamitsu, and Kouta Minamizawa (Keio U
 niversity Graduate School of Media Design)\n---------------------\n[INDRA]
  Interactive Deep-dreaming Robotic Artist: Perceived artistic agency when 
 collaborating with Embodied AI\n\n"INDRA," a painting robot, merges human 
 creativity with AI, exploring the evolving role of human agency in co-crea
 tion through exchanging painting turns between a user and AI artist on a s
 hared canvas. The system allows us to find a balance between meaningful hu
 man control for perceived artistic age...\n\n\nMarina Nakagawa and Sohei W
 akisaka (Keio University Graduate School of Media Design)\n---------------
 ------\nPerceiving 3D from a 2D Mid-air Image\n\nThis study examines depth
  perception in mid-air images. Participants compared images to real object
 s. Results show reduced depth perception in mid-air images.\n\n\nSaki Komi
 nato, Miyu Fukuoka, and Naoya Koizumi (University of Electro-Communication
 s)\n---------------------\nBridging Reality and the Virtual Environment: P
 erceptual Consistency and Visual Adaptation\n\nWe investigated perceptual 
 consistency in MR headsets, focusing on brightness and color. Through psyc
 hophysics experiments and model development, we aimed to achieve perceptua
 l consistency and establish color rendering targets.\n\n\nJun Miao, Alex S
 hin, Jeanne Vu, Takanori Miki, Guodong Rong, and Joshua Davis (Meta); Zilo
 ng Li (Rochester Institute of Technology); and Wenbin Wang, Jinglun Gao, a
 nd Jiangtao Kuang (Meta)\n---------------------\nA Novel Projection Screen
  using the Crystalline Film of a Frozen Soap Bubble\n\nThis study proposes
  a novel screen that focuses on the freezing phenomenon of soap films. Alt
 hough soap films are thin and transparent, in low-temperature environments
 , beautiful ice crystals form on the surface, reducing transparency and en
 abling image projection onto the film. This study explores ...\n\n\nShinic
 hiro Terasawa, Oki Hasegawa, and Toshiki Sato (Japan Advanced Institute of
  Science and Technology (JAIST))\n---------------------\nAn Augmented Real
 ity Experience for Climate Justice: Using Spatial Animation to Enhance Per
 ceived Togetherness\n\nThis project focuses on designing an AR/MR applicat
 ion for climate justice communication. It examines ways to use spatial gra
 phics/animation/sound for multi-user simulations with strategies regarding
  economic and environmental conflicts.\n\n\nChing-Hua Chuan, Wan-Hsiu Tsai
 , and Xueer Xia (University of Miami)\n---------------------\nDisparity Ma
 p based Synthetic IR Pattern Augmentation for Active Stereo Matching\n\nWe
  propose an efficient IR pattern augmentation method using disparity maps 
 without occlusion test and verify its performance by applying it to deep l
 earning based stereo matching methods.\n\n\nRayun Boo, Jinhong Park, Jinwo
 o Kim, Sunho Ki, and Jeong-Ho Woo (VisioNexT)\n---------------------\nMixe
 d Reality Solutions for Tremor Disorders: Ergonomic Hand Motion and AR Reh
 abilitation\n\nIntroducing a mixed reality device for tremor disorders, co
 mbining ergonomic hand assistance with AR rehabilitation. It offers adapti
 ve support, personalized exercises, and telehealth, enhancing motor contro
 l and user autonomy.\n\n\nXinjun Li (Cornell University) and Zhenhong Lei 
 (Rhode Island School of Design)\n---------------------\nNew Fashion: Perso
 nalized 3D Design with a Single Sketch Input\n\nIn this work, we democrati
 ze 3D garment design using freehand sketches and a carefully designed thre
 e-stage conditional diffusion network with random sampling augmented point
  cloud pre-training for high-quality 3D creation.\n\n\nTianrun Chen (Zheji
 ang University); Xinyu Chen, Chaotao Ding, and Ling Bai (Huzhou University
 ); Shangzhan Zhang (Zhejiang University); Lanyun Zhu (Singapore University
  of Technology and Design (SUTD)); Ying Zang and Wenjun Hu (Huzhou Univers
 ity); and Zejian Li and Lingyun Sun (Zhejiang University)\n---------------
 ------\nFitting Spherical Gaussians to Dynamic HDRI Sequences\n\nWe presen
 t a technique for fitting high dynamic range illumination (HDRI) sequences
  using anisotropic spherical Gaussians (ASGs) while preserving temporal co
 nsistency in the compressed HDRI maps.\n\n\nPascal Clausen, Li Ma, Mingmin
 g He, Ahmet Levent Taşel, Oliver Pilarski, and Paul Debevec (Netflix Eyeli
 ne Studios)\n---------------------\n3D Texture Representation in Projectio
 n Mapping onto a Surface with Micro-Vibration\n\nIn projection mapping, we
  propose a method of adding depth vibration to the projection target and
  projecting images at high speed, synchronized with the vibration, to represe
 nt three-dimensional results.\n\n\nHayase Nishi, Daisuke Iwai, and Kosuke 
 Sato (Osaka University)\n---------------------\nHigh Spatial Resolution Pr
 ojection Mapping for Visually Consistent Reproduction of Physical Surfaces
 \n\nIncreasing pixel density by optically reducing the projection area ena
 bles the presentation of a virtual object whose appearance perceptually ma
 tches that of a real object with fine texture.\n\n\nIkuho Tani, Daisuke Iw
 ai, and Kosuke Sato (Osaka University)\n---------------------\nShadows Bei
 ng Vacuumed Away: An MR Experience of Shadow Loss of Body with Spine-Chill
 ing and Body-Trembling, and Shadow Loss of Thing\n\nIn this MR experience
 , as your shadow is sucked into a vacuum cleaner, it simultaneously trigge
 rs spine-chilling sensations and body vibrations. The shadows of objects a
 re also drawn in.\n\n\nRyu Nakagawa, Kenta Hidaka, Shimpei Biwata, Sho Kat
 o, and Taiki Shigeno (Nagoya City University)\n---------------------\nDesi
 gning LLM Response Layouts for XR Workspaces in Vehicles\n\nThis study inv
 estigates the design of response layouts for large language models in XR e
 nvironments for vehicle settings.\n\n\nDaun Kim and Jin-Woo Jeong (Seoul N
 ational University of Science and Technology)\n---------------------\nA La
 va Well of Reflexivity: Exploring Speculative Ambient Media\n\nThis work e
 xplores ambient media that speculatively integrates reflexive elements to 
 subtly prompt reflection on environmental issues. It challenges perception
 s of human impact, aiming to foster dialogue and awareness.\n\n\nTing Han 
 Daniel Chen (Play Design Lab; Department of Art&Design, Yuan-Ze University
 )\n---------------------\nReal-Time Transfer Function Editor for Direct Vo
 lume Rendering in Mixed Reality\n\nIntroducing an innovative MR system for
  Direct Volume Rendering, enabling intuitive real-time transfer function c
 ustomization with interactive node editing, color, and opacity controls fo
 r enhanced volumetric data visualization.\n\n\nJunseo Choi, Hyeonji Kim, H
 aill An, and Younhyun Jung (Gachon University)\n---------------------\nHyb
 rid Physical Model and Status Data-Driven Dynamic Control for Digital Ligh
 t Processing 3D Printing\n\nThis paper proposes a dynamic control scheme f
 or DLP 3D printing that combines physical models with status data. By deve
 loping physical models, data capture, and analysis methods, we dynamically
  adjust the printing protocol to improve efficiency.\n\n\nLidong Zhao and 
 Xueyun Zhang (Beijing University of Technology), Lin Lu (Shandong Universi
 ty), and Lifang Wu (Beijing University of Technology)\n-------------------
 --\nHidEye: Proposal of HMD Interaction Method by Hiding One Eye\n\nWe pro
 pose HidEye, an interaction method that enables users to easily switch be
 tween virtual and real space content by superimposing VR content with a pa
 ss-through function triggered by covering one eye.\n\n\nRyunosuke Ise and 
 Koji Tsukada (Future University Hakodate)\n---------------------\nARAP-Bas
 ed Shape Editing to Manipulate the Center of Mass\n\nWe propose a method t
 o deform the shape such that the center of mass will match a given target 
 position. Users can manipulate its position at interactive speed.\n\n\nShu
 nsuke Hirata (University of Tokyo); Yuta Noma (University of Tokyo, Univer
 sity of Toronto); Koya Narumi (Keio University, University of Tokyo); and 
 Yoshihiro Kawahara (University of Tokyo)\n---------------------\nA Multimo
 dal LLM-based Assistant for User-Centric Interactive Machine Learning\n\nW
 e introduce a multimodal LLM-based system that aids non-expert users in ma
 chine learning development by translating vague user needs into concrete t
 ask formulations through interactive chat, ensuring comprehensive training
  data.\n\n\nWataru Kawabe and Yusuke Sugano (University of Tokyo)\n-------
 --------------\nAnimated Pictorial Maps\n\nWe propose a new form of map cr
 eation that makes animated cartography more approachable for non-experts. 
 We focus on improving the blending and map style transfer quality.\n\n\nDo
 ng-Yi Wu, Li-Kuan Ou, and HuiGuang Huang (National Cheng Kung University);
  Yu Cao (Hong Kong Polytechnic University, National Cheng Kung University)
 ; Xin-Wei Lin (National Cheng Kung University); Thi-Ngoc-Hanh Le (School o
 f Computer Science and Engineering, International University, Ho Chi Minh 
 City, Vietnam; Vietnam National University, Ho Chi Minh City, Vietnam); an
 d Sheng-Yi Yao and Tong-Yee Lee (National Cheng Kung University)\n--------
 -------------\nDesign of Wall Art Utilizing Dynamic Color Changes through 
 Photoelasticity\n\nThis research presents a dynamic wall art piece using p
 hotoelasticity, where color changes occur as polarized light passes throug
 h a stretched, transparent, flexible material. Unlike pixel-based displays
  such as LCDs and LED matrices, photoelasticity creates gentle, organic co
 lor transitions with su...\n\n\nRyota Nakayama, Soshi Takeda, Gakuto Sekin
 e, and Yuichiro Katsumoto (Tokyo Denki University)\n---------------------\
 nAnalyzing and Visualizing the Correlation between Ecosystems and Environm
 ental Sustainability : Focusing on Search API Data\n\nThis study aims to a
 nalyze public awareness of environmental sustainability, particularly focu
 sing on the ecological importance of bees and their impact on human food p
 roduction. Real-time search query data were collected and analyzed to iden
 tify trends related to bees and environmental keywords. Ar...\n\n\nJungIn 
 Lee (Kyungpook National University)\n---------------------\nIT3: Immersive
  Table Tennis Training Based on 3D Reconstruction of Broadcast Video\n\nUs
 ing 3D reconstruction and physical simulation, we introduce a VR system fo
 r table tennis training that allows users to refine their receiving skills
  and experience immersive game replays.\n\n\nPei-Hsin Huang, Shang-Ching L
 iu, Li-Yang Huang, Chuan-Meng Chiu, Jo-Chien Wang, Pu Ching, and Hung-Kuo 
 Chu (National Tsing Hua University) and Min-Chun Hu (National Tsing Hua Un
 iversity, Taiwan Institute of Sports Science)\n---------------------\nSoun
 d Signatures for Geometrical Shapes\n\nWe propose automatic generation of 
 sound signatures—pseudowords which can be associated with certain features
  of geometric shapes such as roundness, spikiness, etc.\n\n\nHanqin Wang a
 nd Alexei Sourin (Nanyang Technological University (NTU))\n---------------
 ------\nMambaPainter: Neural Stroke-Based Rendering in a Single Step\n\nMa
 mbaPainter predicts a sequence of parameterized brush strokes for stroke-b
 ased rendering. We achieved over 100 brush strokes in a single inference s
 tep based on the selective SSM layers, enabling the efficient translation 
 of source images to the oil painting style.\n\n\nTomoya Sawada and Marie K
 atsurai (Doshisha University)\n---------------------\nMultimodal Learning 
 for Autoencoders\n\nWe propose a Multimodal Autoencoder architecture and tra
 ining scheme in which images are reconstructed using both image and text in
 puts, rather than images alone.\n\n\nWajahat Ali Khan and Se
 ungkyu Lee (Kyung Hee University)\n---------------------\nDevelopment of T
 iny Wireless Position Tracker Enabling Real-Time Intuitive 3D Modeling\n\n
 This work implemented a wirelessly powered and communicable position track
 er in a 10 mm cubic volume, achieving a 4.75 mm localization error, 10 cm 
 communication range, and an adaptive power receiver providing appropriate 
 voltage for the tracker regardless of the distance between the wireless tr
 ansm...\n\n\nYuki Maegawa, Masanori Hashimoto, and Ryo Shirai (Kyoto Unive
 rsity)\n---------------------\nEmpowering CG Production: Cost-Effective Tec
 hniques for Voluminous Fur Rendering with Unreal Engine\n\nThis paper pres
 ents a comprehensive approach to rendering voluminous, realistic fur in Un
 real Engine. We introduce strategies such as groom node-splitting, mesh co
 nversion and density increase to optimize GPU memory usage and enhance ren
 dering capacity.\n\n\nNing Xia, Xiaofei Yin, and Xuecong Feng (Children's 
 Playground Entertainment Inc.)\n---------------------\nBoundary Conditione
 d Floor Layout Generation with Diffusion Model\n\nUsing self-attention mec
 hanisms, we generate automated floor plans that align with exterior wall b
 oundaries. Our approach improves accuracy and produces more diverse and pre
 cise vector floor plans than existing GAN methods.\n\n\nYusuke Takeuchi (Universi
 ty of Tokyo; Tetraz, Inc) and Qi An and Atsushi Yamashita (University of T
 okyo)\n---------------------\nAutomotive Holographic Head-Up Display\n\nAd
 vanced automotive holographic HUD system: offering 3D holograms, extended
  virtual image distance (>10m), large field of view (>15 degrees), and wid
 e monocular eye-box (>2cm) for comfortable viewing experience.\n\n\nJinsu 
 Lee, Keehoon Hong, and Minsik Park (Electronics and Telecommunications Res
 earch Institute (ETRI))\n---------------------\nVisualization Methods for 
 Manual Wheelchair Training: Impact on Communication Between Coaches and Us
 ers\n\nThis study evaluates a visualization method to enhance communicatio
 n and skill improvement between wheelchair users and coaches, using video,
  pose estimation, and sensor data during a 40-m sprint.\n\n\nXu Han (Tokyo
  Metropolitan University), Asuka Mano (Research Institute of National Reha
 bilitation Center for Persons with Disabilities), Saki Sakaguchi and Mina 
 Shibasaki (Tokyo Metropolitan University), Tsuyoshi Nakayama (Research Ins
 titute of National Rehabilitation Center for Persons with Disabilities), Y
 uji Higashi (Japanese Association of Occupational Therapists), and Kumiko 
 Kushiyama (Tokyo Metropolitan University)\n---------------------\nEfficien
 t Space Variant Gaussian Blur Approximation\n\nWe propose a novel approxim
 ation method to render spatially varying Gaussian blurs in real time. Thi
 s has been applied in a web app to allow for creative image blurring.\n\n\
 nOliver Richards and Chris Cook (Canva)\n---------------------\nFast and R
 obust 3D Gaussian Splatting for Virtual Reality\n\nThe method provides fas
 t and artifact-free rendering of 3D Gaussian Splatting scenes with user-gr
 ade VR hardware. We combine prior art and our own contributions to address
  popping, overly large Gaussians, and performance issues in VR settings to
  provide a smooth user experience, as confirmed by a sm...\n\n\nXuechang T
 u (Peking University, Carnegie Mellon University) and Bernhard Kerbl and F
 ernando de la Torre (Carnegie Mellon University)\n---------------------\nC
 urtain UI: Augmenting Curtains for Tangible Interactions\n\nTransform ever
 yday curtains into interactive interfaces using capacitive sensing. Curtai
 n UI enables touch-sensitive gestures to control smart home appliances, of
 fering practical and intuitive embodied interactions.\n\n\nPranshu Anand (
 IIIT Bangalore, Creative Interfaces Lab - IIIT Delhi) and Vishal Bharti an
 d Anmol Srivastava (Creative Interfaces Lab - IIIT Delhi)\n---------------
 ------\nGenerating Font Variations Using Latent Space Trajectory\n\nVariab
 le fonts can freely adjust the parameters of font properties and have been
  rapidly spreading recently.\n\nOur work will significantly contribute to 
 the further development of variable fonts.\n\n\nSotaro Kanazawa, I-Chao Sh
 en, Yuki Tatsukawa, and Takeo Igarashi (University of Tokyo)\n------------
 ---------\nDualAvatar: Robust Gaussian Avatar with Dual Representation\n\n
 We propose DualAvatar, a robust Gaussian Splatting avatar reconstruction m
 ethod that leverages a learnable mesh avatar to achieve more reliable reco
 nstruction and rendering of unseen poses from monocular video.\n\n\nJinson
 g Zhang (Tianjin University, University of Tokyo); I-Chao Shen and Jotaro 
 Sakamiya (University of Tokyo); Yu-Kun Lai (Cardiff University); Takeo Iga
 rashi (University of Tokyo); and Kun Li (Tianjin University)\n------------
 ---------\nA Simple Heat Method for Computing Geodesic Paths on General Ma
 nifold Representations\n\nWe propose a novel algorithm for computing geode
 sic paths on general manifolds given only the ability to perform closest p
 oint queries and 1D heat flow.\n\n\nNathan King (University of Waterloo), 
 Steven Ruuth (Simon Fraser University), and Christopher Batty (University 
 of Waterloo)\n---------------------\n3D Human Pose Estimation Using Ultra-
 low Resolution Thermal Images\n\nCan we estimate 3D human pose from 8x8 th
 ermal images? Our framework enhances privacy using adversarial learning, e
 nsuring robustness against temperature and subject variations while minimi
 zing personal information exposure.\n\n\nTatsuki Arai (Keio University); M
 ariko Isogawa (Keio University, JST PRESTO); Kuniharu Sakurada (Keio Unive
 rsity, University of Tokyo); and Maki Sugimoto (Keio University)\n--------
 -------------\nCo-play with Double Self: Exploring Bodily-Self Through He
 autoscopy-Based XR Hide and Seek Game\n\nIn the hide-and-seek game, what k
 ind of experience or perception will be created if a single player plays b
 oth the role of the hider and the seeker simultaneously?\n\n\nKezhou Yang 
 and Sohei Wakisaka (Keio Media Design)\n---------------------\nDesign for 
 Hypnotic Line Art Animation from a Still Image\n\nStreamline art is a digi
 tal art form that uses single-color lines to create 3D-like images. However
 , manually creating line art animation is time-consuming. We present ideas t
 o create streamline animation.\n\n\nXin-Wei Lin, Zhi-Yang Goh, HuiGuang Hu
 ang, and Dong-Yi Wu (National Cheng-Kung University); Thi-Ngoc-Hanh Le (Sc
 hool of Computer Science and Engineering, International University, Ho Chi
  Minh City, Vietnam; Vietnam National University, Ho Chi Minh City, Vietna
 m); and Tong-Yee Lee (National Cheng-Kung University)\n-------------------
 --\nSpecTrack: Learned Multi-Rotation Tracking via Speckle Imaging\n\nSpec
 Track uses a laser speckle-based lensless camera to obtain multi-axis abso
 lute rotations, doubling the accuracy of current rotation estimation techn
 iques and enabling dynamic rotation tracking with minimal hardware require
 ments.\n\n\nZiyang Chen (University College London (UCL)), Mustafa Doğan (
 Adobe Research), Josef Spjut (NVIDIA), and Kaan Akşit (University College 
 London (UCL))\n---------------------\nMMM: Mid-air image Moving in and out
  of the Mirror with backward glance in the mirror\n\nWe propose an optical
 system that displays mid-air images moving between the inside and outside
  of a mirror, presenting a worldview of the mirror as an adjacent virtu
 al world.\n\n\nYasunori Akashi, Changyo Han, and Takeshi Naemura (Universi
 ty of Tokyo)\n---------------------\nJaku-in: A Cultural Skills Training S
 ystem for Recording and Reproducing Three-dimensional Body, Eye, and Hand 
 Movements\n\nWe propose "Jaku-in", which displays the expert’s hand moveme
 nts and gazes in a three-dimensional reconstructed space using a point clo
 ud to convey the expert's intention to novices.\n\n\nSotaro YOKOI (SONY CSL
  Kyoto, University of Tokyo); Kaishi Amitani and Natsuki Hamanishi (SONY C
 SL Kyoto); and Jun Rekimoto (SONY CSL Kyoto, University of Tokyo)\n-------
 --------------\nVibrotactile Invisible Presence: Conveying Remote Presence
  through Moving Vibrotactile Footstep Cues on a Haptic Floor\n\nWe propose
  a concept to convey presence by transmitting footsteps and vibrotactile cue
 s from one space to another. Our prototype uses floor-mounted microphones
  to capture footsteps and underfloor transducers to create moving vibratio
 ns elsewhere. Preliminary experiments showed that users can partiall...\n\
 n\nTakahiro Kusabuka, Yuichi Maki, Kakagu Komazaki, Masafumi Suzuki, Hiros
 hi Chigira, and Takayoshi Mochizuki (NTT Corporation)\n-------------------
 --\n3D Scene Reconstruction of Point Cloud Data: A Lightweight Procedural 
 Approach\n\nWe propose a pipeline for the automatic reconstruction of buil
 dings and facades in the form of procedural grammar descriptions from raw 
 point cloud data of scenes.\n\n\nVivica Wirth, Max Mühlhäuser, and Alejand
 ro Sanchez Guinea (Technical University of Darmstadt)\n-------------------
 --\nTime Light: An Interface for Comparing National Treasure Murals Across
  Time\n\nTime Light is an interface for comparing damaged 2D artworks with
  their reconstructions. This study details its application using Takamatsu
 zuka Kofun murals as an example.\n\n\nWenze Song and Takefumi Hayashi (Kan
 sai University Graduate School of Informatics)\n\nRegistration Category: E
 nhanced Access, Exhibit & Experience Access, Experience Hall Exhibitor, Fu
 ll Access, Full Access Supporter, Trade Exhibitor
END:VEVENT
END:VCALENDAR
