BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Tokyo
X-LIC-LOCATION:Asia/Tokyo
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:JST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20250110T023304Z
LOCATION:Lobby Gallery (1) & (2)\, G Block\, Level B1
DTSTART;TZID=Asia/Tokyo:20241204T130000
DTEND;TZID=Asia/Tokyo:20241204T140000
UID:siggraphasia_SIGGRAPH Asia 2024_sess199@linklings.com
SUMMARY:Posters Presentation
DESCRIPTION:Poster\n\nAuthors of posters explain their findings, discuss t
 heir work, receive feedback, and network with all attendees.\n\nA Sentient
  Space Using Light Sensing with Particle Life\n\nA light-based sensing sys
 tem enabling dynamic spatial interaction through particle grid projections
 . The system captures environmental changes, allowing real-time light adju
 stments, enhancing interactivity and responsiveness within the space.\n\n\
 nPan-Pan Shiung and June-Hao Hou (Graduate Institute of Architecture, Nati
 onal Yang Ming Chiao Tung University)\n---------------------\nHeterogeneou
 s Architecture for Asynchronous Seamless Image Stitching\n\nThis work prop
 oses a seamless image stitching method on a heterogeneous CPU-GPU system t
 hat achieves 60fps at 4K resolution without ghosting in real-time embedded
  environments.\n\n\nHyerin Cho, Jin-Woo Kim, Jinhong Park, and Jeong-Ho Wo
 o (VisioNexT)\n---------------------\nFlying Your Imagination: Integrating
 AI in VR for Kite Heritage\n\n"Flying Your Imagination" is a project tha
 t integrates AI in VR for Kite Heritage; it investigates the innovative in
 tegration of VR, AI technology, and embodied interaction design.\n\n\nKexi
 n Nie (University of Sydney) and Mengyao Guo (Shenzhen International Schoo
 l of Design, Harbin Institute of Technology; University of Macau)\n-------
 --------------\nA Relighting Method for Single Terrain Image based on Two-
 stage Albedo Estimation Model\n\nThis paper proposes a method for relighti
 ng a single terrain image to match user-specified times of day or weather 
 conditions by estimating albedo and depth using deep learning.\n\n\nShun T
 atsukawa (Hosei University) and Syuhei Sato (Hosei University, Prometech C
 G Research)\n---------------------\nRethinking motion keyframe extraction
 : a greedy procedural approach using a neural control rig\n\nCurrent keyfr
 ame extraction methods are unsuitable for 3D animators. Our novel approach
 , closer to their workflows, uses a neural control-rig and an algorithm to
  optimize keyframe placement on MoCap animations.\n\n\nThéo Cheynel (Centr
 e National de la Recherche Scientifique - Laboratoire d'informatique de l'
 École Polytechnique (LIX), Kinetix); Omar El Khalifi and Baptiste Bellot-G
 urlet (Kinetix); and Damien Rohmer and Marie-Paule Cani (Centre National d
 e la Recherche Scientifique - Laboratoire d'informatique de l'École Polyte
 chnique (LIX))\n---------------------\nOverallNet: Scale-Arbitrary Lightwe
 ight SR Model for handling 360° Panoramic Images\n\nWe propose OverallNet,
  a lightweight, scale-independent SR model incorporating various technique
 s. Further, we incorporate quantization to maximize efficiency during
  user inference, making it well-suited for processing high-quality panoram
 ics.\n\n\nDongSik Yoon, Jongeun Kim, Seonggeun Song, Yejin Lee, and Gunhee
  Lee (HDC LABS)\n---------------------\nTingle Tennis: Menstrual Experienc
 e Sensory Simulation Sport Device\n\nTingle Tennis, a menstrual period sen
 sory simulation game, leverages VR technology and haptic feedback for an i
 mmersive experience highlighting the physical and psychological challenges
  female athletes face during their periods.\n\n\nShun-Han Chang (National 
 Tsing Hua University); Chen-Chun Wu (National Tsing Hua University, Nation
 al Chengchi University); Zi-Yun Lai, Tsung-Yen Lee, and Cheng-En Ho (Natio
 nal Tsing Hua University); and Min-Chun Hu (National Tsing Hua University,
  Taiwan Institute of Sports Science)\n---------------------\nChoreoSurf: S
 calable Surface System with 8-DOF SMA Actuators\n\nThe ChoreoSurf system i
 s a scalable surface system with a shape-memory alloy actuator that can be
 nd in eight directions. This system can mount actuators on the surface lay
 ers of various three-dimensional shapes. Applications include a tabletop sy
 stem, interactive wall, tentacles tower, kinetic dress,...\n\n\nAkira Naka
 yasu (Tokyo Metropolitan University)\n---------------------\nSegmentation 
 of 3D Gaussians using Masked Gradients\n\nA novel 3D segmentation algorith
 m for Gaussian splatting that utilizes 2D masks and inference-time gradien
 t backpropagation, significantly enhancing downstream applications like AR
 , VR, 3DGS editing, asset generation, and more.\n\n\nJoji Joseph, Bharadwa
 j Amrutur, and Shalabh Bhatnagar (Indian Institute of Science)\n----------
 -----------\nEfficient visualization of appearance space of translucent ob
 jects using differential rendering\n\nAn efficient visualization method th
 at allows users to interactively explore the subsurface scattering paramet
 er space is presented.\n\n\nRiel Suzuki (Hokkaido University) and Yoshinor
 i Dobashi (Hokkaido University, Prometech CG Research)\n------------------
 ---\nTowards Accelerating Physics Informed Graph Neural Network for Fluid 
 Simulation\n\nWe introduce a pioneering Multi-GNN Processor Physics-Inform
 ed Graph Neural Network (PIGNN) approach which reduced the training time of PI
 GNN to a quarter while maintaining the error rate.\n\n\nYidi Wang (NVIDIA,
  Singapore Institute of Technology); Frank Guan, Malcolm Yoke Hean Low, an
 d Daniel Wang (Singapore Institute of Technology); and Aik Beng Ng and Sim
 on See (NVIDIA)\n---------------------\nAuditory AR System to Induce Pseud
 o-Haptic Force Feedback for Lateral Hand Movements Using Spatially Localiz
 ed Sound Stimuli\n\nThis proposal presents a pseudo-force feedback design 
 based on spatially localized sound, tailored for the visually impaired. So
 und location is adjusted to create an auditory conflict between the percei
 ved hand position in virtual space and its actual position in the real wor
 ld, thus inducing a forc...\n\n\nDaniel Oswaldo Lopez Tassara, Naoto Wakat
 suki, and Keiichi Zempo (University of Tsukuba)\n---------------------\nSL
 AM-Based Illegal Parking Detection System\n\nThe paper proposes a SLAM-bas
 ed system for real-time illegal parking detection, improving efficiency by
  utilizing unmanned patrol vehicles for automated enforcement in urban are
 as.\n\n\nJiho Bae, Minjae Lee, Ungsik Kim, and Suwon Lee (Gyeongsang Natio
 nal University)\n---------------------\nPianoKeystroke-EMG: Piano Hand Mus
 cle Electromyography Estimation from Easily Accessible Piano Keystroke\n\n
 Electromyography is essential in skill acquisition despite being resource-in
 tensive to access. We focused on small hand muscle activities in piano perfor
 mance and proposed an approach to estimate electromyography from cost-effe
 ctive keystrokes.\n\n\nRuofan Liu (Tokyo Institute of Technology, Sony Com
 puter Science Laboratories); Yichen Peng (Tokyo Institute of Technology); 
 Takanori Oku (Shibaura Institute of Technology, NeuroPiano Institute); Erw
 in Wu (Huawei Japan, Tokyo Institute of Technology); Shinichi Furuya (Sony
  Computer Science Laboratories); and Hideki Koike (Tokyo Institute of Tech
 nology)\n---------------------\nLatent Bias Correction in Outpainting Artw
 orks\n\nThis paper describes research on outpainting artworks. Our purpose
  is to eliminate unnecessary tendencies that frequently occur when outpain
 ting an artwork, and we propose a novel latent correction method.\n\n\nJun
 g-Jae Yu and Dae-Young Song (Electronics and Telecommunications Research I
 nstitute (ETRI))\n---------------------\nCocktail-Party Communication from
  a Display to a Synchronized Camera\n\nWe propose a Cocktail-Party Communi
 cation (CPC) system using display and camera. Utilizing Optical Camera Com
 munication (OCC) technology, we successfully transmitted audio data. Futur
 e challenges include distortion correction and speed enhancement.\n\n\nAsu
 ka Fukubayashi (Sony Semiconductor Solutions Corporation), Mayu Ishii and 
 Yu Nakayama (Tokyo University of Agriculture and Technology), and Shun Kai
 zu (Sony Semiconductor Solutions Corporation)\n---------------------\nA St
 udy of 3D Character Control Methods: Keyboard, Speech, Hand Gesture, and Mi
 xed Interfaces\n\nThis poster presents a pilot study on optimal usability 
 of desktop interfaces (keyboard, speech, hand gestures) for avatar control
  in MR-based military training, finding mixed interfaces provide the best 
 usability.\n\n\nJunSeo Park, Hanseob Kim, and Gerard Jounghyun Kim (Korea 
 University)\n---------------------\nThermiapt: Sensory Perception of Quant
 itative Thermodynamics Concepts in Education\n\nThis study introduces "The
 rmiapt," a multi-sensory device that enhances thermodynamic learning by in
 tegrating visual and haptic experiences, significantly improving comprehen
 sion and retention through immersive interaction.\n\n\nAnji Fujiwara (Nati
 onal Institute of Technology, Nara College; Nara Institute of Science and 
 Technology (NAIST)) and Kodai Iwasaki, Tamami Watanabe, and Hideaki Uchiya
 ma (Nara Institute of Science and Technology (NAIST))\n-------------------
 --\nA Method for Generating Tactile Sensations from Textual Descriptions U
 sing Generative AI\n\nThis study presents a novel approach to generate tac
 tile sensations from text using AI. It combines fingernail sensor data, Au
 dioLDM processing, and ChatGPT-generated onomatopoeia to create diverse ha
 ptic feedback experiences.\n\n\nMomoka Nakayama, Risako Kawashima, Shintar
 o Murakami, Yuta Takeuchi, Tatsuya Mori, and Dai Takanashi (Dentsu Lab Tok
 yo)\n---------------------\nReborn of the White Bone Demon: Role-Playing G
 ame Design Using Generative AI in XR\n\nThis paper presents "Reborn of the
  White Bone Demon," an XR RPG using GenAI for real-time storyline generati
 on, enhancing player immersion and personalization through AI-driven NPC i
 nteractions.\n\n\nXiaozhan Liang, Yu Wang, and Fengyi Yan (Beihang Univers
 ity); Zehong Ouyang and Yong Hu (Beihang University; State Key Laboratory 
 of Virtual Reality Technology and Systems, Beihang University); and Siyu L
 uo (Tsinghua University)\n---------------------\nShortest Path Speed-up Th
 rough Binary Image Downsampling\n\nWe propose a novel approach to achieve 
 huge speed-ups for shortest path computations on 2D binary images at the c
 ost of slight inaccuracies through image downsampling techniques.\n\n\nChi
 a-Chia Chen and Chi-Han Peng (National Yang Ming Chiao Tung University)\n-
 --------------------\nAn immersive interface for remote collaboration with
  multiple telepresence robots through digital twin spaces\n\nDevelopm
 ent of a smart robot for introduction into nursing care settings. A human 
 distributes tasks to the robot from a remote environment, and the robot op
 erates according to the instructions.\n\n\nSawa Yoshioka, Shinichi Fukushi
 ge, Mizuki Kawakami, and Kohta Seki (Waseda University)\n-----------------
 ----\nDynamically Reconfigurable Paper\n\nOur proposed dynamic paper redef
 ines traditional static paper by transforming it into an interactive mediu
 m, showcasing its potential for creating highly responsive interfaces and 
 innovative applications with enhanced user interactivity.\n\n\nRyuhei Furu
 ta, Hikari Kawaguchi, Kazuki Miyasaka, and Mika Sai (University of Electro
 -Communications) and Toshiki Sato (Japan Advanced Institute of Science and
  Technology (JAIST))\n---------------------\nLandscape Cinemagraph Synthes
 is with Sketch Guidance\n\nWe proposed a sketch-guided approach for genera
 ting landscape cinemagraphs from freehand sketches. The proposed approach 
 can generate visually pleasing landscape cinemagraphs from the provided st
 ructural and motion sketches.\n\n\nHao Jin, Zhengyang Wang, Xusheng Du, Xi
 aoxuan Xie, and Haoran Xie (Japan Advanced Institute of Science and Techno
 logy (JAIST))\n---------------------\nLocally Editing Steady Fluid Flow vi
 a Controlling Repulsive Forces from Terrain\n\nThis paper presents a novel
  control method for steady fluid flows, such as rivers and waterfalls, sim
 ulated using SPH.\n\n\nYuki Kimura and Yoshinori Dobashi (Hokkaido Univers
 ity, Prometech CG Research) and Syuhei Sato (Hosei University, Prometech C
 G Research)\n---------------------\nOut-Of-Core Diffraction for Terascale 
 Holography\n\nDisplaying large-scale holograms with a wide field of view (
 FoV) requires ultra-high-resolution data, often reaching tera-scale sizes.
  We propose an out-of-core diffraction method that utilizes multiple SSDs 
 simultaneously to manage tera-scale holography within limited memory const
 raints. To enhance...\n\n\nJaehong Lee and Duksu Kim (Korea University of 
 Technology and Education (KOREATECH))\n---------------------\nDeep Learnin
 g based Stereo Vision Camera System\n\nWe present a compact, low-power ste
 reo vision camera system. The system is based on deep learning, operates i
 n real-time, is occlusion-free, and is robust to a variety of conditions.\
 n\n\nSunho Ki, Jinhong Park, Jin-Woo Kim, Rayun Boo, Hyerin Cho, and Hanju
 n Choi (VisioNexT); Sungmin Woo (Korea University of Technology and Educat
 ion); and Jeong-Ho Woo (VisioNexT)\n---------------------\n3D-to-2D Animat
 ion Smear Effect Technique Based on Japanese Hand-Drawn Animation Style\n\
 nOur method achieves the smear effect in Japanese animation by combining s
 keletal animation with vertex displacement and bending mechanism. It reduc
 es choppiness by creating jagged outlines on fast-moving objects.\n\n\nShu
 -Ting Lin (Test Research, Inc.; National Chengchi University) and Ming-Te 
 Chi (National Chengchi University)\n---------------------\nGaussians in th
 e City: Enhancing 3D Scene Reconstruction under distractors with Text-guid
 ed Segmentation and Inpainting\n\nA novel method for 3D scene reconstructi
 on from images with both static and dynamic distractors captured in busy a
 reas. It utilizes text-guided segmentation and inpainting for heavily maske
 d regions.\n\n\nNaoki Shitanda and Jun Rekimoto (Sony CSL Kyoto, Universit
 y of Tokyo)\n---------------------\nFluid Highlights: Stylized Highlights 
 for Anime-Style Food Rendering by Fluid Simulation\n\nWe propose a styliza
 tion method for highlights in anime-style rendering, mainly for food. We u
 se fluid simulation to represent the highlights of the squashed shapes tha
 t are unique to anime.\n\n\nAtsuki Haruyama and Yuki Morimoto (Kyushu Univ
 ersity)\n---------------------\nPnRInfo: Interactive Tactical Information
  Visualization for Pick and Roll Event\n\nPnRInfo detects and visualizes p
 ick-and-roll plays in 3D, enhancing basketball team performance through in
 -depth tactical analysis and interactive discussion.\n\n\nLi-Huan Shen and
  Joyce Sun (International Bilingual School at Hsinchu Science Park) and Jan
 -Yue Lin, Yi-Hsuan Chiu, Ssu-Hsuan Wu, Tai-Chen Tsai, Shun-Han Chang, Hung
 -Kuo Chu, and Min-Chun Hu (National Tsing Hua University)\n--------------
 -------\nTransparent 360-Degree Display for High-Resolution Naked-Eye Ster
 eoscopic Aerial Images\n\nThis study proposes a thin directional display t
 hat presents high-resolution stereoscopic images floating in mid-air, view
 able from all directions with a curved transparent reflector.\n\n\nMari Sh
 iina and Naoki Hashimoto (University of Electro-Communications)\n---------
 ------------\nV-Wire: A Single-Wire System for Simplified Hardware Prototy
 ping and Enhanced Fault Detection in Education\n\nWe introduce V-Wire, whic
 h provides both communication and power for small sensor/display modules o
 n a single loop wire circuit, like a series connection of light bulbs.
 \n\n\nHideaki Nii (Keio University Graduate School of Media Design), Kazut
 oshi Kashimoto (Ristmik llc), and Shozaburo Shimada (VIVIWARE Japan Inc.)\
 n---------------------\nAnime line art colorization by region matching usi
 ng region shape\n\nWe propose a colorization method for anime line art usin
 g reference images. The method is designed with copyright in mind, and we
  aim to int
 roduce this method to anime production sites.\n\n\nDaisuke Nanya and Kouki
  Yonezawa (Meijo University)\n---------------------\nEmpathy Engine: Using
  Game Design and Real-time Technology to Cultivate Social Connection\n\nTh
 is VR game simulates the experiences of takeaway riders, using real-time d
 ata to create scenarios that foster empathy between consumers and riders, 
 highlighting the challenges faced by delivery workers.\n\n\nYuanlinxi Li (
 Shenzhen International School of Design, Harbin Institute of Technology); 
 Mengyao Guo (Shenzhen International School of Design, Harbin Institute of 
 Technology; University of Macau); and Ze Gao (Hong Kong University of Scie
 nce and Technology)\n---------------------\nStyle Transfer with Gesture St
 yle Generator\n\nWe propose a new style transfer method with Gesture Style
  generator. It transfers style to the output motion in the conventional st
 yle transfer manner while also incorporating generated Gesture Style.\n\n\
 nDaYeon Lee and Seungkyu Lee (Kyunghee University)\n---------------------\
 nControlling Cross-Content Motion Style Transfer via Statistical Style Dif
 ference\n\nThis study demonstrates style transfer improvement by a straigh
 tforward method to adjust the style information obtained via ΔStyle, which
  effectively replaces the original style of a motion with another target styl
 e.\n\n\nUsfita Kiftiyani and Seungkyu Lee (Kyung Hee University)\n--------
 -------------\nNot Just a Gimmick: A Preliminary Study on Designing Intera
 ctive Media Art to Empower Embedded Culture’s Practitioner\n\nThis paper f
 ocuses on cultural practitioners, explores a new design approach for Interac
 tive Media Art (IMA), and the potential of IMA to enhance traditional art 
 creation and sustainability.\n\n\nYihao He (School of Arts and Media, Tong
 ji University)\n---------------------\nGeneralizing Human Motion Style Tra
 nsfer Method Based on Metadata-independent Learning\n\nThis study aims to 
 extend the applicability of motion style transfer methods to be robust for
  diverse and complex motions akin to those found in real-world data.\n\n\n
 Yuki Era, Ren Togo, Keisuke Maeda, Takahiro Ogawa, and Miki Haseyama (Hokk
 aido University)\n---------------------\nIndividual Diffusion Auralize Dis
 play Using an Array of Audio Source Position Tracking Ultrasonic Speakers\
 n\nWe developed a prototype system called the "Individual Diffusion Aurali
 ze Display," which independently generates each instrument’s sound using m
 ultiple parametric array loudspeakers (PAL) and monitor speakers. The syst
 em adjusts the reflection points of the sounds based on the players’ ...\n
 \n\nHyuma Auchi, Akito Fukuda, Yuta Yamauchi, Homura Kawamura, and Keiichi
  Zempo (University of Tsukuba)\n---------------------\nAlive Yi: Interacti
 ve Preservation of Yi Minority Embroidery Patterns through Digital Innovat
 ion\n\nAlive Yi is an interactive project that uses TouchDesigner and Leap
 Motion to preserve and revitalize the traditional embroidery patterns of t
 he Yi minority (cultural heritage) in China.\n\n\nZhiwei Wang and Yuzhe Xi
 a (Southwest Minzu University); Kexin Nie (University of Sydney); and Me
 ngyao Guo (Shenzhen International School of Design, Harbin Institute of Te
 chnology; University of Macau)\n---------------------\nSignal2Hand: Sensor
  Modality Translation from Body-Worn Sensor Signals to Hand-Depth Images\n
 \nSignal2Hand is a hand reconstruction method that directly reconstructs h
 and-depth images from body-worn sensor signals.\n\n\nYuki Kubo (NTT Corpor
 ation) and Buntarou Shizuki (University of Tsukuba)\n---------------------
 \nMedia Bus: XR-Based Immersive Cultural Heritage Tourism\n\nThis study in
 troduces the Media Bus prototype for digital storytelling in Seoul using X
 R, HMDs, and TOLEDs. It integrates VPS/GPS, AR, and TOLED displays, showin
 g potential for enhancing urban tourism.\n\n\nJieon Du and Heewon Lee (Art
  Center Nabi), Jeongmin Lee (Deep.Fine), and Gewon Kim (Seoul Institute of
  the Arts)\n---------------------\nNeural Clustering for Prefractured Mesh
  Generation in Real-time Object Destruction\n\nThe prefracture method is a
  practical approach to real-time object destruction, which is otherwise har
 dly achievable within performance constraints, but it can produce unrealist
 ic results
  due to its heuristic nature. We approach the clustering of prefractured m
 esh generation as an unordered segmentation on poin...\n\n\nSeunghwan Kim,
  Sunha Park, and Seungkyu Lee (Kyung Hee University)\n--------------------
 -\nGradient Traversal: Accelerating Real-Time Rendering of Unstructured Vo
 lumetric Data\n\nNovel volume rendering algorithm for real-time rendering 
 of unstructured datasets. Two-pass approach with gradient estimation and g
 radient traversal, leveraging modern GPGPU capabilities for interactive ex
 ploration of complex, dense volumetric data.\n\n\nMehmet Oguz Derin (Morge
 nrot, Inc.) and Takahiro Harada (Advanced Micro Devices, Inc.; Morgenrot, 
 Inc.)\n---------------------\n3D Reconstruction of a Soft Object Surface a
 nd Contact Areas in Hand-Object Interactions\n\nIn Hand-Object Interaction
 s (HOIs), contact information between the hand and the object is crucial.
  We present a preliminary
  attempt to reconstruct the surface of a soft object and identify the cont
 act area on that surface.\n\n\nKohei Miura (Osaka University; Kyoto Resear
 ch, Sony Computer Science Laboratories, Inc.) and Daisuke Iwai and Kosuke S
 ato (Osaka University)\n---------------------\nPhantom Audition: Using the
  Visualization of Electromyography and Vocal Metrics as Tools in Singing T
 raining\n\nOur approach aims to use EMG and vocal metrics to enhance vocal
  training with multi-modality feedback, comparing a professional singer an
 d students to analyze muscle control and quality.\n\n\nKanyu Chen (Keio Un
 iversity Graduate School of Media Design, Institute of Science Tokyo); Emi
 ko Kamiyama (Keio University Graduate School of Media Design); Ruiteng Li 
 (Waseda University); Yichen Peng and Daichi Saito (Institute of Science To
 kyo); Erwin Wu (Institute of Science Tokyo, Huawei); Hideki Koike (Institu
 te of Science Tokyo); and Akira Kato (Keio University Graduate School of M
 edia Design)\n---------------------\nLi Bai the Youth: An LLM-Powered Virt
 ual Agent for Children’s Chinese Poetry Education\n\nLi Bai the Youth is a
 n interactive installation featuring a virtual agent powered by LLMs, offe
 ring real-time poetic dialogue and enhancing children's engagement with cu
 ltural heritage through immersive learning.\n\n\nYurun Chen, Xin Lyu, Tian
 zhao Li, and Zihan Gao (Communication University of China)\n--------------
 -------\nXR Avatar Prototype for Art Performance Supporting the Inclusion 
 of Neurodiverse Artists\n\nOur prototype uses volumetric video and AR to c
 reate an interactive Noh performance, enabling neurodiverse artists and a 
 Noh singer to transcend cross-cultural and cross-ability barriers for incl
 usive art making.\n\n\nShigenori Mochizuki (College of Image Arts and Scie
 nces, Ritsumeikan University); Jonathan Duckworth and Ross Eldridge (Schoo
 l of Design, College of Design and Social Context, RMIT University); and J
 ames Hullick (Jolt Sonic and Visual Arts Inc)\n---------------------\nMate
 rial and Colored Illumination Separation from Single Real Image via Semi-S
 upervised Domain Adaptation\n\nWe propose a training strategy for intrinsi
 c decomposition networks that bridges the domain gap between synthetic and
  real images, enabling even a simple CNN to achieve excellent material and i
 llumination separation.\n\n\nHao Sha, Tongtai Cao, and Yue Liu (Engineerin
 g Research Center of Mixed Reality and Advanced Display, School of Optics 
 and Photonics, Beijing Institute of Technology)\n---------------------\nEc
 hoes of Antiquity: An Interactive Installation for Guqin Culture Heritage
  Using Mid-Air Interaction and Generative AI\n\n"Echoes of Antiquity" is a
 n interactive installation that utilizes Leap Motion for gesture recognit
 ion and generative AI for image processing to illustrate the symbolic elem
 ents of Guqin culture.\n\n\nYuyao Heng, Yingman Chen, and Zihan Gao (Commu
 nication University of China)\n---------------------\nReal-time Holographi
 c Media System Utilizing HBM-based Holography Processor\n\nThis paper intr
 oduces a real-time holographic media system that converts 2D or RGBD video
 s into 3D holograms. The core of this system includes a Linux host that ex
 tracts depth information from 2D images and transmits it via packets, and 
 a holography processor leveraging high-bandwidth memory (HBM) t...\n\n\nWo
 nok Kwon, Sanghoon Cheon, Kihong Choi, and Keehoon Hong (Electronics and T
 elecommunications Research Institute (ETRI))\n---------------------\n
 Self-attention Handwriting Generative Model\n\nThis study introduces a GAN
 -based model, zi2zi self-attention, which incorporates residual blocks and
  Self-Attention Layers in the encoder and decoder. These enhancements capt
 ure handwriting font details, mimicking the writer’s style.\n\n\nYu-Chiao 
 Wang, Tung-Ju Hsieh, and Pei-Ying Chiang (National Taipei University of Te
 chnology)\n---------------------\nPerceptually Uniform Hue Adjustment: Hue
  Distortion Cage\n\nA method for making perceptually linear hue adjustment
 s leveraging the OKLab color space to shift colors in the L,a,b model rath
 er than in L,c,h, as is common in other software.\n\n\nDeinyon Lachlan Dav
 ies and Chris Cook (Canva)\n---------------------\nFinger-Pointing Interfa
 ce for Human Gesture Recognition Based on Real-Time Geometric Comprehensio
 n\n\nThis study introduces an interface using stereo cameras to recognize f
 inger-pointing gestures and estimate 3D coordinates, enhancing Human-Compu
 ter Interaction and intuitive user-robot communication. Future improvement
 s target higher accuracy and reliability.\n\n\nMinjae Lee, Jiho Bae, Sang-
 Min Choi, and Suwon Lee (Gyeongsang National University)\n----------------
 -----\nMultidirectional Superimposed Projection for Delay-free Shadow Supp
 ression on 3D Objects\n\nIntroducing our innovative multidirectional super
 imposed projection system designed to eliminate shadows on 3D objects with
 out any delay. This breakthrough ensures seamless user experiences, even w
 ith dynamic occlusions.\n\n\nTakahiro Okamoto, Daisuke Iwai, and Kosuke Sa
 to (Osaka University)\n---------------------\nIncremental Gaussian Splatti
 ng: Gradual 3D Reconstruction from a Monocular Camera Following Physical W
 orld Changes\n\nIncremental Gaussian Splatting enables real-time 3D recons
 truction in dynamic environments using a monocular camera. I-GS outperform
 s conventional methods, providing accurate reconstructions resilient to mo
 ving objects, significantly enhancing remote physical collaboration.\n\n\n
 Keigo Minamida (University of Tokyo) and Jun Rekimoto (University of Tokyo
 , Sony CSL Kyoto)\n---------------------\nNatureBlendVR: A Hybrid Space Ex
 perience for Enhancing Emotional Regulation and Cognitive Performance\n\nN
 atureBlendVR is an interactive experience designed to enhance emotional reg
 ulation and cognitive function by merging XR technology with bio-responsiv
 e physical elements.\n\n\nKinga Skiers, Peng Danyang, and Giulia Barbaresc
 hi (Keio University Graduate School of Media Design); Pai Yun Suen (Empath
 ic Computing Lab, The University of Auckland; Keio University Graduate Sch
 ool of Media Design); and Kouta Minamizawa (Keio University Graduate Schoo
 l of Media Design)\n---------------------\n'Colorblind Game' Can Enhance A
 wareness of Color Blindness\n\nThis study explores whether a digital game c
 an boost color blindness awareness through a user study with 'color blind'
  variations of Puyo Puyo. The results suggest positive effects on awareness
 .\n\n\nTaiju KIMURA (Kochi University of Technology) and Hiroki Nishino (U
 niversity of Bedfordshire)\n---------------------\nAffective Wings: Explori
 ng Affectionate Behaviors in Close-Proximity Interactions with Soft Floati
 ng Robots\n\nThis study presents “Affective Wings,” a concept involving a 
 soft floating robot designed to enable proximal interactions and physical 
 contact with humans to support emotional connection.\n\n\nMingyang Xu and 
 Yulan Ju (Keio University Graduate School of Media Design); Yunkai Qi (Bei
 hang University); Xiaru Meng (Keio University Graduate School of Media Des
 ign); Qing Zhang (University of Tokyo); and Matthias Hoppe, Kouta Minamiza
 wa, Giulia Barbareschi, and Kai Kunze (Keio University Graduate School of 
 Media Design)\n---------------------\nEngaging Racing Fans through Offline
  E-racing Spectator Experience in AR\n\nA new spectator experience for eng
 aging race fans on non-racing days. Spectators can watch an e-racer's virt
 ual game car race against the pre-recorded car data from an actual race.\n
 \n\nHsueh Han Wu (Rakuten Mobile, Inc.); Kelvin Cheng (Rakuten Institute o
 f Technology, Rakuten Group, Inc.; Rakuten Mobile, Inc.); and Jorge Luis C
 hávez Herrera and Koji Nishina (Rakuten Mobile, Inc.)\n-------------------
 --\nSemantics-guided 3D Indoor Scene Reconstruction from a Single RGB Imag
 e with Implicit Representation\n\nWe enhance single-view 3D scene reconstr
 uction by integrating semantic segmentation with implicit functions, using
  a semantic-guided image encoder and categorical attention module, achievi
 ng improved feature extraction and reconstruction quality.\n\n\nYi-Ju Pan,
  Pei-Chun Tsai, and Kuan-Wen Chen (National Yang Ming Chiao Tung Universit
 y)\n---------------------\nStrainer GAN: Filtering out Impurity Samples in
  GAN Training\n\nStrainer GAN: A method refining impure datasets to enhanc
 e GAN training. Uses automatic filtering to improve image quality and stab
 ility across various architectures. Effective for real-world applications 
 with impure data.\n\n\nJiho Shin and Seungkyu Lee (Kyunghee University)\n-
 --------------------\nControlling Diversity in Single-shot Motion Synthesi
 s\n\nWe propose a VAE-GAN model for the task of controllable and diverse m
 otion synthesis from a single motion sample as an alternative to the data-
 dependent modality-to-motion methods.\n\n\nEleni Tselepi (University of Th
 essaly, Moverse) and Spyridon Thermos, Georgios Albanis, and Anargyros Cha
 tzitofis (Moverse)\n---------------------\nDiskPlay: Dynamic Projection Ma
 pping on Rotating Platforms for Extended Holographic Display\n\nDiskPlay i
 s a holographic display that uses dynamic projection onto rotating disks.
  This system provides special visual expressions, such as stereoscopic ima
 ges, and interactions through manual disk replacement and rotation.\n\n\nH
 idetaka Katsuyama, Shio Miyafuji, and Hideki Koike (Institute of Science T
 okyo)\n---------------------\nTracery Designer: A Metaball-Based Interacti
 ve Design Tool for Gothic Ornaments\n\nThis study proposes a design suppor
 t system for interactively designing Gothic ornaments. This system is capa
 ble of not only designing Gothic ornaments, but also generating shape-shif
 ting animations of Gothic patterns.\n\n\nJoe Takayama (Musashino Art Unive
 rsity)\n---------------------\nSensory Cravings: A Mixed Reality Installat
 ion Enhancing Psychological Experiences through Multisensory Interactions\
 n\nSensory Cravings utilizes mixed reality to create multisensory experien
 ces simulating the emotional effects of consumables like coffee, alcohol, 
 and desserts, aiming to alleviate stress and enhance well-being.\n\n\nShuy
 i Li, Yifan Ding, and Zihan Gao (Communication University of China)\n-----
 ----------------\nAn Exploratory Study on Fabrication of Unobtrusive Edibl
 e Tags\n\nThis paper explores alternative fabrication techniques for embed
 ding unobtrusive tags inside foods. We present two techniques that do not 
 require food 3D printing, including molding and stamping.\n\n\nYamato Miya
 take and Parinya Punpongsanon (Saitama University)\n---------------------\
 nGentlePoles: Designing Wooden Pole Actuators for Guiding People\n\nGentl
 e Poles are wooden pole-like actuators that gently rotate to guide people 
 without relying on text signs or staff. By arranging poles with individual
 ly controllable rotation direction and speed, the system gently and subtly
  directs people. For example, changes in rotation can convey messages suc.
 ..\n\n\nMasaya Shimizu, Berend te Linde, Takatoshi Yoshida, Arata Horie, N
 obuhisa Hanamitsu, and Kouta Minamizawa (Keio University Graduate School o
 f Media Design)\n---------------------\n[INDRA] Interactive Deep-dreaming 
 Robotic Artist: Perceived artistic agency when collaborating with Embodied
  AI\n\n"INDRA," a painting robot, merges human creativity with AI, explori
 ng the evolving role of human agency in co-creation through exchanging pai
 nting turns between a user and AI artist on a shared canvas. The system al
 lows us to find a balance between meaningful human control for perceived a
 rtistic age...\n\n\nMarina Nakagawa and Sohei Wakisaka (Keio University Gr
 aduate School of Media Design)\n---------------------\nPerceiving 3D from 
 a 2D Mid-air Image\n\nThis study examines depth perception in mid-air imag
 es. Participants compared images to real objects. Results show reduced dep
 th perception in mid-air images.\n\n\nSaki Kominato, Miyu Fukuoka, and Nao
 ya Koizumi (University of Electro-Communications)\n---------------------\n
 Bridging Reality and the Virtual Environment: Perceptual Consistency and V
 isual Adaptation\n\nWe investigated perceptual consistency in MR headsets,
  focusing on brightness and color. Through psychophysics experiments and m
 odel development, we aimed to achieve perceptual consistency and establish
  color rendering targets.\n\n\nJun Miao, Alex Shin, Jeanne Vu, Takanori Mi
 ki, Guodong Rong, and Joshua Davis (Meta); Zilong Li (Rochester Institute 
 of Technology); and Wenbin Wang, Jinglun Gao, and Jiangtao Kuang (Meta)\n-
 --------------------\nA Novel Projection Screen using the Crystalline Film
  of a Frozen Soap Bubble\n\nThis study proposes a novel screen that focuse
 s on the freezing phenomenon of soap films. Although soap films are thin a
 nd transparent, in low-temperature environments, beautiful ice crystals fo
 rm on the surface, reducing transparency and enabling image projection ont
 o the film. This study explores ...\n\n\nShinichiro Terasawa, Oki Hasegawa
 , and Toshiki Sato (Japan Advanced Institute of Science and Technology (JA
 IST))\n---------------------\nDisparity Map based Synthetic IR Pattern Aug
 mentation for Active Stereo Matching\n\nWe propose an efficient IR pattern
  augmentation method using disparity maps without an occlusion test and verif
 y its performance by applying it to deep-learning-based stereo matching me
 thods.\n\n\nRayun Boo, Jinhong Park, Jinwoo Kim, Sunho Ki, and Jeong-Ho Wo
 o (VisioNexT)\n---------------------\nAn Augmented Reality Experience for 
 Climate Justice: Using Spatial Animation to Enhance Perceived Togetherness
 \n\nThis project focuses on designing an AR/MR application for climate jus
 tice communication. It examines ways to use spatial graphics/animation/sou
 nd for multi-user simulations with strategies regarding economic and envir
 onmental conflicts.\n\n\nChing-Hua Chuan, Wan-Hsiu Tsai, and Xueer Xia (Un
 iversity of Miami)\n---------------------\nMixed Reality Solutions for Tre
 mor Disorders: Ergonomic Hand Motion and AR Rehabilitation\n\nIntroducing 
 a mixed reality device for tremor disorders, combining ergonomic hand assi
 stance with AR rehabilitation. It offers adaptive support, personalized ex
 ercises, and telehealth, enhancing motor control and user autonomy.\n\n\nX
 injun Li (Cornell University) and Zhenhong Lei (Rhode Island School of Des
 ign)\n---------------------\nNew Fashion: Personalized 3D Design with a Si
 ngle Sketch Input\n\nIn this work, we democratize 3D garment design using 
 freehand sketches and a carefully designed three-stage conditional diffusi
 on network with random sampling augmented point cloud pre-training for hig
 h-quality 3D creation.\n\n\nTianrun Chen (Zhejiang University); Xinyu Chen
 , Chaotao Ding, and Ling Bai (Huzhou University); Shangzhan Zhang (Zhejian
 g University); Lanyun Zhu (Singapore University of Technology and Design (
 SUTD)); Ying Zang and Wenjun Hu (Huzhou University); and Zejian Li and Lin
 gyun Sun (Zhejiang University)\n---------------------\nFitting Spherical G
 aussians to Dynamic HDRI Sequences\n\nWe present a technique for fitting h
 igh dynamic range illumination (HDRI) sequences using anisotropic spherica
 l Gaussians (ASGs) while preserving temporal consistency in the compressed
  HDRI maps.\n\n\nPascal Clausen, Li Ma, Mingming He, Ahmet Levent Taşel, O
 liver Pilarski, and Paul Debevec (Netflix Eyeline Studios)\n--------------
 -------\n3D Texture Representation in Projection Mapping onto a Surface wi
 th Micro-Vibration\n\nIn projection mapping, we propose a method of adding
  depth vibration to the projection target and projecting images at high sp
 eed, synchronized with the vibration, to represent three-dimensional result
 s.\n\n\nHayase Nishi, Daisuke Iwai, and Kosuke Sato (Osaka University)\n--
 -------------------\nHigh Spatial Resolution Projection Mapping for Visual
 ly Consistent Reproduction of Physical Surfaces\n\nIncreasing pixel densit
 y by optically reducing the projection area enables the presentation of a 
 virtual object whose appearance perceptually matches that of a real object
  with fine texture.\n\n\nIkuho Tani, Daisuke Iwai, and Kosuke Sato (Osaka 
 University)\n---------------------\nShadows Being Vacuumed Away: An MR Exp
 erience of Shadow Loss of Body with Spine-Chilling and Body-Trembling, an
 d Shadow Loss of Thing\n\nIn this MR experience, as your shadow is sucked 
 into a vacuum cleaner, it simultaneously triggers spine-chilling sensation
 s and body vibrations. The shadows of objects are also drawn in.\n\n\nRyu 
 Nakagawa, Kenta Hidaka, Shimpei Biwata, Sho Kato, and Taiki Shigeno (Nagoy
 a City University)\n---------------------\nDesigning LLM Response Layouts 
 for XR Workspaces in Vehicles\n\nThis study investigates the design of res
 ponse layouts for large language models in XR environments for vehicle set
 tings.\n\n\nDaun Kim and Jin-Woo Jeong (Seoul National University of Scien
 ce and Technology)\n---------------------\nA Lava Well of Reflexivity: Exp
 loring Speculative Ambient Media\n\nThis work explores ambient media that 
 speculatively integrates reflexive elements to subtly prompt reflection on
  environmental issues. It challenges perceptions of human impact, aiming t
 o foster dialogue and awareness.\n\n\nTing Han Daniel Chen (Play Design La
 b; Department of Art&Design, Yuan-Ze University)\n---------------------\nR
 eal-Time Transfer Function Editor for Direct Volume Rendering in Mixed Rea
 lity\n\nIntroducing an innovative MR system for Direct Volume Rendering, e
 nabling intuitive real-time transfer function customization with interacti
 ve node editing, color, and opacity controls for enhanced volumetric data 
 visualization.\n\n\nJunseo Choi, Hyeonji Kim, Haill An, and Younhyun Jung 
 (Gachon University)\n---------------------\nHybrid Physical Model and Stat
 us Data-Driven Dynamic Control for Digital Light Processing 3D Printing\n\
 nThis paper proposes a dynamic control scheme for DLP 3D printing that com
 bines physical models with status data. By developing physical models, dat
 a capture, and analysis methods, we dynamically adjust the printing protoc
 ol to improve efficiency.\n\n\nLidong Zhao and Xueyun Zhang (Beijing Unive
 rsity of Technology), Lin Lu (Shandong University), and Lifang Wu (Beijing
  University of Technology)\n---------------------\nHidEye: Proposal of HMD
  Interaction Method by Hiding One Eye\n\nWe propose HidEye, an interactio
 n method that enables users to easily switch between virtual and real spac
 e content by superimposing VR content with a pass-through function trigger
 ed by covering one eye.\n\n\nRyunosuke Ise and Koji Tsukada (Future Univer
 sity Hakodate)\n---------------------\nARAP-Based Shape Editing to Manipul
 ate the Center of Mass\n\nWe propose a method to deform the shape such tha
 t the center of mass will match a given target position. Users can manipul
 ate its position at interactive speed.\n\n\nShunsuke Hirata (University of
  Tokyo); Yuta Noma (University of Tokyo, University of Toronto); Koya Naru
 mi (Keio University, University of Tokyo); and Yoshihiro Kawahara (Univers
 ity of Tokyo)\n---------------------\nA Multimodal LLM-based Assistant for
  User-Centric Interactive Machine Learning\n\nWe introduce a multimodal LL
 M-based system that aids non-expert users in machine learning development 
 by translating vague user needs into concrete task formulations through in
 teractive chat, ensuring comprehensive training data.\n\n\nWataru Kawabe a
 nd Yusuke Sugano (University of Tokyo)\n---------------------\nAnimated Pi
 ctorial Maps\n\nWe propose a new form of map creation that makes animated 
 cartography more approachable for non-experts. We focus on improving the b
 lending and map style transfer quality.\n\n\nDong-Yi Wu, Li-Kuan Ou, and H
 uiGuang Huang (National Cheng Kung University); Yu Cao (Hong Kong Polytech
 nic University, National Cheng Kung University); Xin-Wei Lin (National Che
 ng Kung University); Thi-Ngoc-Hanh Le (School of Computer Science and Engi
 neering, International University, Ho Chi Minh City, Vietnam; Vietnam Nati
 onal University, Ho Chi Minh City, Vietnam); and Sheng-Yi Yao and Tong-Yee
  Lee (National Cheng Kung University)\n---------------------\nDesign of Wa
 ll Art Utilizing Dynamic Color Changes through Photoelasticity\n\nThis res
 earch presents a dynamic wall art piece using photoelasticity, where color
  changes occur as polarized light passes through a stretched, transparent,
  flexible material. Unlike pixel-based displays such as LCDs and LED matri
 ces, photoelasticity creates gentle, organic color transitions with su...\
 n\n\nRyota Nakayama, Soshi Takeda, Gakuto Sekine, and Yuichiro Katsumoto (
 Tokyo Denki University)\n---------------------\nAnalyzing and Visualizing 
 the Correlation between Ecosystems and Environmental Sustainability: Focu
 sing on Search API Data\n\nThis study aims to analyze public awareness of 
 environmental sustainability, particularly focusing on the ecological impo
 rtance of bees and their impact on human food production. Real-time search
  query data were collected and analyzed to identify trends related to bees
  and environmental keywords. Ar...\n\n\nJungIn Lee (Kyungpook National Uni
 versity)\n---------------------\nIT3: Immersive Table Tennis Training Base
 d on 3D Reconstruction of Broadcast Video\n\nUsing 3D reconstruction and p
 hysical simulation, we introduce a VR system for table tennis training tha
 t allows users to refine their receiving skills and experience immersive g
 ame replays.\n\n\nPei-Hsin Huang, Shang-Ching Liu, Li-Yang Huang, Chuan-Me
 ng Chiu, Jo-Chien Wang, Pu Ching, and Hung-Kuo Chu (National Tsing Hua Uni
 versity) and Min-Chun Hu (National Tsing Hua University, Taiwan Institute 
 of Sports Science)\n---------------------\nSound Signatures for Geometrica
 l Shapes\n\nWe propose automatic generation of sound signatures—pseudoword
 s which can be associated with certain features of geometric shapes such a
 s roundness, spikiness, etc.\n\n\nHanqin Wang and Alexei Sourin (Nanyang T
 echnological University (NTU))\n---------------------\nMambaPainter: Neura
 l Stroke-Based Rendering in a Single Step\n\nMambaPainter predicts a seque
 nce of parameterized brush strokes for stroke-based rendering. We achieved
  over 100 brush strokes in a single inference step based on the selective 
 SSM layers, enabling the efficient translation of source images to the oil
  painting style.\n\n\nTomoya Sawada and Marie Katsurai (Doshisha Universit
 y)\n---------------------\nDevelopment of Tiny Wireless Position Tracker E
 nabling Real-Time Intuitive 3D Modeling\n\nThis work implemented a wireles
 sly powered and communicable position tracker in a 10 mm cubic volume, ach
 ieving a 4.75 mm localization error, 10 cm communication range, and an ada
 ptive power receiver providing appropriate voltage for the tracker regardl
 ess of the distance between the wireless transm...\n\n\nYuki Maegawa, Masa
 nori Hashimoto, and Ryo Shirai (Kyoto University)\n---------------------\n
 Multimodal Learning for Autoencoders\n\nWe propose a Multimodal Autoencode
 r architecture and training scheme in which images are reconstructed using
  both image and text inputs, rather than from images alone.\n\n\nWaj
 ahat Ali Khan and Seungkyu Lee (Kyung Hee University)\n-------------------
 --\nEmpowering CG Production: Cost-Effective Techniques for Voluminous Fu
 r Rendering with Unreal Engine\n\nThis paper presents a comprehensive approa
 ch to rendering voluminous, realistic fur in Unreal Engine. We introduce s
 trategies such as groom node-splitting, mesh conversion and density increa
 se to optimize GPU memory usage and enhance rendering capacity.\n\n\nNing 
 Xia, Xiaofei Yin, and Xuecong Feng (Children's Playground Entertainment In
 c.)\n---------------------\nBoundary Conditioned Floor Layout Generation w
 ith Diffusion Model\n\nUsing self-attention mechanisms, we generate automa
 ted floor plans that align with exterior wall boundaries. Our approach imp
 roves accuracy over existing GAN methods and produces diverse, precise vect
 or floor plans.\n\n\nYusuke Takeuchi (University of Tokyo; Tetraz, Inc) a
 nd Qi An and Atsushi Yamashita (University of Tokyo)\n--------------------
 -\nAutomotive Holographic Head-Up Display\n\nAdvanced automotive holograph
 ic HUD system: offering 3D holograms, extended virtual image distance (>1
 0m), large field of view (>15 degrees), and wide monocular eye-box (>2cm) 
 for comfortable viewing experience.\n\n\nJinsu Lee, Keehoon Hong, and Mins
 ik Park (Electronics and Telecommunications Research Institute (ETRI))\n--
 -------------------\nVisualization Methods for Manual Wheelchair Training:
  Impact on Communication Between Coaches and Users\n\nThis study evaluates
  a visualization method to enhance communication and skill improvement bet
 ween wheelchair users and coaches, using video, pose estimation, and senso
 r data during a 40-m sprint.\n\n\nXu Han (Tokyo Metropolitan University), 
 Asuka Mano (Research Institute of National Rehabilitation Center for Perso
 ns with Disabilities), Saki Sakaguchi and Mina Shibasaki (Tokyo Metropolit
 an University), Tsuyoshi Nakayama (Research Institute of National Rehabili
 tation Center for Persons with Disabilities), Yuji Higashi (Japanese Assoc
 iation of Occupational Therapists), and Kumiko Kushiyama (Tokyo Metropolit
 an University)\n---------------------\nCurtain UI: Augmenting Curtains for
  Tangible Interactions\n\nTransform everyday curtains into interactive int
 erfaces using capacitive sensing. Curtain UI enables touch-sensitive gestu
 res to control smart home appliances, offering practical and intuitive emb
 odied interactions.\n\n\nPranshu Anand (IIIT Bangalore, Creative Interface
 s Lab - IIIT Delhi) and Vishal Bharti and Anmol Srivastava (Creative Inter
 faces Lab - IIIT Delhi)\n---------------------\nFast and Robust 3D Gaussia
 n Splatting for Virtual Reality\n\nThe method provides fast and artifact-f
 ree rendering of 3D Gaussian Splatting scenes with user-grade VR hardware.
  We combine prior art and our own contributions to address popping, overly
  large Gaussians, and performance issues in VR settings to provide a smoot
 h user experience, as confirmed by a sm...\n\n\nXuechang Tu (Peking Univer
 sity, Carnegie Mellon University) and Bernhard Kerbl and Fernando de la To
 rre (Carnegie Mellon University)\n---------------------\nEfficient Space V
 ariant Gaussian Blur Approximation\n\nWe propose a novel approximation met
 hod to render spatially varying Gaussian blurs in real time. This has bee
 n applied in a web app to allow for creative image blurring.\n\n\nOliver R
 ichards and Chris Cook (Canva)\n---------------------\nGenerating Font Var
 iations Using Latent Space Trajectory\n\nVariable fonts can freely adjust
  font-property parameters and have been spreading rapidly. We generate fon
 t variations by following trajectories in a latent space, contributing to
  the further development of variable fonts.\n\n\nSotaro Kanazawa, I-Chao S
 hen, Yuki Tatsukawa, and Tak
 eo Igarashi (University of Tokyo)\n---------------------\nDualAvatar: Robu
 st Gaussian Avatar with Dual Representation\n\nWe propose DualAvatar, a ro
 bust Gaussian Splatting avatar reconstruction method that leverages a lear
 nable mesh avatar to achieve more reliable reconstruction and rendering of
  unseen poses from monocular video.\n\n\nJinsong Zhang (Tianjin University
 , University of Tokyo); I-Chao Shen and Jotaro Sakamiya (University of Tok
 yo); Yu-Kun Lai (Cardiff University); Takeo Igarashi (University of Tokyo)
 ; and Kun Li (Tianjin University)\n---------------------\nA Simple Heat Me
 thod for Computing Geodesic Paths on General Manifold Representations\n\nW
 e propose a novel algorithm for computing geodesic paths on general manifo
 lds given only the ability to perform closest point queries and 1D heat fl
 ow.\n\n\nNathan King (University of Waterloo), Steven Ruuth (Simon Fraser 
 University), and Christopher Batty (University of Waterloo)\n-------------
 --------\n3D Human Pose Estimation Using Ultra-low Resolution Thermal Imag
 es\n\nCan we estimate 3D human pose from 8x8 thermal images? Our framework
  enhances privacy using adversarial learning, ensuring robustness against 
 temperature and subject variations while minimizing personal information e
 xposure.\n\n\nTatsuki Arai (Keio University); Mariko Isogawa (Keio Univers
 ity, JST PRESTO); Kuniharu Sakurada (Keio University, University of Tokyo)
 ; and Maki Sugimoto (Keio University)\n---------------------\nCo-play with
  Double Self: Exploring Bodily-Self Through Heautoscopy-Based XR Hide and
  Seek Game\n\nIn the hide-and-seek game, what kind of experience or percep
 tion will be created if a single player plays both the role of the hider a
 nd the seeker simultaneously?\n\n\nKezhou Yang and Sohei Wakisaka (Keio Me
 dia Design)\n---------------------\nDesign for Hypnotic Line Art Animation
 from a Still Image\n\nStreamline art is a digital art form that uses singl
 e-color lines to create 3D-like images. However, manually creating line ar
 t animation is time-consuming. We present ideas to create streamline animati
 on.\n\n\nXin-Wei Lin, Zhi-Yang Goh, HuiGuang Huang, and Dong-Yi Wu (Nation
 al Cheng-Kung University); Thi-Ngoc-Hanh Le (School of Computer Science an
 d Engineering, International University, Ho Chi Minh City, Vietnam; Vietna
 m National University, Ho Chi Minh City, Vietnam); and Tong-Yee Lee (Natio
 nal Cheng-Kung University)\n---------------------\nSpecTrack: Learned Mult
 i-Rotation Tracking via Speckle Imaging\n\nSpecTrack uses a laser speckle-
 based lensless camera to obtain multi-axis absolute rotations, doubling th
 e accuracy of current rotation estimation techniques and enabling dynamic 
 rotation tracking with minimal hardware requirements.\n\n\nZiyang Chen (Un
 iversity College London (UCL)), Mustafa Doğan (Adobe Research), Josef Spju
 t (NVIDIA), and Kaan Akşit (University College London (UCL))\n------------
 ---------\nMMM: Mid-air image Moving in and out of the Mirror with backwar
 d glance in the mirror\n\nWe propose an optical system that displays mid-a
 ir images moving between the inside and outside of a mirror and presents a
  worldview of the mirror as an adjacent virtual world.\n\n\nYasunori Aka
 shi, Changyo Han, and Takeshi Naemura (University of Tokyo)\n-------------
 --------\nJaku-in: A Cultural Skills Training System for Recording and Rep
 roducing Three-dimensional Body, Eye, and Hand Movements\n\nWe propose "Ja
 ku-in", which displays the expert's hand movements and gaze in a three-dim
 ensional reconstructed space using a point cloud, helping novices understa
 nd the expert's intention.\n\n\nSotaro YOKOI (SONY CSL Kyoto, University of Tokyo
 ); Kaishi Amitani and Natsuki Hamanishi (SONY CSL Kyoto); and Jun Rekimoto
  (SONY CSL Kyoto, University of Tokyo)\n---------------------\nTime Light:
  An Interface for Comparing National Treasure Murals Across Time\n\nTime L
 ight is an interface for comparing damaged 2D artworks with their reconstr
 uctions. This study details its application using Takamatsuzuka Kofun mura
 ls as an example.\n\n\nWenze Song and Takefumi Hayashi (Kansai University 
 Graduate School of Informatics)\n---------------------\n3D Scene Reconstru
 ction of Point Cloud Data: A Lightweight Procedural Approach\n\nWe propose
  a pipeline for the automatic reconstruction of buildings and facades in t
 he form of procedural grammar descriptions from raw point cloud data of sc
 enes.\n\n\nVivica Wirth, Max Mühlhäuser, and Alejandro Sanchez Guinea (Tec
 hnical University of Darmstadt)\n---------------------\nVibrotactile Invis
 ible Presence: Conveying Remote Presence through Moving Vibrotactile Foots
 tep Cues on a Haptic Floor\n\nWe propose a concept to convey presence by t
 ransmitting footsteps and vibrotactile cues from one space to another. Our
  prototype uses floor-mounted microphones to capture footsteps and underflo
 or transducers to create moving vibrations elsewhere. Preliminary experime
 nts showed that users can partiall...\n\n\nTakahiro Kusabuka, Yuichi Maki,
  Kakagu Komazaki, Masafumi Suzuki, Hiroshi Chigira, and Takayoshi Mochizuk
 i (NTT Corporation)\n\nRegistration Category: Enhanced Access, Exhibit & E
 xperience Access, Experience Hall Exhibitor, Full Access, Full Access Supp
 orter, Trade Exhibitor
END:VEVENT
END:VCALENDAR
