BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163813Z
LOCATION:Exhibition Hall 1\, Level 2 (Exhibition Centre)
DTSTART;TZID=Australia/Melbourne:20231214T100000
DTEND;TZID=Australia/Melbourne:20231214T173000
UID:siggraphasia_SIGGRAPH Asia 2023_sess200@linklings.com
SUMMARY:Posters Gallery
DESCRIPTION:The Posters program provides an interactive forum for innovati
 ve ideas that are not yet fully polished, for high-impact practical contri
 butions, for behind-the-scenes views of new commercial and artistic work, 
 and for solutions that help solve challenging problems. It is a cooperativ
 e setting where students, researchers, artists, enthusiasts, and industry 
 veterans come together to present their research, art, and ideas to the gl
 obal CG industry and encourage feedback on recently completed work or tent
 ative new approaches.\nThese ideas are put together into simple, visually 
 attractive posters showcased at SIGGRAPH Asia. Posters’ authors will also 
 be present to explain their findings, discuss their work, receive feedback
  and network with all attendees.\n\nDeep Albedo: A Machine Learning Approa
 ch to Real-Time Photo Realistic Human Skin Rendering and Editing Using Aut
 oencoders\n\nWe demonstrate an efficient technique to model skin color ch
 anges of a human face due to aging and changing emotions by variation o
 f the spatially dependent biophysical properties of skin.\n\n\nJoel Johns
 on (University of British Columbia, Huawei); Kenneth Chau (University of B
 ritish Columbia); and Wei Sen Loi, Abraham Beauferris, Swati Kanwal, and Y
 ingqian Gu (Huawei)\n---------------------\nMulti-Stage Manufacturing for 
 Preoperative Medical Models with Overhanging Components\n\nWe propose a c
 ost-effective, multi-stage hybrid manufacturing method that combines print
 ing and molding to progressively solidify intricate medical models. We hav
 e successfully produced a liver model with overhanging tumors.\n\n\nMi
 ngli Xiang and Zun Li (Beijing University of Technology), Lin Lu (Shandong
  University), and Lifang Wu (Beijing University of Technology)\n----------
 -----------\nIgnis: Eulerian Fluid Simulation and Rendering at VR Frame Ra
 tes\n\nIgnis is a GPU Eulerian fluid solver which utilises an approximate 
 shadowing technique and an adaptive dithering technique to simulate and re
 nder at VR resolutions and refresh rates.\n\n\nCharlie Shenton (RMIT Unive
 rsity, CSIRO)\n---------------------\nGaze and Graze: Illuminating Taiwane
 se Hand Puppet Character Display and Deconstructing Visual Engagement\n\nT
 aiwanese Puppetry, deeply rooted in culture, gains significance in this st
 udy through gaze interaction. Using Tobii Nano and Unity, we reinterpret t
 raditional art with eye-tracking for a more profound experience.\n\n\nYun-
 Ju Chen (National Taipei University of Business) and Tsuei-Ju Hsieh (Natio
 nal Tsing Hua University)\n---------------------\nAvatars for Good Drinkin
 g: An Exploratory Study of The Effects of Avatar’s Body Shape on Beverage 
 Perception\n\nIn a virtual environment, we studied how avatar body shape i
 mpacts beverage perception. Gradual body transitions improved body ownersh
 ip, and larger avatars enhanced purchase intention.\n\n\nYusuke Koseki, Yu
 suke Arikawa, Kizashi Nakano, and Takuji Narumi (University of Tokyo)\n---
 ------------------\nAerial Display Method Using a Flying Screen with an IR
  Marker and Long Range Dynamic Projection Mapping\n\nWe have studied a pro
 jection-based aerial display method. In this poster, we propose a new IR m
 arker for precise screen tracking and a long-range projection principle us
 ing a high-brightness projector.\n\n\nYuito Hirohashi and Hiromasa Oku (Gunm
 a University)\n---------------------\nGeometry Aware Texturing\n\nGiven a 
 mesh of the outfit and a text prompt, our method is capable of producing h
 igh-quality diffuse texture in around 6 seconds running on a single A40 GP
 U.\n\n\nEvgeniia Cheskidova, Alexander Arganaidi, Daniel-Ionut Rancea, and
  Olaf Haag (Ready Player Me)\n---------------------\nThe Effect of Wearing
  Knee Supporters on the Applicable Gain of Redirected Walking\n\nWe invest
 igated the effect of knee supporters on the applicable gain of redirected 
 walking. The results indicate that knee supporters can influence the app
 licable gain.\n\n\nGaku Fukui, Takuto Nakamura, Keigo Matsumoto, Ta
 kuji Narumi, and Hideaki Kuzuoka (University of Tokyo)\n------------------
 ---\nDigital Transformation of Ethnic Dance Heritage: A Multimodal Interac
 tive Game to Balancing Instructional and Cultural Essence\n\nWe have emplo
 yed a multimodal interactive approach to create an educational game for e
 thnic dances, thereby enhancing players' motion instruction and cultura
 l experience in the process of dance.\n\n\nMingyang Su, Yun Xie, FeiFei Wu
 , Ke Fang, XiaoMei Nie, and Xiu Li (Tsinghua University)\n------------------
 ---\n3D Lighter: Learning to Generate Emissive Textures\n\nWe generate emi
 ssive textures by learning luminous 3D models.\n\n\nYosuke Shinya, Kenichi
  Yoneji, and Akihiro Tsukada (DENSO CORPORATION) and Tatsuya Harada (The U
 niversity of Tokyo, RIKEN)\n---------------------\nSomatic Music: Enhancin
 g Musical Experiences through the Performer’s Embodiment\n\nThis study exp
 lores musicians' unique musicality using physical data, enhancing music ap
 preciation through tactile stimulation and vibration, redefining music exp
 eriences.\n\n\nAoi Uyama, Youichi Kamiyama, Sohei Wakisaka, Arata Horie, T
 atsuya Saito, and Kouta Minamizawa (Keio University Graduate School of Med
 ia Design)\n---------------------\nasmVR: VR-Based ASMR Experience with Mu
 ltimodal Triggers for Mental Well-Being\n\nasmVR enhances users' ASMR ting
 ling sensation with multi-modal triggers, immersive VR environments, and r
 emote ASMRist embodiments. Initial tests show heightened tingles, stress r
 elief, and therapeutic VR potential.\n\n\nDanyang Peng, Tanner Person, Ruo
 xin Cui, Mark Armstrong, Kouta Minamizawa, and Yun Suen Pai (Keio Universi
 ty Graduate School of Media Design)\n---------------------\nFlying Over To
 urist Attractions: A Novel Augmented Reality Tourism System Using Miniatur
 e Dioramas\n\nNovel AR tourism system that leverages miniature dioramas to
  provide users with a unique and immersive experience that creates the s
 ensation of soaring high above and exploring a tourist attraction.\n\n\nS
 uwon Lee, Sanghyeon Kim, and Seongwon Kim (Gyeongsang National University)
 ; Hyunwoo Cho (University of South Australia); and Sang-Min Choi (Gyeongsa
 ng National University)\n---------------------\nDatamoshing with Optical F
 low\n\nWe propose a method for datamoshing using optical flow. Our algori
 thm can be used to create perplexing video transitions and seamless loopin
 g videos.\n\n\nChris Careaga, Mahesh Kumar Krishna Reddy, and Yağız Aksoy 
 (Simon Fraser University)\n---------------------\nMeta Musicking: A Playgr
 ound for Exploring Alternative Realities with Others in the XR Age\n\nA re
 mote, multi-participant XR audiovisual art experience combining haptic, au
 ditory, and visual elements. Participants can interact with the hand avata
 rs of other remote participants through musical expression in their space.
 \n\n\nRyu Nakagawa, Masaya Furukawa, Ayano Yamanaka, and Maika Yamamoto (N
 agoya City University)\n---------------------\nQuantifying display lag and
  its effects during Head-Mounted Display based Virtual Reality\n\nVirtual 
 reality immersion relies heavily on scene fidelity and spatiotemporal cons
 istency during dynamic human behaviour. However, head-mounted displays hav
 e restrained computational resources to prolong user experience. This pos
 ter quantifies display lag at sub-frame-rate resolution.\n\n\nPeter Wagne
 r and Juno Kim (University of New S
 outh Wales, School of Optometry and Vision Science, Sensory Processes Rese
 arch Laboratory); Robert S. Allison (Dept. of Electrical Engineering and C
 omputer Science, York University); and Stephen Palmisano (School of Psycho
 logy, University of Wollongong)\n---------------------\nFoodMorph: Changin
 g Food Appearance Towards Less Unhealthy Food Intake\n\nThe VR system Food
 Morph allows users to immerse themselves in inedible, visually simulated f
 ood textures, reducing their interest and intake of unhealthy foods and pr
 omoting healthy eating.\n\n\nRuoxin Cui, Weijen Chen, Danyang Peng, Kouta 
 Minamizawa, and Yun Suen Pai (Keio University Graduate School of Media Des
 ign)\n---------------------\nLearning to Generate Wire Sculpture Art from 
 3D Models\n\nOur goal is to create a 3D wire sculpture that preserves the 
 volume of the original 3D shape given a user-specified template to the pr
 oposed curve generation network.\n\n\nHuiGuang Huang, Dong-Yi Wu, Thi-Ngoc
 -Hanh Le, and Po-Chih Chen (National Cheng-Kung University); Shih-Syun Lin
  (National Taiwan Ocean University); and Tong-Yee Lee (National Cheng-Kung
  University)\n---------------------\nClosest Point Exterior Calculus\n\nWe
  combine the Closest Point Method with Discrete Exterior Calculus to obtai
 n a geometry processing framework allowing implicit representation of gene
 ral calculus expressions.\n\n\nMica Li, Michael Owens, Juheng Wu, Grace Ya
 ng, and Albert Chern (University of California San Diego)\n---------------
 ------\nLandmark Guided 4D Facial Expression Generation\n\nIn this paper, 
 we propose a generative model that learns to synthesize 4D face expressio
 n with given landmarks and is robust to changes in identity
 .\n\n\nXin Lu and Zhengda Lu (University of Chinese Academy of Sciences), 
 Yiqun Wang (Chongqing University), and Jun Xiao (University of Chinese Aca
 demy of Sciences)\n---------------------\nAugmentation of Medical Preparat
 ion for Children by Using Projective and Tangible Interface\n\nThis resear
 ch aims to create interactive experiences that alleviate anxiety of pediat
 ric patients and evoke empathy within their family and medical community
 . We developed the medical preparation system through the integration of p
 rojective and tangible interfaces. Children can intuitively underst...\n\n
 \nMiki Monzen (Graduate School of Image Arts, Ritsumeikan University) and S
 higenori Mochizuki and Toshikazu Ohshima (College of Image Arts and Scien
 ces, Ritsumeikan University)\n---------------------\nExploring Embodiment 
 and Usability of Autonomous Prosthetic Limbs through Virtual Reality\n\nWe
  propose the utilization of full-body motion capture and immersive virtual
 reality to explore the sense of embodiment, usability, and user percepti
 on associated with autonomous prosthetic limbs.\n\n\nHarin Hapuarachchi (T
 oyohashi University of Technology), Yasuyuki Inoue (Toyama Prefectural Uni
 versity), and Michiteru Kitazaki (Toyohashi University of Technology)\n---
 ------------------\nRecovering Detailed Neural Implicit Surfaces from Blur
 ry Images\n\nWe propose a method to recover surface details from blurry im
 ages by transforming input features using a blur kernel and simulating mot
 ion blur through weighted averaging.\n\n\nZihui Xu and Yiqun Wang (Chongqi
 ng University) and Zhengda Lu and Jun Xiao (University of Chinese Academy 
 of Sciences)\n---------------------\nText-driven Tree Modeling on L-System
 \n\nThis paper presents a text-driven approach for tree modeling through L
 -System, adopting an optimization technique with CLIP.\n\n\nYudai Ichimura
  (Hosei University) and Syuhei Sato (Hosei University, Prometech CG Resear
 ch)\n---------------------\nAI-supported Nishijin-ori: connecting a text-t
 o-image model to traditional Nishijin-ori textile production\n\nThis paper
  presents an AI-supported Nishijin-ori. We first generated pattern images 
 using a fine-tuned text-to-image model, and then produced traditional wove
 n Japanese textiles, Nishijin-ori.\n\n\nAsahi Adachi (Sony Computer Scienc
 e Laboratories - Kyoto, Nara Institute of Science and Technology); Lana Si
 napayen (Sony Computer Science Laboratories - Kyoto, National Institute fo
 r Basic Biology); Hironori Fukuoka (Fukuoka Weaving Co., Ltd.); and Jun Re
 kimoto (Sony Computer Science Laboratories - Kyoto, The University of Toky
 o)\n---------------------\nExpression Omnibus: Expandable Facial Expressio
 n Dataset via Embedding Analysis and Synthesis\n\nWe demonstrate a method 
 to expand a dataset of facial expressions by generating realistic faces ba
 sed on our assessment of a controlled set of realistic faces and its embed
 ding space.\n\n\nJoonho Park and Da Eun Kim (Giantstep) and Joo-Haeng Lee 
 (Pebblous)\n---------------------\nRoom to Room Mapping: Seamlessly Connec
 ting Different Rooms\n\nWe propose a projection mapping technique designed
  to connect rooms in disparate locations virtually, creating a continuous,
  immersive space.\n\n\nNaoki Hashimoto and Yuki Inada (The University of E
 lectro-Communications)\n---------------------\nOwnDiffusion: A Design Pipe
 line Using Design Generative AI to preserve Sense Of Ownership\n\nOwnDiffu
 sion is a design pipeline that utilizes Generative AI to assist in the physi
 cal prototype ideation process for novice product designers and industrial
  design learners while preserving their sense of ownership. We envision th
 is method as a solution for AI-assisted design, enabling designers to ...\
 n\n\nYaokun Wu (Keio University, Keio University Graduate School of Media 
 Design) and Kouta Minamizawa and Yun Suen Pai (Keio Media Design)\n-------
 --------------\nAn Examination of Text Shaking Correction Methods for AR W
 alking\n\nOne problem with walking in AR is the reduced readability of d
 isplayed text. Head shaking causes the displayed text to shake. The scre
 en coordina
 te system (SCS) or world coordinate system (WCS) is used for displaying text
  with different effective distances. We propose methods to correct text sh
 aking by combi...\n\n\nMie Sato, Hiromu Koide, and Kei Kanari (Utsunomiya 
 University)\n---------------------\nRule-of-Thirds or Centered? A study in
  preference in photo composition\n\nWe report an experiment to test the va
 lidity of the Rule of Thirds. Our participants overwhelmingly preferred a 
 centered object to one positioned according to the Rule of Thirds.\n\n\nWe
 ng Khuan Hoh, Fang Lue Zhang, and Neil A. Dodgson (Victoria University of 
 Wellington)\n---------------------\nEfficient and Accurate Physically Base
 d Rendering of Periodic Multilayer Structures with Iridescence\n\nWe propo
 se a method for rendering iridescence caused by periodic multilayer struct
 ures by employing Huxley's approach. Our approach can compute multilayer i
 nterference efficiently and accurately.\n\n\nYoshiki Kaminaka, Toru Higaki
 , Bisser Raytchev, and Kazufumi Kaneda (Hiroshima University)\n-----------
 ----------\nVector Gradient Stroke Stylized Neural Network Painting\n\nWe 
 propose vectorization techniques with SVG gradient color paths to represen
 t non-photorealistic rendering brush-stroke raster images, reducing the o
 verall path count and vector file size and facilitating image editi
 ng.\n\n\nJia-Shuan Lin and Tung-Ju Hsieh (National Taipei University of Te
 chnology)\n---------------------\nConversation Echo: Communication in virt
 ual environments that reflects conversation contents\n\nThis research prop
 oses "Conversation Echo," a system that reflects the topics of conversatio
 n in the VR environment in real time by using AI to extract topics and gen
 erate panoramic images.\n\n\nShun Hachisu, Sohei Wakisaka, and Kouta Minam
 izawa (Keio University Graduate School of Media Design)\n-----------------
 ----\nSCOOT: Self-supervised Centric Open-set Object Tracking\n\nWe propose
  a system that encompasses a self-supervised appearance model, a fusion mo
 dule for combining textual and visual features, and an object association 
 algorithm based on reconstruction and observation.\n\n\nWei Li (Institute 
 of Automation, Chinese Academy of Sciences); Weiliang Meng (Institute o
 f Automation, Chinese Academy of Sciences); Bowen Li (Institute of Autom
 ation, Chinese Academy of Sciences); and Jiguang Zhang and Xiaopeng Zhan
 g (Institute of Automation, Chinese Academy of Sciences
 )\n---------------------\nRecognition-Independent Handwritten Text Alignme
 nt Using Lightweight Recurrent Neural Network\n\nA novel approach to impro
 ve handwriting legibility by straightening the written content. It may be 
 used for aligning text across different languages and doesn't need prior h
 andwriting recognition.\n\n\nKarina Korovai, Dmytro Zhelezniakov, and Olga
  Radyvonenko (Samsung R&D Institute Ukraine); Oleg Yakovchuk (Samsung R&D 
 Institute Ukraine, National Technical University of Ukraine "Igor Sikorsky
  Kyiv Polytechnic Institute"); and Ivan Deriuga and Nataliya Sakhnenko (Sa
 msung R&D Institute Ukraine)\n---------------------\nDeveloping a Realisti
 c VR Interface to Recreate a Full-body Immersive Fire Scene Experience\n\n
 This paper describes a research project on a VR fire training system. It c
 reates a multi-sensory experience that simulates a real-world fire scene, 
 and evaluates firefighters' and the public's satisfaction.\n\n\nUngyeon Ya
 ng and Hyungki Son (Electronics and Telecommunications Research Institute 
 (ETRI)) and Kyungsik Han (Hanyang University)\n---------------------\nCros
 sing Narrative: Exploring the Possibilities of Crossing the Virtuality and
  Reality in Interactive Narrative Experiences\n\nWe introduce “Crossing Na
 rrative”, an interactive narrative experience that seamlessly blends virtu
 ality and reality by utilizing real-world views and bystanders. We discuss
  specific methods for designing cross-reality narrative experience, focusi
 ng on three key aspects of cross-reality ...\n\n\nZixiao Liu (School of Ne
 w Media Art and Design, Beihang University) and Shuo Yan and Xukun Shen (S
 chool of New Media Art and Design, Beihang University; State Key Laborator
 y of Virtual Reality Technology and Systems, Beihang University)\n--------
 -------------\nAuditory VR Generative System for Non-Experts to Reproduce 
 Human Memories Through Natural Language Interactions\n\nProposing an autom
 atic auditory VR generative system from natural language input for VR expo
 sure therapy. It utilizes an LLM, an auditory dataset, and a spatial aud
 io generator, demonstrating utility through physician evaluations.\n\n\nY
 uta Yamau
 chi (University of Tsukuba), Keiko Ino (National Center of Neurology and P
 sychiatry), and Keiichi Zempo (University of Tsukuba)\n-------------------
 --\nA remote training platform for learning physical skills using an AI po
 wered virtual coach and a novel IoT sensing mat\n\nWe introduce a novel AI
 oT platform for remote Martial Arts training using a pressure sensing mat,
  virtual coach and Serious Game. User studies demonstrate its training eff
 ectiveness and adoption potential.\n\n\nKatia Bourahmoune (University of T
 echnology Sydney) and Karlos Ishac and Marc Carmichael (University of Tech
 nology Sydney)\n---------------------\nUsability Evaluation of VR Shopping
  System not Imitating Real Stores\n\nIn this study, we investigated the us
 ability of VR shopping systems that do not imitate real stores and created
  a user-friendly system on the basis of the results.\n\n\nIkumi Hisamatsu 
 and Yuji Sakamoto (Hokkaido University)\n---------------------\nTowards a 
 Psychophysically Plausible Simulation of Translucent Appearance\n\nUnderst
 anding visual perception of materials is critical for informing image-base
 d approaches to real-time rendering. This poster presents a new cue to tra
 nslucency that can be efficiently modeled using graphical rendering.\n\n\n
 Takehiro Nagai (Tokyo Institute of Technology, University of New South Wal
 es Sydney); Hiroaki Kiyokawa (Saitama University); Stephen Palmisano (Univ
 ersity of Wollongong); and Juno Kim (University of New South Wales Sydney)
 \n---------------------\nInteractive Relative Pose Estimation for 360° Ind
 oor Panoramas through Wall-Wall Matching Selections\n\nAn open-source pano
 ramic relative camera pose estimation method that works well for difficult
  wide-baseline problems by taking a hybrid approach that leverages neural 
 network estimations and key user inputs.\n\n\nBoSheng Chen and ChiHan Peng
  (National Yang Ming Chiao Tung University)\n---------------------\nVisual
  Signatures of Music Mood\n\nVisualization of music as static images is ra
 rely addressed. In this poster, we propose visual signatures – static imag
 es which are generated using artificial intelligence to visualise the musi
 c mood.\n\n\nHanqin Wang and Alexei Sourin (Nanyang Technological Universi
 ty)\n---------------------\nTowards Efficient Local 3D Conditioning\n\nWe 
 propose an innovative weight-encoded, locally conditioned neural implicit 
 representation, utilizing a neural network to approximate a grid of latent
  codes, while sharing the decoder across the entire category. This approac
 h significantly enhances reconstruction quality compared to global methods
  ...\n\n\nDingxi Zhang (MIT CSAIL, University of Chinese Academy of Scienc
 es) and Artem Lukoianov (MIT CSAIL)\n\nRegistration Category: Full Acces
 s, Business & Innovation Symposium Access, Exhibit & Experience Access,
  Enhan
 ced Access, Trade Exhibitor, Experience Hall Exhibitor
END:VEVENT
END:VCALENDAR
