BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070312Z
LOCATION:Exhibition Hall 1\, Level 2 (Exhibition Centre)
DTSTART;TZID=Australia/Melbourne:20231215T100000
DTEND;TZID=Australia/Melbourne:20231215T160000
UID:siggraphasia_SIGGRAPH Asia 2023_sess199@linklings.com
SUMMARY:Posters Gallery
DESCRIPTION:Poster\n\nThe Posters program provides an interactive forum for innovative ideas that are not yet fully polished, for high-impact practical contributions, for behind-the-scenes views of new commercial and artistic work, and for solutions that help solve challenging problems. It is a cooperative setting where students, researchers, artists, enthusiasts, and industry veterans come together to present their research, art, and ideas to the global CG industry and encourage feedback on recently completed work or tentative new approaches.\nThese ideas are put together into simple, visually attractive posters showcased at SIGGRAPH Asia. Poster authors will also be present to explain their findings, discuss their work, receive feedback, and network with all attendees.\n\nasmVR: VR-Based ASMR Experience with Multimodal Triggers for Mental Well-Being\n\nasmVR enhances users' ASMR tingling sensation with multi-modal triggers, immersive VR environments, and remote ASMRist embodiments. Initial tests show heightened tingles, stress relief, and therapeutic VR potential.\n\n\nDanyang Peng, Tanner Person, Ruoxin Cui, Mark Armstrong, Kouta Minamizawa, and Yun Suen Pai (Keio University Graduate School of Media Design)\n---------------------\nDatamoshing with Optical Flow\n\nWe propose a method for datamoshing using optical flow. Our algorithm can be used to create perplexing video transitions and seamless looping videos.\n\n\nChris Careaga, Mahesh Kumar Krishna Reddy, and Yağız Aksoy (Simon Fraser University)\n---------------------\nDeveloping a Realistic VR Interface to Recreate a Full-body Immersive Fire Scene Experience\n\nThis paper describes a research project on a VR fire training system. It creates a multi-sensory experience that simulates a real-world fire scene and evaluates firefighters' and the public's satisfaction.\n\n\nUngyeon Yang and Hyungki Son (Electronics and Telecommunications Research Institute (ETRI)) and Kyungsik Han (Hanyang University)\n---------------------\nExploring Embodiment and Usability of Autonomous Prosthetic Limbs through Virtual Reality\n\nWe propose the utilization of full-body motion capture and immersive virtual reality to explore the sense of embodiment, usability, and user perception associated with autonomous prosthetic limbs.\n\n\nHarin Hapuarachchi (Toyohashi University of Technology), Yasuyuki Inoue (Toyama Prefectural University), and Michiteru Kitazaki (Toyohashi University of Technology)\n---------------------\nRecognition-Independent Handwritten Text Alignment Using Lightweight Recurrent Neural Network\n\nA novel approach to improve handwriting legibility by straightening the written content.
It may be used for aligning text across different languages and doesn't need prior handwriting recognition.\n\n\nKarina Korovai, Dmytro Zhelezniakov, and Olga Radyvonenko (Samsung R&D Institute Ukraine); Oleg Yakovchuk (Samsung R&D Institute Ukraine, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"); and Ivan Deriuga and Nataliya Sakhnenko (Samsung R&D Institute Ukraine)\n---------------------\nGaze and Graze: Illuminating Taiwanese Hand Puppet Character Display and Deconstructing Visual Engagement\n\nTaiwanese puppetry, deeply rooted in culture, gains significance in this study through gaze interaction. Using Tobii Nano and Unity, we reinterpret traditional art with eye-tracking for a more profound experience.\n\n\nYun-Ju Chen (National Taipei University of Business) and Tsuei-Ju Hsieh (National Tsing Hua University)\n---------------------\nLandmark Guided 4D Facial Expression Generation\n\nIn this paper, we propose a generative model that learns to synthesize 4D facial expressions from given landmarks. It is robust to changes across different identities.\n\n\nXin Lu and Zhengda Lu (University of Chinese Academy of Sciences), Yiqun Wang (Chongqing University), and Jun Xiao (University of Chinese Academy of Sciences)\n---------------------\n3D Lighter: Learning to Generate Emissive Textures\n\nWe generate emissive textures by learning luminous 3D models.\n\n\nYosuke Shinya, Kenichi Yoneji, and Akihiro Tsukada (DENSO CORPORATION) and Tatsuya Harada (The University of Tokyo, RIKEN)\n---------------------\nTowards Efficient Local 3D Conditioning\n\nWe propose an innovative weight-encoded, locally conditioned neural implicit representation, utilizing a neural network to approximate a grid of latent codes while sharing the decoder across the entire category. This approach significantly enhances reconstruction quality compared to global methods...\n\n\nDingxi Zhang (MIT CSAIL, University of Chinese Academy of Sciences) and Artem Lukoianov (MIT CSAIL)\n---------------------\nVisual Signatures of Music Mood\n\nVisualization of music as static images is rarely addressed.
In this poster, we propose visual signatures – static images generated using artificial intelligence to visualise the music mood.\n\n\nHanqin Wang and Alexei Sourin (Nanyang Technological University)\n---------------------\nDeep Albedo: A Machine Learning Approach to Real-Time Photo Realistic Human Skin Rendering and Editing Using Autoencoders\n\nWe demonstrate an efficient technique to model skin color changes of a human face due to aging and changing emotions by varying the spatially dependent biophysical properties of skin.\n\n\nJoel Johnson (University of British Columbia, Huawei); Kenneth Chau (University of British Columbia); and Wei Sen Loi, Abraham Beauferris, Swati Kanwal, and Yingqian Gu (Huawei)\n---------------------\nInteractive Relative Pose Estimation for 360° Indoor Panoramas through Wall-Wall Matching Selections\n\nAn open-source panoramic relative camera pose estimation method that works well for difficult wide-baseline problems by taking a hybrid approach that leverages neural network estimations and key user inputs.\n\n\nBoSheng Chen and ChiHan Peng (National Yang Ming Chiao Tung University)\n---------------------\nSomatic Music: Enhancing Musical Experiences through the Performer’s Embodiment\n\nThis study explores musicians' unique musicality using physical data, enhancing music appreciation through tactile stimulation and vibration, redefining music experiences.\n\n\nAoi Uyama, Youichi Kamiyama, Sohei Wakisaka, Arata Horie, Tatsuya Saito, and Kouta Minamizawa (Keio University Graduate School of Media Design)\n---------------------\nRecovering Detailed Neural Implicit Surfaces from Blurry Images\n\nWe propose a method to recover surface details from blurry images by transforming input features using a blur kernel and simulating motion blur through weighted averaging.\n\n\nZihui Xu and Yiqun Wang (Chongqing University) and Zhengda Lu and Jun Xiao (University of Chinese Academy of Sciences)\n---------------------\nThe Effect of Wearing Knee Supporters on the Applicable Gain of Redirected Walking\n\nWe investigated the effect of knee supporters on the applicable gain of redirected walking. The results indicate that knee supporters can influence the applicable gain.\n\n\nGaku Fukui, Takuto Nakamura, Keigo Matsumoto, Takuji Narumi, and Hideaki Kuzuoka (University of Tokyo)\n---------------------\nIgnis: Eulerian Fluid Simulation and Rendering at VR Frame Rates\n\nIgnis is a GPU Eulerian fluid solver which utilises an approximate shadowing technique and an adaptive dithering technique to simulate and render at VR resolutions and refresh rates.\n\n\nCharlie Shenton (RMIT University, CSIRO)\n---------------------\nMulti-Stage Manufacturing for Preoperative Medical Models with Overhanging Components\n\nWe propose a cost-effective, multi-stage hybrid manufacturing method that combines printing and molding to progressively solidify intricate medical models, and we have successfully produced a liver model with overhanging tumors.\n\n\nMingli Xiang and Zun Li (Beijing University of Technology), Lin Lu (Shandong University), and Lifang Wu (Beijing University of Technology)\n---------------------\nQuantifying display lag and its effects during Head-Mounted Display based Virtual Reality\n\nVirtual reality immersion relies heavily on scene fidelity and spatiotemporal consistency during dynamic human behaviour. However, head-mounted displays have constrained computational resources to prolong user experience.
We present a method for sub-frame rate lag estimation.\n\n\nPeter Wagner and Juno Kim (University of New South Wales, School of Optometry and Vision Science, Sensory Processes Research Laboratory); Robert S. Allison (Dept. of Electrical Engineering and Computer Science, York University); and Stephen Palmisano (School of Psychology, University of Wollongong)\n---------------------\nRule-of-Thirds or Centered? A study in preference in photo composition\n\nWe report an experiment to test the validity of the Rule of Thirds. Our participants overwhelmingly preferred a centered object to one positioned according to the Rule of Thirds.\n\n\nWeng Khuan Hoh, Fang Lue Zhang, and Neil A. Dodgson (Victoria University of Wellington)\n---------------------\nSCOOT: Self-supervised Centric Open-set Object Tracking\n\nWe propose a system that encompasses a self-supervised appearance model, a fusion module for combining textual and visual features, and an object association algorithm based on reconstruction and observation.\n\n\nWei Li, Weiliang Meng, Bowen Li, Jiguang Zhang, and Xiaopeng Zhang (Institute of Automation, Chinese Academy of Sciences)\n---------------------\nA remote training platform for learning physical skills using an AI powered virtual coach and a novel IoT sensing mat\n\nWe introduce a novel AIoT platform for remote martial arts training using a pressure-sensing mat, a virtual coach, and a serious game. User studies demonstrate its training effectiveness and adoption potential.\n\n\nKatia Bourahmoune, Karlos Ishac, and Marc Carmichael (University of Technology Sydney)\n---------------------\nUsability Evaluation of VR Shopping System not Imitating Real Stores\n\nIn this study, we investigated the usability of VR shopping systems that do not imitate real stores and created a user-friendly system on the basis of the results.\n\n\nIkumi Hisamatsu and Yuji Sakamoto (Hokkaido University)\n---------------------\nCrossing Narrative: Exploring the Possibilities of Crossing the Virtuality and Reality in Interactive Narrative Experiences\n\nWe introduce “Crossing Narrative”, an interactive narrative experience that seamlessly blends virtuality and reality by utilizing real-world views and bystanders. We discuss specific methods for designing cross-reality narrative experiences, focusing on three key aspects of cross-reality ...\n\n\nZixiao Liu (School of New Media Art and Design, Beihang University) and Shuo Yan and Xukun Shen (School of New Media Art and Design, Beihang University; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University)\n---------------------\nRoom to Room Mapping: Seamlessly Connecting Different Rooms\n\nWe propose a projection mapping technique designed to connect rooms in disparate locations virtually, creating a continuous, immersive space.\n\n\nNaoki Hashimoto and Yuki Inada (The University of Electro-Communications)\n---------------------\nEfficient and Accurate Physically Based Rendering of Periodic Multilayer Structures with Iridescence\n\nWe propose a method for rendering iridescence caused by periodic multilayer structures by employing Huxley's approach.
Our approach can compute multilayer interference efficiently and accurately.\n\n\nYoshiki Kaminaka, Toru Higaki, Bisser Raytchev, and Kazufumi Kaneda (Hiroshima University)\n---------------------\nLearning to Generate Wire Sculpture Art from 3D Models\n\nOur goal is to create a 3D wire sculpture that preserves the volume of the original 3D shape, given a user-specified template as input to the proposed curve generation network.\n\n\nHuiGuang Huang, Dong-Yi Wu, Thi-Ngoc-Hanh Le, and Po-Chih Chen (National Cheng-Kung University); Shih-Syun Lin (National Taiwan Ocean University); and Tong-Yee Lee (National Cheng-Kung University)\n---------------------\nAI-supported Nishijin-ori: connecting a text-to-image model to traditional Nishijin-ori textile production\n\nThis paper presents an AI-supported Nishijin-ori. We first generated pattern images using a fine-tuned text-to-image model and then produced traditional woven Japanese textiles, Nishijin-ori.\n\n\nAsahi Adachi (Sony Computer Science Laboratories - Kyoto, Nara Institute of Science and Technology); Lana Sinapayen (Sony Computer Science Laboratories - Kyoto, National Institute for Basic Biology); Hironori Fukuoka (Fukuoka Weaving Co., Ltd.); and Jun Rekimoto (Sony Computer Science Laboratories - Kyoto, The University of Tokyo)\n---------------------\nVector Gradient Stroke Stylized Neural Network Painting\n\nWe propose vectorization techniques with SVG gradient color paths to represent non-photorealistic rendering brush stroke raster images, reducing the overall number of paths, reducing vector file size, and facilitating image editing.\n\n\nJia-Shuan Lin and Tung-Ju Hsieh (National Taipei University of Technology)\n---------------------\nAvatars for Good Drinking: An Exploratory Study of The Effects of Avatar’s Body Shape on Beverage Perception\n\nIn a virtual environment, we studied how avatar body shape impacts beverage perception. Gradual body transitions improved body ownership, and larger avatars enhanced purchase intention.\n\n\nYusuke Koseki, Yusuke Arikawa, Kizashi Nakano, and Takuji Narumi (University of Tokyo)\n---------------------\nFlying Over Tourist Attractions: A Novel Augmented Reality Tourism System Using Miniature Dioramas\n\nA novel AR tourism system that leverages miniature dioramas to provide users with a unique and immersive experience that creates the sensation of soaring high above and exploring a tourist attraction.\n\n\nSuwon Lee, Sanghyeon Kim, and Seongwon Kim (Gyeongsang National University); Hyunwoo Cho (University of South Australia); and Sang-Min Choi (Gyeongsang National University)\n---------------------\nAuditory VR Generative System for Non-Experts to Reproduce Human Memories Through Natural Language Interactions\n\nWe propose an automatic auditory VR generative system driven by natural language input for VR exposure therapy.
It utilizes an LLM, an auditory dataset, and a spatial audio generator, demonstrating utility through physician evaluations.\n\n\nYuta Yamauchi (University of Tsukuba), Keiko Ino (National Center of Neurology and Psychiatry), and Keiichi Zempo (University of Tsukuba)\n---------------------\nExpression Omnibus: Expandable Facial Expression Dataset via Embedding Analysis and Synthesis\n\nWe demonstrate a method to expand a dataset of facial expressions by generating realistic faces based on our assessment of a controlled set of realistic faces and its embedding space.\n\n\nJoonho Park and Da Eun Kim (Giantstep) and Joo-Haeng Lee (Pebblous)\n---------------------\nAugmentation of Medical Preparation for Children by Using Projective and Tangible Interface\n\nThis research aims to create interactive experiences that alleviate the anxiety of pediatric patients and foster empathy within their family and medical community. We developed the medical preparation system through the integration of projective and tangible interfaces. Children can intuitively underst...\n\n\nMiki Monzen (Graduate School of Image Arts, Ritsumeikan University) and Shigenori Mochizuki and Toshikazu Ohshima (College of Image Arts and Sciences, Ritsumeikan University)\n---------------------\nText-driven Tree Modeling on L-System\n\nThis paper presents a text-driven approach for tree modeling through L-System, adopting an optimization technique with CLIP.\n\n\nYudai Ichimura (Hosei University) and Syuhei Sato (Hosei University, Prometech CG Research)\n---------------------\nFoodMorph: Changing Food Appearance Towards Less Unhealthy Food Intake\n\nThe VR system FoodMorph allows users to immerse themselves in inedible, visually simulated food textures, reducing their interest in and intake of unhealthy foods and promoting healthy eating.\n\n\nRuoxin Cui, Weijen Chen, Danyang Peng, Kouta Minamizawa, and Yun Suen Pai (Keio University Graduate School of Media Design)\n---------------------\nOwnDiffusion: A Design Pipeline Using Design Generative AI to preserve Sense Of Ownership\n\nOwnDiffusion is a design pipeline that utilizes generative AI to assist in the physical prototype ideation process for novice product designers and industrial design learners while preserving their sense of ownership. We envision this method as a solution for AI-assisted design, enabling designers to ...\n\n\nYaokun Wu (Keio University, Keio University Graduate School of Media Design) and Kouta Minamizawa and Yun Suen Pai (Keio Media Design)\n---------------------\nMeta Musicking: A Playground for Exploring Alternative Realities with Others in the XR Age\n\nA remote, multi-participant XR audiovisual art experience combining haptic, auditory, and visual elements. Participants can interact with the hand avatars of other remote participants through musical expression in their space.\n\n\nRyu Nakagawa, Masaya Furukawa, Ayano Yamanaka, and Maika Yamamoto (Nagoya City University)\n---------------------\nAn Examination of Text Shaking Correction Methods for AR Walking\n\nOne problem with walking in AR is reduced readability of displayed text. Head shaking causes the displayed text to shake. The screen coordinate system (SCS) or world coordinate system (WCS) is used for displaying text, with different effective distances.
We propose methods to correct text shaking by combi...\n\n\nMie Sato, Hiromu Koide, and Kei Kanari (Utsunomiya University)\n---------------------\nClosest Point Exterior Calculus\n\nWe combine the Closest Point Method with Discrete Exterior Calculus to obtain a geometry processing framework allowing implicit representation of general calculus expressions.\n\n\nMica Li, Michael Owens, Juheng Wu, Grace Yang, and Albert Chern (University of California San Diego)\n---------------------\nTowards a Psychophysically Plausible Simulation of Translucent Appearance\n\nUnderstanding visual perception of materials is critical for informing image-based approaches to real-time rendering. This poster presents a new cue to translucency that can be efficiently modeled using graphical rendering.\n\n\nTakehiro Nagai (Tokyo Institute of Technology, University of New South Wales Sydney); Hiroaki Kiyokawa (Saitama University); Stephen Palmisano (University of Wollongong); and Juno Kim (University of New South Wales Sydney)\n---------------------\nDigital Transformation of Ethnic Dance Heritage: A Multimodal Interactive Game to Balancing Instructional and Cultural Essence\n\nWe have employed a multimodal interactive approach to create an educational game for ethnic dances, thereby enhancing players' motion instruction and cultural experience in the process of dance.\n\n\nMingyang Su, Yun Xie, FeiFei Wu, Ke Fang, XiaoMei Nie, and Xiu Li (Tsinghua University)\n---------------------\nGeometry Aware Texturing\n\nGiven a mesh of the outfit and a text prompt, our method is capable of producing a high-quality diffuse texture in around 6 seconds running on a single A40 GPU.\n\n\nEvgeniia Cheskidova, Alexander Arganaidi, Daniel-Ionut Rancea, and Olaf Haag (Ready Player Me)\n---------------------\nConversation Echo: Communication in virtual environments that reflects conversation contents\n\nThis research proposes "Conversation Echo," a system that reflects the topics of conversation in the VR environment in real time by using AI to extract topics and generate panoramic images.\n\n\nShun Hachisu, Sohei Wakisaka, and Kouta Minamizawa (Keio University Graduate School of Media Design)\n---------------------\nAerial Display Method Using a Flying Screen with an IR Marker and Long Range Dynamic Projection Mapping\n\nWe have studied a projection-based aerial display method. In this poster, we propose a new IR marker for precise screen tracking and a long-range projection principle using a high-brightness projector.\n\n\nYuito Hirohashi and Hiromasa Oku (Gunma University)\n\nRegistration Category: Full Access, Business & Innovation Symposium Access, Exhibit & Experience Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor
END:VEVENT
END:VCALENDAR