BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070311Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T174500
DTEND;TZID=Australia/Melbourne:20231213T183700
UID:siggraphasia_SIGGRAPH Asia 2023_sess147@linklings.com
SUMMARY:Technoscape
DESCRIPTION:Technical Communications, Technical Papers\n\nWhat is the Best Automated Metric for Text to Motion Generation?\n\nThere is growing interest in generating skeleton-based human motions from natural language descriptions. While most efforts have focused on developing better neural architectures for this task, there has been no significant work on determining the proper evaluation metric. Human evaluation is the ul...\n\n\nJordan Voas, Yili Wang, Qixing Huang, and Raymond Mooney (University of Texas at Austin)\n---------------------\nTraining Orchestral Conductors in Beating Time\n\nA prototype to train orchestral conductors in how to beat time. The key detection points are maxima in acceleration. We successfully tested with five conductors with dramatically different styles.\n\n\nNeil A. Dodgson and Kathleen Griffin (Victoria University of Wellington)\n---------------------\nA Motion-Simulation Platform to Generate Synthetic Motion Data for Computer Vision Tasks\n\nOur Motion-Simulation Platform runs in a game engine, extracting RGB imagery and intrinsic motion data, benefiting motion-related computer vision tasks. Users and AI bots can navigate to collect motion data.\n\n\nAndrew Chalmers (Victoria University of Wellington, Computational Media Innovation Centre); Junhong Zhao (Victoria University of Wellington); Weng Khuan Hoh, James Drown, and Simon Finnie (Victoria University of Wellington, Computational Media Innovation Centre); Richard Yao, James Lin, James Wilmott, and Arindam Dey (Meta Platforms, Inc.); Mark Billinghurst (University of Auckland); and Taehyun Rhee (Victoria University of Wellington, Computational Media Innovation Centre)\n---------------------\nVR-NeRF: High-Fidelity Virtualized Walkable Spaces\n\nWe present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to densely capture walkable spaces in high fidelity with multi-...\n\n\nLinning Xu (The Chinese University of Hong Kong, Meta); Vasu Agrawal, William Laney, Tony Garcia, Aayush Bansal, Changil Kim, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, and Aljaž Božič (Meta); Dahua Lin (The Chinese University of Hong Kong); and Michael Zollhoefer and Christian Richardt (Meta)\n---------------------\nInteractive Material Annotation on 3D Scanned Models Leveraging Color-Material Correlation\n\nThis paper proposes an interactive system for efficient material annotation on 3D scanned models. Focusing on the correlation between color and material distribution, we implemented a two-step annotation workflow.\n\n\nWataru Kawabe (University of Tokyo), Taisuke Hashimoto and Fabrice Matulic (Preferred Networks), Takeo Igarashi (University of Tokyo), and Keita Higuchi (Preferred Networks)\n---------------------\nFootstep Detection for Film Sound Production\n\nA method for footstep detection with good generalization and high accuracy is proposed in this paper. Based on it, a footstep detection system was designed for film sound production.\n\n\nXiaojuan Gu, JunLiang Chen, Bo Li, and Jun Chen (Beijing Film Academy)\n\nRegistration Category: Full Access\n\nSession Chair: Sheng Li (Peking University)
END:VEVENT
END:VCALENDAR