BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070250Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231215T160000
DTEND;TZID=Australia/Melbourne:20231215T161000
UID:siggraphasia_SIGGRAPH Asia 2023_sess139_papers_240@linklings.com
SUMMARY:A Locality-based Neural Solver for Optical Motion Capture
DESCRIPTION:Technical Papers\n\nXiaoyu Pan and Bowen Zheng (State Key Laboratory of CAD & CG, Zhejiang University; ZJU-Tencent Game and Intelligent Graphics Innovation Technology Joint Lab); Xinwei Jiang, Guanglong Xu, Xianli Gu, and Jingxiang Li (Tencent Games Digital Content Technology Center); Qilong Kou (Tencent Technology (Shenzhen) Co., LTD); He Wang (University College London (UCL)); Tianjia Shao and Kun Zhou (State Key Laboratory of CAD & CG, Zhejiang University); and Xiaogang Jin (State Key Laboratory of CAD & CG, Zhejiang University; ZJU-Tencent Game and Intelligent Graphics Innovation Technology Joint Lab)\n\nWe present a novel locality-based learning method for cleaning and solving optical motion capture data. Given noisy marker data, we propose a new heterogeneous graph neural network which treats markers and joints as different types of nodes, and uses graph convolution operations to extract the local features of markers and joints and transform them into clean motions. To deal with anomalous markers (e.g., missing or with large tracking errors), the key insight is that a marker's motion shows strong correlations with the motions of its immediate neighboring markers but less so with other markers, a.k.a. locality, which enables us to fill in missing markers (e.g., due to occlusion). Additionally, we identify marker outliers caused by tracking errors by investigating their acceleration profiles. Finally, we propose a training regime based on representation learning and data augmentation, training the model on data with masking. The masking schemes aim to mimic the missing and noisy markers often observed in real data. We show that our method achieves high accuracy on multiple metrics across various datasets. Extensive comparisons show that our method outperforms state-of-the-art methods in prediction accuracy of occluded marker position by approximately 20%, which leads to a further error reduction on the reconstructed joint rotations and positions by 30%. The code and data for this paper are available at github.com/localmocap/LocalMoCap.\n\nRegistration Category: Full Access\n\nSession Chair: Yuting Ye (Reality Labs Research, Meta)
URL:https://asia.siggraph.org/2023/full-program?id=papers_240&sess=sess139
END:VEVENT
END:VCALENDAR