BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163633Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_210@linklings.com
SUMMARY:Learning the Geodesic Embedding with Graph Neural Networks
DESCRIPTION:Bo Pang (Peking University); Zhongtian Zheng (Peking University); Guoping Wang (Peking University); and Peng-Shuai Wang (Peking University, Wangxuan Institute of Computer Technology)\n\nWe present GeGnn, a learning-based method for computing the approximate geodesic distance between two arbitrary points on discrete polyhedral surfaces with constant time complexity after fast precomputation. Previous relevant methods either focus on computing the geodesic distance between a single source and all destinations, which has at least linear complexity, or require long precomputation times. Our key idea is to train a graph neural network to embed an input mesh into a high-dimensional embedding space and compute the geodesic distance between a pair of points using the corresponding embedding vectors and a lightweight decoding function. To facilitate the learning of the embedding, we propose novel graph convolution and graph pooling modules that incorporate local geodesic information and are verified to be much more effective than previous designs. After training, our method requires only one forward pass of the network per mesh as precomputation. Then, we can compute the geodesic distance between a pair of points using our decoding function, which requires only a few matrix multiplications and can be massively parallelized on GPUs. We verify the efficiency and effectiveness of our method on ShapeNet and demonstrate that our method is faster than existing methods by orders of magnitude while achieving comparable or better accuracy. Additionally, our method exhibits robustness on noisy and incomplete meshes and strong generalization ability on out-of-distribution meshes.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_210&sess=sess209
END:VEVENT
END:VCALENDAR