BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070241Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_210@linklings.com
SUMMARY:Learning the Geodesic Embedding with Graph Neural Networks
DESCRIPTION:Technical Papers\n\nBo Pang (Peking University); Zhongtian Zheng (Peking University); Guoping Wang (Peking University); and Peng-Shuai Wang (Peking University, Wangxuan Institute of Computer Technology)\n\nWe present GeGnn, a learning-based method for computing the approximate geodesic distance between two arbitrary points on discrete polyhedral surfaces with constant time complexity after fast precomputation. Previous relevant methods either focus on computing the geodesic distance between a single source and all destinations, which has at least linear complexity, or require long precomputation time. Our key idea is to train a graph neural network to embed an input mesh into a high-dimensional embedding space and compute the geodesic distance between a pair of points using the corresponding embedding vectors and a lightweight decoding function. To facilitate the learning of the embedding, we propose novel graph convolution and graph pooling modules that incorporate local geodesic information and are verified to be much more effective than previous designs. After training, our method requires only one forward pass of the network per mesh as precomputation. Then, we can compute the geodesic distance between a pair of points using our decoding function, which requires only several matrix multiplications and can be massively parallelized on GPUs. We verify the efficiency and effectiveness of our method on ShapeNet and demonstrate that our method is faster than existing methods by orders of magnitude while achieving comparable or better accuracy. Additionally, our method exhibits robustness on noisy and incomplete meshes and strong generalization ability on out-of-distribution meshes.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_210&sess=sess209
END:VEVENT
END:VCALENDAR