BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070240Z
LOCATION:Darling Harbour Theatre\, Level 2 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231212T093000
DTEND;TZID=Australia/Melbourne:20231212T124500
UID:siggraphasia_SIGGRAPH Asia 2023_sess209_papers_221@linklings.com
SUMMARY:Commonsense Knowledge-Driven Joint Reasoning Approach for Object Retrieval in Virtual Reality
DESCRIPTION:Technical Papers\n\nHaiyan Jiang (Beijing Institute of Technology; National Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence (BIGAI)); Dondong Weng (Beijing Institute of Technology); Xiaonuo Dongye (Beijing Institute of Technology; National Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence (BIGAI)); Le Luo (Beijing Institute of Technology); and Zhenliang Zhang (National Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence (BIGAI))\n\nOut-of-reach object retrieval is an important task in virtual reality (VR). The gesture-based approach, one of the most commonly used, enables bare-hand, eyes-free, and direct retrieval via assigned gestures. However, accurately retrieving an object from among many objects with gestures is difficult due to the one-to-one mapping metaphor, the limitations of finger poses, and the memory burden. Previous work has focused on gesture design, ignoring the context. In fact, there is a consensus that objects and contexts are related. 
This suggests that the object to be retrieved is related to the context, including the scene and the objects users interact with. Therefore, we propose a commonsense knowledge-driven joint reasoning approach for object retrieval, in which the human grasping gesture and the context are modeled by an And-Or graph (AOG). The approach enables users to accurately retrieve objects from among many candidates using natural grasping gestures, drawing on their experience of grasping physical objects. Experimental results show that the proposed model improves retrieval accuracy. Finally, we present an object retrieval system based on this approach, and two user studies demonstrate that it enables efficient object retrieval in virtual environments.\n\nRegistration Category: Full Access, Enhanced Access, Trade Exhibitor, Experience Hall Exhibitor
URL:https://asia.siggraph.org/2023/full-program?id=papers_221&sess=sess209
END:VEVENT
END:VCALENDAR