BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721001T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19730401T030000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163643Z
LOCATION:Meeting Room C4.9+C4.10\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T132100
DTEND;TZID=Australia/Melbourne:20231213T133600
UID:siggraphasia_SIGGRAPH Asia 2023_sess163_papers_221@linklings.com
SUMMARY:Commonsense Knowledge-Driven Joint Reasoning Approach for Object R
 etrieval in Virtual Reality
DESCRIPTION:Haiyan Jiang (Beijing Institute of Technology; National Key La
 boratory of General Artificial Intelligence, Beijing Institute for General
  Artificial Intelligence (BIGAI)); Dongdong Weng (Beijing Institute of Tech
 nology); Xiaonuo Dongye (Beijing Institute of Technology; National Key Lab
 oratory of General Artificial Intelligence, Beijing Institute for General 
 Artificial Intelligence (BIGAI)); Le Luo (Beijing Institute of Technology)
 ; and Zhenliang Zhang (National Key Laboratory of General Artificial Intel
 ligence, Beijing Institute for General Artificial Intelligence (BIGAI))\n\
 nOut-of-reach object retrieval is an important task in virtual reality
  (VR). The gesture-based approach, one of the most commonly used,
  enables bare-hand, eyes-free, and direct retrieval by using assigned
  gestures. However, it is difficult to accurately retrieve an object
  from among many objects using gestures, due to the one-to-one mapping
  metaphor, the limitation of finger poses, and memory burdens. Previous
  work has focused on gesture design, ignoring the context. In fact,
  there is a consensus that objects and contexts are related. This
  indicates that the object to be retrieved is related to the context,
  including the scene and the objects users interact with. Therefore, we
  propose a commonsense knowledge-driven joint reasoning approach for
  object retrieval, in which the human grasping gesture and the context
  are modeled by an And-Or graph (AOG). This approach enables users to
  accurately retrieve objects from many candidate objects by using
  natural grasping gestures, drawing on their experience of grasping
  physical objects. The experimental results show that our proposed
  model improves retrieval accuracy. Finally, we propose an object
  retrieval system based on this approach, and two user studies
  demonstrate that the system enables efficient object retrieval in
  virtual environments.\n\nRegistration Category: Full
  Access\n\nSession Chair: Chek Tien Tan (Singapore Institute of
  Technology, Centre for Immersification)\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_221&sess=sess163
END:VEVENT
END:VCALENDAR
