BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240214T070245Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T154500
DTEND;TZID=Australia/Melbourne:20231213T155500
UID:siggraphasia_SIGGRAPH Asia 2023_sess145_papers_194@linklings.com
SUMMARY:Neural Cache for Monte Carlo Partial Differential Equation Solver
DESCRIPTION:Technical Papers\n\nZilu Li (Cornell University); Guandao Yang (Cornell University, Stanford University); and Xi Deng, Christopher De Sa, Bharath Hariharan, and Steve Marschner (Cornell University)\n\nThis paper presents a method that uses neural networks as a caching mechanism to reduce the variance of Monte Carlo Partial Differential Equation solvers, such as the Walk-on-Spheres algorithm. While these Monte Carlo PDE solvers have the merits of being unbiased and discretization-free, their high variance often hinders real-time applications. On the other hand, neural networks can approximate the PDE solution, and evaluating these networks at inference time can be very fast. However, neural-network-based solutions may suffer from convergence difficulties and high bias. Our hybrid system aims to combine these two potentially complementary solutions by training a neural field to approximate the PDE solution using supervision from a WoS solver. This neural field is then used as a cache in the WoS solver to reduce variance during inference. We demonstrate that our neural field training procedure is better than the commonly used self-supervised objectives in the literature. We also show that our hybrid solver exhibits lower variance than WoS with the same computational budget: it is significantly better for small compute budgets and provides smaller improvements for larger budgets, reaching the same performance as WoS in the limit.\n\nRegistration Category: Full Access\n\nSession Chair: Young J. Kim (Ewha Womans University)
URL:https://asia.siggraph.org/2023/full-program?id=papers_194&sess=sess145
END:VEVENT
END:VCALENDAR
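
The DESCRIPTION above refers to the Walk-on-Spheres (WoS) estimator and to querying a cached approximation of the PDE solution partway through a walk. The Python sketch below illustrates that general idea only; it is not the authors' implementation. The unit-disk domain, the x*y boundary condition, the epsilon shell, and the "truncate the walk and read the cache" rule are all illustrative assumptions, and the cache is a hypothetical stand-in for the trained neural field described in the paper.

import numpy as np

RADIUS = 1.0      # illustrative domain: unit disk centered at the origin
EPSILON = 1e-3    # shell thickness at which a walk is terminated

def boundary_value(p):
    # Example Dirichlet data g(x) on the circle; x*y is harmonic, so the
    # exact interior solution is also x*y, which makes checking easy.
    return p[0] * p[1]

def distance_to_boundary(p):
    # Distance from p to the boundary of the unit disk.
    return RADIUS - np.linalg.norm(p)

def wos_estimate(x, rng, cache=None, max_cache_steps=None):
    # One WoS sample of u(x) for the Laplace equation. If a cache is given,
    # the walk is cut off after max_cache_steps steps and the cached value
    # is used instead of walking to the boundary (one plausible scheme,
    # not necessarily the paper's).
    p = np.array(x, dtype=float)
    step = 0
    while True:
        d = distance_to_boundary(p)
        if d < EPSILON:
            q = p * (RADIUS / np.linalg.norm(p))  # project to the boundary
            return boundary_value(q)
        if cache is not None and max_cache_steps is not None and step >= max_cache_steps:
            return cache(p)                        # read the cached approximation
        theta = rng.uniform(0.0, 2.0 * np.pi)      # jump to a uniform point on the
        p = p + d * np.array([np.cos(theta), np.sin(theta)])  # largest empty circle
        step += 1

def solve(x, n_samples=1000, cache=None, max_cache_steps=None, seed=0):
    # Monte Carlo average of independent WoS samples.
    rng = np.random.default_rng(seed)
    samples = [wos_estimate(x, rng, cache, max_cache_steps) for _ in range(n_samples)]
    return float(np.mean(samples))

if __name__ == "__main__":
    x = (0.3, 0.2)
    print("plain WoS:", solve(x))                  # should be close to 0.3 * 0.2 = 0.06
    # Hypothetical cache: the exact solution x*y, standing in for a neural field.
    print("WoS + cache:", solve(x, cache=lambda p: p[0] * p[1], max_cache_steps=2))

Because the stand-in cache is exact, the cached run here only shows that early termination leaves the estimate consistent; in the paper's setting the cache is an approximate neural field, and the claimed benefit is lower variance at equal compute rather than exactness.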