BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:19721003T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19721003T020000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163644Z
LOCATION:Meeting Room C4.8\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T154500
DTEND;TZID=Australia/Melbourne:20231213T155500
UID:siggraphasia_SIGGRAPH Asia 2023_sess145_papers_194@linklings.com
SUMMARY:Neural Cache for Monte Carlo Partial Differential Equation Solver
DESCRIPTION:Zilu Li (Cornell University); Guandao Yang (Cornell University
 , Stanford University); and Xi Deng, Christopher De Sa, Bharath Hariharan,
  and Steve Marschner (Cornell University)\n\nThis paper presents a method 
 that uses neural networks as a caching mechanism to reduce the variance of
  Monte Carlo Partial Differential Equation solvers, such as the Walk-on-Sp
 heres algorithm. While these Monte Carlo PDE solvers have the merits of be
 ing unbiased and discretization-free, their high variance often hinders re
 al-time applications. On the other hand, neural networks can approximate t
 he PDE solution, and evaluating these networks at inference time can be ve
 ry fast. However, neural-network-based solutions may suffer from convergen
 ce difficulties and high bias. Our hybrid system aims to combine these two
  potentially complementary solutions by training a neural field to approxi
 mate the PDE solution using supervision from a WoS solver. This neural fie
 ld is then used as a cache in the WoS solver to reduce variance during inf
 erence. We demonstrate that our neural field training procedure is better 
 than the commonly used self-supervised objectives in the literature. We al
 so show that our hybrid solver exhibits lower variance than WoS with the s
 ame computational budget: it is significantly better for small compute bud
 gets and provides smaller improvements for larger budgets, reaching the sa
 me performance as WoS in the limit.\n\nRegistration Category: Full Access\
 n\nSession Chair: Young J. Kim (Ewha Womans University)\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_194&sess=sess145
END:VEVENT
END:VCALENDAR
