BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Australia/Melbourne
X-LIC-LOCATION:Australia/Melbourne
BEGIN:DAYLIGHT
TZOFFSETFROM:+1000
TZOFFSETTO:+1100
TZNAME:AEDT
DTSTART:20081005T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=1SU
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20080406T030000
TZOFFSETFROM:+1100
TZOFFSETTO:+1000
TZNAME:AEST
RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20260114T163706Z
LOCATION:Meeting Room C4.11\, Level 4 (Convention Centre)
DTSTART;TZID=Australia/Melbourne:20231213T140000
DTEND;TZID=Australia/Melbourne:20231213T141500
UID:siggraphasia_SIGGRAPH Asia 2023_sess128_papers_321@linklings.com
SUMMARY:Neural Field Convolutions by Repeated Differentiation
DESCRIPTION:Ntumba Elie Nsampi, Adarsh Djeacoumar, and Hans-Peter Seidel (
 Max-Planck-Institut für Informatik); Tobias Ritschel (University College L
 ondon (UCL)); and Thomas Leimkühler (Max-Planck-Institut für Informatik)\n
 \nNeural fields are evolving towards a general-purpose continuous represen
 tation for visual computing. Yet, despite their numerous appealing propert
 ies, they are hardly amenable to signal processing. As a remedy, we presen
 t a method to perform general continuous convolutions with general continu
 ous signals such as neural fields. Observing that piecewise polynomial ker
 nels reduce to a sparse set of Dirac deltas after repeated differentiation
 , we leverage convolution identities and train a repeated integral field t
 o efficiently execute large-scale convolutions. We demonstrate our approac
 h on a variety of data modalities and spatially-varying kernels.\n\nRegist
 ration Category: Full Access\n\nSession Chair: Jianfei Cai (Monash Univers
 ity)\n\n
URL:https://asia.siggraph.org/2023/full-program?id=papers_321&sess=sess128
END:VEVENT
END:VCALENDAR
