Supervised learning in function space

Samuel Lanthaler, Caltech
11/9/2022, 4:10–5:00 PM, 939 Evans

Neural networks have proven to be effective approximators of high-dimensional functions in a wide variety of applications. In scientific applications, the goal is often to approximate an underlying operator, which defines a mapping between infinite-dimensional spaces of input and output functions. Extensions of neural networks to this infinite-dimensional setting have been proposed in recent years, giving rise to the rapidly emerging field of operator learning. Despite their practical success, our theoretical understanding of these approaches is still in its infancy. In this talk, I will review some of the proposed operator learning architectures (deep operator networks/neural operators), and present recent results on their approximation theory and sample complexity. This work identifies basic mechanisms by which neural operators can avoid a curse of dimensionality in the underlying (very high- or even infinite-dimensional) approximation task, thus providing a first rationale for their practical success on concrete operators of interest. The analysis also reveals fundamental limitations of some of these approaches.
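To make the notion of an operator learning architecture concrete, the sketch below shows a minimal DeepONet-style model (not the speaker's implementation; all layer sizes and names are illustrative assumptions): a branch net encodes an input function u sampled at m fixed sensor points, a trunk net encodes a query location y, and the operator output G(u)(y) is approximated by their inner product.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepONet sketch: branch net for sampled input functions,
    trunk net for query locations, output via inner product."""
    def __init__(self, m=100, p=64):
        super().__init__()
        # branch: sensor values u(x_1), ..., u(x_m) -> p coefficients
        self.branch = nn.Sequential(nn.Linear(m, 128), nn.Tanh(), nn.Linear(128, p))
        # trunk: query point y (here 1-D) -> p learned basis evaluations
        self.trunk = nn.Sequential(nn.Linear(1, 128), nn.Tanh(), nn.Linear(128, p))

    def forward(self, u_sensors, y):
        # u_sensors: (batch, m) sampled input functions; y: (batch, 1) query points
        b = self.branch(u_sensors)                 # (batch, p)
        t = self.trunk(y)                          # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True)   # (batch, 1), approximates G(u)(y)

# usage with randomly generated data of the assumed shapes
model = DeepONet(m=100, p=64)
u = torch.randn(8, 100)   # 8 input functions, each sampled at 100 sensors
y = torch.rand(8, 1)      # one query location per sample
print(model(u, y).shape)  # torch.Size([8, 1])
```

The branch/trunk factorization is what lets the model accept a whole input function (via its sensor values) and evaluate the output function at arbitrary query points, which is the sense in which it approximates a mapping between function spaces rather than between fixed finite-dimensional vectors.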