Overparametrization in machine learning: insights from linear models

Wednesday, February 1, 2023 - 14:30 (GMT+1)

Andrea Montanari

Department of Electrical Engineering,

Department of Statistics,

and (by courtesy) Department of Mathematics

Stanford University

YouTube Link

Facebook

Slides


Deep learning models are often trained in a regime that is forbidden by classical statistical learning theory: the model complexity is often larger than the sample size, and the training error does not concentrate around the test error. In fact, the model complexity can be so large that the network interpolates noisy training data. Despite this, it behaves well on fresh test data, a phenomenon that has been dubbed 'benign overfitting'. I will review recent progress towards understanding and characterizing this phenomenon in linear models. [Based on joint work with Chen Cheng]
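The interpolation regime described in the abstract is easy to reproduce numerically. Below is a minimal NumPy sketch (not taken from the talk) of the minimum-l2-norm least-squares interpolator with dimension p much larger than sample size n: it fits noisy training data exactly, yet its error on fresh test data remains bounded. The data model (isotropic Gaussian features, unit-norm signal) and all parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n, p = 100, 2000          # sample size n much smaller than dimension p
sigma = 0.5               # noise standard deviation (assumed value)

# Ground-truth linear model with unit-norm coefficient vector.
beta = rng.normal(size=p)
beta /= np.linalg.norm(beta)

# Noisy training data: y = X beta + noise.
X = rng.normal(size=(n, p))
y = X @ beta + sigma * rng.normal(size=n)

# Minimum-l2-norm interpolator: beta_hat = X^+ y via the pseudoinverse.
beta_hat = np.linalg.pinv(X) @ y

# Training error is numerically zero: the model interpolates the noise.
train_mse = np.mean((X @ beta_hat - y) ** 2)

# Error on fresh test data from the same distribution.
X_test = rng.normal(size=(1000, p))
y_test = X_test @ beta + sigma * rng.normal(size=1000)
test_mse = np.mean((X_test @ beta_hat - y_test) ** 2)

print(f"train MSE: {train_mse:.2e}")   # ~ 0 (interpolation)
print(f"test  MSE: {test_mse:.3f}")    # finite, despite fitting the noise

With isotropic features the test error stays controlled rather than diverging, but it need not approach the optimum; the talk's setting concerns when interpolation is truly benign, which in linear models depends on the covariance structure of the features.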