Data-driven approaches based on machine learning provide a new way to solve modeling problems in science and engineering. In stark contrast to classical practice, algorithms are used to select the most predictive models from hugely overparameterized classes. How to design such algorithms, and how to assess their quality, is therefore of paramount importance. Heuristic approaches are typically inefficient and brittle, and principled guidelines are much needed. In this talk, I will review classical as well as new ideas, emphasizing the role of regularization theory in the light of modern large-scale machine learning. In particular, I will discuss how a classical approach, namely iterative regularization, has been rediscovered and brought to new life under the name of implicit regularization.
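As a minimal illustration of the idea behind iterative (implicit) regularization, the sketch below runs gradient descent from zero on a least-squares problem: the number of iterations plays the role of an inverse regularization parameter, and stopping early implicitly constrains the norm of the solution. The data and dimensions here are purely hypothetical, chosen only to make the effect visible; the talk's actual examples may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-conditioned least-squares problem (for illustration only).
n, d = 50, 30
X = rng.normal(size=(n, d)) @ np.diag(np.linspace(1.0, 1e-3, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)

# Gradient descent on the least-squares loss, started at zero.
# The iteration count acts as an (inverse) regularization parameter:
# early iterates have small norm, later iterates approach the
# least-squares solution.
step = 1.0 / np.linalg.norm(X, 2) ** 2  # step size ensuring convergence
w = np.zeros(d)
norms = []
for t in range(1000):
    w -= step * X.T @ (X @ w - y)
    norms.append(np.linalg.norm(w))

# Along each singular direction the iterate grows monotonically toward
# the least-squares coefficient, so the iterate norm increases with t
# and stays below the norm of the unregularized solution.
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
print(norms[9], norms[-1], np.linalg.norm(w_ls))
```

Stopping the iteration at a well-chosen time thus trades data fit against solution norm, much like tuning the penalty in Tikhonov (ridge) regularization.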