24 June 2014
San Francesco - Cappella Guinigi
Dealing with big data has emphasized the need for machine learning algorithms that are predictive but at the same time efficient and scalable. Investigating the interplay between statistics and computation has indeed emerged as an exciting new avenue for both theoretical and practical studies. A key challenge in this context is the problem of performing provably efficient and adaptive model selection (determining the best model complexity for the data at hand).
Early stopping is one of the most appealing heuristics for this problem, since the computational resources required for learning are directly linked to the desired prediction properties. Regularization is achieved by passing over the data multiple times and choosing the number of passes (epochs) that gives the best prediction performance. Despite being widely used in practice, the theoretical properties of early stopping have long been poorly understood.
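As an illustration, here is a minimal sketch of early stopping as regularization, assuming batch gradient descent on a synthetic least-squares problem; the data and variable names below are hypothetical and not taken from the talk. Each full pass over the training set is one epoch, and the epoch with the lowest held-out error is kept as the regularized solution.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-conditioned regression problem (illustrative data only).
n, d = 200, 50
X = rng.standard_normal((n, d)) @ np.diag(1.0 / np.arange(1, d + 1))
y = X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)

# Hold out a validation split to select the stopping epoch.
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

w = np.zeros(d)  # starting from zero: early iterates are "simple" models
step = X_tr.shape[0] / np.linalg.norm(X_tr, 2) ** 2  # 1/L for this loss

best_err, best_epoch, best_w = np.inf, 0, w.copy()
for epoch in range(1, 501):
    # One full (batch) gradient pass over the training data = one epoch.
    w = w - step * X_tr.T @ (X_tr @ w - y_tr) / X_tr.shape[0]
    val_err = np.mean((X_val @ w - y_val) ** 2)
    if val_err < best_err:  # keep the epoch with the best held-out error
        best_err, best_epoch, best_w = val_err, epoch, w.copy()

print(f"early stopping selects epoch {best_epoch} (val MSE {best_err:.4f})")

Running more epochs keeps decreasing the training error, but past the selected epoch the validation error rises as the iterates start fitting the noise: the number of passes plays the role of the regularization parameter.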
In this talk we will discuss the connection between learning, stability, and ill-posedness to illustrate recent results that provide a theoretical foundation for early stopping in several iterative learning schemes, including batch and online (incremental) approaches.
Speaker:
Rosasco, Lorenzo
Units:
DYSCO