This course is an introduction to online learning: learning methods for settings where data is revealed sequentially, as it is collected, rather than given once and for all as a sample. After a quick introduction to the must-have methods (halving, online gradient descent), we will study aggregation methods. The basic idea is, given several predictors, to make them vote by assigning each a weight, rather than selecting a single one. These methods achieve optimal results under very general conditions. In a second step, we will return to the more classical "batch" or "offline" learning framework and see that the aggregation methods above can also be used in that setting.
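To make the "weighted vote" idea concrete, here is a minimal sketch of an exponentially weighted average forecaster, the kind of aggregation method the course studies. The function name, the choice of squared loss, and the learning rate `eta` are illustrative assumptions, not the course's exact formulation.

```python
import math

def exp_weights(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster (a minimal sketch).

    expert_preds: list over rounds; each round is a list of expert predictions.
    outcomes: the true value revealed at each round.
    Squared loss and the learning rate eta are illustrative choices.
    """
    n = len(expert_preds[0])
    cum_loss = [0.0] * n  # cumulative loss of each expert
    forecasts = []
    for preds, y in zip(expert_preds, outcomes):
        # weights proportional to exp(-eta * cumulative loss)
        w = [math.exp(-eta * loss) for loss in cum_loss]
        total = sum(w)
        # aggregated prediction: weighted vote of the experts
        forecasts.append(sum(wi * p for wi, p in zip(w, preds)) / total)
        # reveal the outcome, then update each expert's cumulative loss
        for i, p in enumerate(preds):
            cum_loss[i] += (p - y) ** 2
    return forecasts
```

With two experts, one always wrong and one always right, the aggregate starts at the midpoint and drifts toward the good expert as losses accumulate.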
Prerequisites: basic concepts of linear algebra and convex analysis, and a reasonable knowledge of probability theory.
At the end of this course, students should be able to:
- understand the different models and contexts of online prediction;
- understand and know how to implement the online gradient descent algorithm;
- understand how exponential-weight aggregation works;
- understand a research paper on online learning.
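The online gradient algorithm mentioned above can be sketched in a few lines. This is an illustrative implementation under simplifying assumptions: the feasible set is handled by a generic `proj` map (identity by default), and the step size `eta` is fixed rather than tuned as in the theory.

```python
def online_gradient_descent(grads, x0, eta=0.1, proj=lambda x: x):
    """Online gradient descent (a minimal sketch, hypothetical names).

    grads: for each round t, a function returning the gradient of the
           round-t loss at the current point (revealed after playing).
    x0:    starting point.
    proj:  projection onto the feasible set (identity by default).
    """
    x = x0
    iterates = [x]
    for grad in grads:
        x = proj(x - eta * grad(x))  # gradient step, then project
        iterates.append(x)
    return iterates
```

For example, if every round's loss is f_t(x) = (x - 1)^2, the gradient is 2(x - 1) and the iterates move from x0 toward the minimizer 1.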
References:
- Shalev-Shwartz, S. (2011). Online Learning and Online Convex Optimization. Foundations and Trends in Machine Learning, vol. 4, pages 107-194.
- Tsybakov, A. (2020). Online Learning and Aggregation. Lecture notes (detailed lecture notes are provided).