The objective of this course is to familiarize students with convex optimization and its most popular algorithms. We will study convex optimization methods by illustrating them with examples borrowed from other disciplines, such as statistical learning and economics. Our approach will be theoretical: we will prove convergence results for the algorithms and reflect on their optimality. We will also try to understand the advantages and limitations of the methods discussed.
1. Background
a. Convex sets, convex functions
b. Linear programming
c. Duality, Lagrange multipliers, Karush-Kuhn-Tucker (KKT) conditions
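As a concrete preview of the KKT conditions, here is a minimal sketch in Python checking stationarity, feasibility, and complementary slackness on a toy problem (the problem and all names below are illustrative, not part of the course material): minimize x² subject to x ≥ 1, whose solution is x* = 1 with multiplier λ* = 2.

```python
# KKT check for: minimize f(x) = x**2  subject to  g(x) = 1 - x <= 0.
# Lagrangian: L(x, lam) = x**2 + lam * (1 - x).
# Candidate optimum: x_star = 1 with multiplier lam_star = 2.

x_star, lam_star = 1.0, 2.0

def grad_f(x):   # gradient of the objective
    return 2.0 * x

def g(x):        # inequality constraint, g(x) <= 0
    return 1.0 - x

def grad_g(x):   # gradient of the constraint
    return -1.0

# Stationarity: grad_f + lam * grad_g vanishes at the optimum.
stationarity = grad_f(x_star) + lam_star * grad_g(x_star)

assert abs(stationarity) < 1e-12       # stationarity
assert g(x_star) <= 1e-12              # primal feasibility
assert lam_star >= 0.0                 # dual feasibility
assert abs(lam_star * g(x_star)) < 1e-12  # complementary slackness
```

The four assertions mirror the four KKT conditions one by one, which is how the conditions are usually verified by hand on small examples.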
2. General ideas: Algorithms, oracles and complexity
a. Some examples: Why is convexity so important?
b. Notions of oracle and complexity of an algorithm
c. Complexity: How to obtain upper and lower bounds
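To make the oracle model concrete, one can sketch a first-order oracle as a function returning (f(x), f′(x)) and count how many times an algorithm queries it; the oracle complexity of the run is then just the query count. The function and names below are illustrative assumptions.

```python
# A first-order oracle for f(x) = (x - 3)**2, wrapped so each call is counted.
# In the oracle model, an algorithm's cost is the number of queries it makes.

calls = 0

def oracle(x):
    """Return (f(x), f'(x)) and record the query."""
    global calls
    calls += 1
    return (x - 3.0) ** 2, 2.0 * (x - 3.0)

# Plain gradient descent, using only information obtained from the oracle.
x = 0.0
step = 0.25
for _ in range(60):
    _, grad = oracle(x)
    x -= step * grad

# x converges to the minimizer 3; `calls` is the oracle complexity spent.
```

Separating the algorithm from the oracle in this way is exactly what makes upper and lower complexity bounds comparable: both count queries, not arithmetic.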
3. Geometric methods
a. Centre of gravity method
b. Ellipsoid method
c. Cutting-plane method
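In one dimension the center-of-gravity method reduces to bisection on the sign of the derivative, which already conveys the geometric idea: each oracle answer removes a fixed fraction of the localization set. A minimal sketch, with an example objective chosen for illustration:

```python
# Center-of-gravity method in 1D: the "center of gravity" of an interval is
# its midpoint, and the sign of the derivative there tells which half
# contains the minimizer.  Each query halves the localization set.

def dfdx(x):
    """Derivative of the example objective f(x) = (x - 2)**2."""
    return 2.0 * (x - 2.0)

lo, hi = 0.0, 5.0            # initial localization interval
for _ in range(50):
    mid = 0.5 * (lo + hi)    # center of gravity of [lo, hi]
    if dfdx(mid) > 0.0:      # minimizer lies to the left of mid
        hi = mid
    else:                    # minimizer lies to the right (or at mid)
        lo = mid

x_min = 0.5 * (lo + hi)      # final estimate of the minimizer (here, 2)
```

In higher dimensions the center of gravity is expensive to compute, which is precisely what motivates the ellipsoid and cutting-plane methods listed above.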
4. Variations on a gradient theme
a. (Sub)gradient descent: General idea, unconstrained case
b. Special cases: Smooth functions
c. Projected gradient descent: Constrained case
d. Coordinate descent
e. Some other methods (mirror descent, Frank-Wolfe method, etc.)
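As a preview of the constrained case, here is a hedged sketch of projected gradient descent on a toy problem (minimizing the squared distance to a point over the unit ball; the problem data and names are illustrative):

```python
import math

# Projected gradient descent: minimize f(x) = ||x - c||**2 over the unit
# Euclidean ball.  The optimum is the projection of c onto the ball, c/||c||.

c = (3.0, 4.0)               # target point, ||c|| = 5

def grad(x):                 # gradient of f at x: 2 * (x - c)
    return tuple(2.0 * (xi - ci) for xi, ci in zip(x, c))

def project(x):              # Euclidean projection onto the unit ball
    norm = math.hypot(*x)
    return x if norm <= 1.0 else tuple(xi / norm for xi in x)

x = (0.0, 0.0)
step = 0.25
for _ in range(100):
    g = grad(x)
    x = project(tuple(xi - step * gi for xi, gi in zip(x, g)))

# x converges to c / ||c|| = (0.6, 0.8).
```

The method alternates an unconstrained gradient step with a projection back onto the feasible set; when the projection is cheap (balls, boxes, the simplex), the per-iteration cost is essentially that of plain gradient descent.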
5. Going further…
a. Accelerated gradient descent
b. Second-order methods
c. Non-convex functions: What to do?
d. A few words on robust and stochastic optimization
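As a taste of the "going further" material, here is a minimal sketch of Nesterov's accelerated gradient method on a 1D quadratic. The objective, step size, and iteration count are illustrative choices (the step is taken as 1/(2L) rather than 1/L, since the latter would solve a 1D quadratic in a single step and hide the dynamics):

```python
# Nesterov's accelerated gradient method on f(x) = 0.5 * a * x**2, a = 100.
# The extrapolation (momentum) step is what yields the O(1/k**2) rate,
# versus O(1/k) for plain gradient descent on smooth convex functions.

a = 100.0                    # f is L-smooth with L = a

def grad(x):
    return a * x             # f'(x) = a * x

step = 0.5 / a               # 1/(2L), chosen so the example is non-trivial
x_prev, x_curr = 1.0, 1.0    # start both iterates at the same point

for k in range(1, 200):
    momentum = (k - 1) / (k + 2)
    y = x_curr + momentum * (x_curr - x_prev)    # extrapolation step
    x_prev, x_curr = x_curr, y - step * grad(y)  # gradient step at y

# x_curr converges to the minimizer 0.
```

The only change from plain gradient descent is the extrapolation point y, yet it provably improves the worst-case rate, a theme the course returns to when discussing optimality of algorithms.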