Dynamic optimization problems concern dynamic systems that evolve deterministically or under uncertainty, and that can be guided by means of a control in order to optimize a given criterion (optimal control). The origins and applications are very diverse: engineering (rocket trajectory control), mechanics (steering and accelerating a car), management, economics, finance, machine learning, video games, robotics, etc.
The objective of this course is to present the basic mathematical tools and approaches of optimal control theory, in particular dynamic programming, and to illustrate them with concrete applications, especially in economics and finance. The first part deals with the deterministic framework; the second part focuses on the stochastic framework, with an introduction to the theoretical and algorithmic aspects of reinforcement learning.
Part 1 – Deterministic optimization
- Introduction: discrete-time model
- Dynamic programming approach in continuous time
- Pontryagin's maximum principle in continuous time
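The discrete-time dynamic programming idea introduced in Part 1 can be illustrated by backward induction on a toy finite-horizon problem. The model below (consuming an integer stock over 3 periods with square-root utility) is an assumption chosen for illustration, not an example taken from the course:

```python
# Backward induction (Bellman recursion) on a toy problem: each period
# we choose how much stock to consume; per-period utility is sqrt(c).
# The model is illustrative only, not from the course.
import math

T = 3   # horizon: periods 0, ..., T-1
S = 10  # initial stock; states are integers 0..S

# V[t][s] = best total utility from period t onward with stock s
V = [[0.0] * (S + 1) for _ in range(T + 1)]
policy = [[0] * (S + 1) for _ in range(T)]

for t in range(T - 1, -1, -1):      # backward in time
    for s in range(S + 1):
        best_u, best_c = -1.0, 0
        for c in range(s + 1):      # consume c, carry over s - c
            u = math.sqrt(c) + V[t + 1][s - c]
            if u > best_u:
                best_u, best_c = u, c
        V[t][s] = best_u
        policy[t][s] = best_c

print(round(V[0][S], 3))  # → 5.464 (near-equal split 4+3+3, by concavity)
```

Because the utility is concave, the optimal plan spreads consumption as evenly as the integer constraint allows, which backward induction recovers automatically.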
Part 2 – Introduction to discrete-time stochastic optimization and reinforcement learning
- Markov decision processes
- Bellman's principle of optimality
- Reinforcement learning algorithms
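As a preview of the tabular algorithms covered in Part 2, here is a minimal Q-learning sketch on a small deterministic chain MDP. The MDP, the step cap, and the hyperparameters are assumptions made for illustration; the course treats such algorithms in full generality:

```python
# Tabular Q-learning on a toy deterministic chain: states 0..4 on a
# line, reward 1 on reaching the rightmost state. Illustrative only.
import random

N = 5                    # number of states; state N-1 is terminal
ACTIONS = (-1, +1)       # action 0: move left, action 1: move right
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.1
random.seed(0)

Q = [[0.0, 0.0] for _ in range(N)]

def step(s, a):
    """Deterministic transition with walls at both ends."""
    s2 = min(max(s + ACTIONS[a], 0), N - 1)
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r, s2 == N - 1

def greedy(s):
    """Greedy action with random tie-breaking (helps early exploration)."""
    m = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == m])

for _ in range(500):                  # episodes
    s = 0
    for _ in range(50):               # step cap per episode
        a = random.randrange(2) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: off-policy TD target with a max over actions
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2
        if done:
            break

print([Q[s].index(max(Q[s])) for s in range(N - 1)])  # greedy policy
```

After training, the greedy policy moves right in every non-terminal state, consistent with the Bellman optimality values Q*(s, right) = 0.9^(3-s) for s = 0, ..., 3.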
Bibliography
- Carlier, G. (2007): Programmation dynamique, ENSAE lecture notes.
- Fleming, W.H. and R.W. Rishel (1975): Deterministic and Stochastic Optimal Control, Springer-Verlag.
- Kamien, M. and N. Schwartz (1991): Dynamic Optimization, 2nd edition, North-Holland.
- Trélat, E. (2008): Contrôle optimal : théorie et applications, 2nd edition, Vuibert.
- Bäuerle, N. and U. Rieder (2011): Markov Decision Processes with Applications to Finance, Springer.
- Sutton, R.S. and A.G. Barto (1998): Reinforcement Learning: An Introduction, MIT Press.
- Szepesvári, C. (2009): Algorithms for Reinforcement Learning.
- Groupe PDMIA (2008): Processus décisionnels de Markov en intelligence artificielle.