The lectures will cover the following topics:
- Discrete Stochastic Dynamic Programming Models (DSDPM). Introduction and definitions, the Bellman equation, introduction to recursive methods.
- An example of DSDPM. Solving a partial equilibrium search problem, maximum likelihood estimation.
- General Framework. State space, laws of motion, classification of solution methods, dimension of integrals, the curse of dimensionality.
- Direct Solution Methods. Schooling decisions as an optimal stopping problem, the extreme-value dynamic logit model (Rust's method).
- Computationally Intensive Methods. Simulation Methods, Interpolation Methods.
- Identification and Estimation. A theorem by Hotz and Miller, the degree of underidentification, restrictions on preferences.
- Alternative Solution Methods. Estimation by conditional choice probabilities, OLS estimation, non-rational expectations (the Expectation Parametrization Method).
- Applications. Examples of computer programs for various methods.
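As a flavor of the computer programs the Applications lectures refer to, the sketch below solves the Bellman equation $V(s) = \max_a \{ u(s,a) + \beta \sum_{s'} P(s'\mid s,a)\, V(s') \}$ of a discrete stochastic dynamic programming model by value iteration. It is only an illustration, not course material: the payoff matrix, transition matrices, and the choice of Python/NumPy are all assumptions made here.

```python
import numpy as np

def value_iteration(u, P, beta=0.95, tol=1e-8, max_iter=10_000):
    """Solve a finite-state, finite-action DSDPM by value iteration.

    u : (S, A) array of flow payoffs u(s, a)          [illustrative]
    P : (A, S, S) array of transition matrices P(s'|s, a)
    Returns the fixed-point value function V and the greedy policy.
    """
    S, A = u.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Q[s, a] = u(s, a) + beta * E[V(s') | s, a]
        # (P @ V) has shape (A, S); transpose to (S, A).
        Q = u + beta * (P @ V).T
        V_new = Q.max(axis=1)        # Bellman operator: maximize over actions
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=1)

# A tiny 2-state, 2-action example (numbers are arbitrary):
u = np.array([[1.0, 0.0],
              [0.0, 2.0]])
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
V, policy = value_iteration(u, P, beta=0.9)
```

Because the Bellman operator is a contraction with modulus beta, the iteration converges geometrically from any starting guess; the same fixed point underlies the direct and simulation-based methods listed above.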
Each week, a set of notes will be available on the course webpage (Pamplemousse). The lecture notes will be self-contained. However, for a review of dynamic programming concepts, the reader may consult the following references:
Bellman, R. (1957), Dynamic Programming, Princeton University Press, Princeton.
Eckstein, Z. and K. Wolpin (1989), "The Specification and Estimation of Dynamic Stochastic Discrete Choice Models", Journal of Human Resources, 24, 562-598.
Rust, J. (1994), "Structural Estimation of Markov Decision Processes", in Handbook of Econometrics, Vol. 4, Engle, R. and D. McFadden (eds.), North-Holland, Amsterdam, 3081-3143.