Abstract Dynamic Programming, 2nd Edition, 2018
by Dimitri P. Bertsekas
The 2nd edition aims primarily to amplify the presentation of the semicontractive models of Chapters 3 and 4 of the first (2013) edition, and to supplement it with a broad spectrum of research results that I obtained and published in journals and reports since the first edition was written (see below). As a result, the size of this material more than doubled, and the size of the book increased by nearly 40%.
Chapter 1, 2ND EDITION, Introduction
Chapter 2, 2ND EDITION, Contractive Models
Chapter 3, 2ND EDITION, Semicontractive Models
Chapter 4, 2ND EDITION, Noncontractive Models
In addition to the changes in Chapters 3 and 4, I have also eliminated from the second edition the material of the first edition that deals with restricted policies and Borel space models (Chapter 5 and Appendix C). These models are motivated in part by the complex measurability questions that arise in mathematically rigorous theories of stochastic optimal control involving continuous probability spaces. The restricted policies framework aims primarily to extend abstract DP ideas to Borel space models. Since this material is fully covered in Chapter 6 of the 1978 monograph by Bertsekas and Shreve, and follow-up research on the subject has been limited, I decided to omit Chapter 5 and Appendix C of the first edition from the second edition and simply post them below.
Related Videos and Slides:
Video from an Oct. 2017 lecture at UConn on optimal control, abstract, and semicontractive dynamic programming. Related paper, and set of lecture slides.
Video from a May 2017 lecture at MIT on the solutions of Bellman's equation, stable optimal control, and semicontractive dynamic programming. Related paper, and set of lecture slides.
Five video lectures on Semicontractive Dynamic Programming.
Related Papers and Reports:
The following papers and reports have a strong connection to the book, and expand on the analysis and the range of applications of the semicontractive models of Chapters 3 and 4:
- D. P. Bertsekas, “Regular Policies in Abstract Dynamic Programming”, Lab. for Information and Decision Systems Report LIDS-P-3173, MIT, May 2015; SIAM J. on Optimization, Vol. 27, No. 3, pp. 1694-1727. (Related Lecture Slides); (Related Video Lectures).
- D. P. Bertsekas, “Value and Policy Iteration in Deterministic Optimal Control and Adaptive Dynamic Programming”, Lab. for Information and Decision Systems Report LIDS-P-3174, MIT, May 2015 (revised Sept. 2015); IEEE Transactions on Neural Networks and Learning Systems, Vol. 28, 2017, pp. 500-509.
- D. P. Bertsekas and H. Yu, “Stochastic Shortest Path Problems Under Weak Conditions”, Lab. for Information and Decision Systems Report LIDS-P-2909, MIT, January 2016.
- D. P. Bertsekas, “Robust Shortest Path Planning and Semicontractive Dynamic Programming,” Lab. for Information and Decision Systems Report LIDS-P-2915, MIT, Feb. 2014 (revised Jan. 2015 and June 2016); arXiv preprint arXiv:1608.01670; Naval Research Logistics (NRL), 66(1), pp.15-37.
- D. P. Bertsekas, “Affine Monotonic and Risk-Sensitive Models in Dynamic Programming”, Lab. for Information and Decision Systems Report LIDS-3204, MIT, June 2016; to appear in IEEE Transactions on Automatic Control.
- D. P. Bertsekas, “Stable Optimal Control and Semicontractive Dynamic Programming,” SIAM J. on Control and Optimization, Vol. 56, 2018, pp. 231-252. (Related Lecture Slides); (Related Video Lecture from MIT, May 2017); (Related Lecture Slides from UConn, Oct. 2017); (Related Video Lecture from UConn, Oct. 2017).
- D. P. Bertsekas, “Proper Policies in Infinite-State Stochastic Shortest Path Problems,” IEEE Transactions on Automatic Control, Vol. 63, 2018, pp. 3787-3792. (Related Lecture Slides).
- An updated version of Chapter 4 of the author’s Dynamic Programming book, Vol. II, which incorporates recent research on a variety of undiscounted problems and relates to abstract DP topics; (Related Lecture Slides).