Dynamic Programming and Its Applications

Proceedings of the International Conference on Dynamic Programming and Its Applications, University of British Columbia, Vancouver, British Columbia, Canada, April 14-16, 1977

  • 1st Edition - January 1, 1978
  • Latest edition
  • Editor: Martin L. Puterman
  • Language: English

Description

Dynamic Programming and Its Applications provides information pertinent to the theory and application of dynamic programming, presenting both the state of the field and its future directions. Organized into four parts encompassing 23 chapters, the book begins with an overview of recurrence conditions for countable state Markov decision problems, which ensure that the optimal average reward exists and satisfies the functional equation of dynamic programming. The text then provides an extensive analysis of the theory of successive approximation for Markov decision problems. Other chapters consider computational methods for deterministic, finite horizon problems and give a unified and insightful treatment of several foundational questions. The book also discusses the relationship between policy iteration and Newton's method. The final chapter deals with the main factors severely limiting the application of dynamic programming in practice. This book is a valuable resource for growth theorists, economists, biologists, mathematicians, and applied management scientists.
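To make the "successive approximation" theme concrete, the sketch below runs value iteration on a tiny discounted Markov decision problem. The two-state, two-action MDP (the matrices `P`, rewards `R`, and discount `gamma`) is an invented illustration, not an example from the book; it only shows the general backup-until-convergence scheme the surveys analyze.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP for illustration only.
# P[a][s, s'] = transition probability under action a; R[a][s] = expected reward.
P = {
    0: np.array([[0.9, 0.1], [0.2, 0.8]]),
    1: np.array([[0.5, 0.5], [0.6, 0.4]]),
}
R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
gamma = 0.9  # discount factor, strictly less than 1 so iteration converges


def value_iteration(P, R, gamma, tol=1e-8):
    """Successive approximation of the optimal value function."""
    v = np.zeros(2)
    while True:
        # One Bellman backup per action: q[a, s] = R[a](s) + gamma * E[v(s')]
        q = np.array([R[a] + gamma * P[a] @ v for a in sorted(P)])
        v_new = q.max(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=0)  # optimal values and a greedy policy
        v = v_new


v_star, policy = value_iteration(P, R, gamma)
```

Because the backup operator is a contraction with modulus `gamma`, the iterates converge geometrically; policy iteration, discussed in the survey on its analytic theory, instead solves for the value of the current policy exactly at each step, which is where the connection to Newton's method arises.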

Table of contents


Contributors

Preface

Surveys

Recurrence Conditions in Denumerable State Markov Decision Processes

Discounted and Undiscounted Value-Iteration in Markov Decision Problems: A Survey

Computational Advances in Dynamic Programming

The Analytic Theory of Policy Iteration

Dynamic Programming in Borel Spaces

Elimination of Nonoptimal Actions in Markov Decision Processes

Applications

On Renewal Decisions

Steady-State Policies, Dynamic Programming, and Optimal Economic Growth

Comments on the Origin and Application of Markov Decision Processes

The Application of Markov Decision Processes to Forest Management

An Application of Dynamic Programming in Statistics

Some Dynamic Programming Applications in Fisheries Management

Theory

Buckets, Shortest Paths, and Integer Programming

Affine Dynamic Programming

Optimal Control of a Diffusion Process with Reflecting Boundaries and both Continuous and Lump Costs

On Approximate Solutions of Finite-Stage Dynamic Programs

An Inverse Theorem between Main and Inverse Dynamic Programming: Infinite-Stage Case

On the Transient Case for Markov Decision Chains with General State Spaces

An Operator-Theoretical Treatment of Negative Dynamic Programming

Existence of Average Optimal Strategies in Markovian Decision Problems with Strictly Unbounded Costs

International Conference on Dynamic Programming: Panel Discussion

Comments of Karl Hinderer

Comments of Eric V. Denardo

Comments of Arthur F. Veinott, Jr.

Participants

Product details

  • Edition: 1
  • Latest edition
  • Published: September 25, 2014
  • Language: English
