Book contents
- Frontmatter
- Contents
- List of figures
- List of tables
- Preface
- 1 Introduction
- Part I Theory
- 2 Basic concepts of game theory
- 3 Control theoretic methods
- 4 Markovian equilibria with simultaneous play
- 5 Differential games with hierarchical play
- 6 Trigger strategy equilibria
- 7 Differential games with special structures
- 8 Stochastic differential games
- Part II Applications
- Answers and hints for exercises
- Bibliography
- Index
3 - Control theoretic methods
Published online by Cambridge University Press: 05 June 2012
Summary
In a differential game each player maximizes his objective functional subject to a number of constraints which include, in particular, a differential equation describing the evolution of the state of the game. Optimization problems of this type are known as optimal control problems and are widely used in economic theory and management science. The present chapter introduces two basic solution techniques for optimal control problems which are used extensively throughout the book: the Hamilton–Jacobi–Bellman equation and Pontryagin's maximum principle. We start by introducing these tools in a standard model with smooth functions and a finite time horizon and illustrate their application by an example. It is then shown that optimal solutions can be represented in many different ways and that the choice of the representation, also called the strategy, depends on the informational assumptions of the model. Sections 3.6 and 3.7 deal with generalized versions of the Hamilton–Jacobi–Bellman equation and Pontryagin's maximum principle, which are valid for optimal control problems defined on unbounded time domains and for non-smooth problems.
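To make the Hamilton–Jacobi–Bellman approach concrete, here is a minimal numerical sketch (an illustrative example, not taken from the book): for the scalar linear-quadratic problem of minimizing ∫₀ᵀ (x² + u²) dt subject to ẋ = u, the ansatz V(t, x) = p(t)x² reduces the HJB equation to the Riccati ODE ṗ = p² − 1 with p(T) = 0, whose closed form is p(t) = tanh(T − t). The code integrates this ODE backward in time with explicit Euler and recovers the analytic value; the time horizon T and step count N are arbitrary choices.

```python
# Illustrative HJB/Riccati computation for a scalar LQ problem (assumed
# example, not the book's): minimize ∫ (x² + u²) dt subject to ẋ = u.
# With V(t, x) = p(t) x², the HJB equation gives ṗ = p² - 1, p(T) = 0,
# whose exact solution is p(t) = tanh(T - t).
import math

T = 2.0          # terminal time (arbitrary choice for the sketch)
N = 200_000      # number of Euler steps
dt = T / N

p = 0.0          # terminal condition p(T) = 0
for _ in range(N):
    p -= dt * (p * p - 1.0)   # step backward: p(t - dt) ≈ p(t) - dt · ṗ(t)

print(f"p(0) numeric  = {p:.6f}")
print(f"p(0) analytic = {math.tanh(T):.6f}")
```

The resulting optimal control is the linear feedback strategy u*(t, x) = −p(t)x, which already illustrates the chapter's point that the optimal solution can be represented as a function of the current state rather than of time alone.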
A simple optimal control problem
Let us assume that the differential game is defined over the time interval [0, T], where T > 0 denotes the terminal instant of the game. All players can take actions at each time t ∈ [0, T], thereby influencing the evolution of the state of the game as well as their own and their opponents' objective functionals.
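The setup above can be sketched numerically. The dynamics and payoffs below are hypothetical choices for illustration only: two players pick controls u1(t), u2(t) on [0, T], the state follows ẋ = u1 + u2 with x(0) = x0, and player i receives the objective functional Jᵢ = ∫₀ᵀ (x(t) − uᵢ(t)²) dt, so each player benefits from a high state but pays a quadratic cost for her own effort. The code discretizes with explicit Euler and evaluates both functionals for given strategies.

```python
# Hypothetical two-player differential game (assumed dynamics and payoffs,
# for illustration only): state ẋ = u1 + u2, payoffs J_i = ∫ (x - u_i²) dt.

def play(u1, u2, x0=0.0, T=1.0, n=1000):
    """Simulate the state with explicit Euler and accumulate both
    players' objective functionals for the strategies u1(t), u2(t)."""
    dt = T / n
    x, J1, J2 = x0, 0.0, 0.0
    for k in range(n):
        t = k * dt
        a1, a2 = u1(t), u2(t)
        J1 += (x - a1 * a1) * dt     # player 1's running payoff
        J2 += (x - a2 * a2) * dt     # player 2's running payoff
        x += (a1 + a2) * dt          # Euler step of ẋ = u1 + u2
    return J1, J2

# Usage: constant-effort (open-loop) strategies for both players.
J1, J2 = play(lambda t: 0.5, lambda t: 0.5)
print(J1, J2)
```

Note that each player's payoff depends on the state, and hence on the opponent's control, which is exactly the strategic interdependence that distinguishes a differential game from the single-player optimal control problems treated in this chapter.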
Type: Chapter
In: Differential Games in Economics and Management Science, pp. 37–83
Publisher: Cambridge University Press
Print publication year: 2000