The book *Numerical Methods for Stochastic Control Problems in Continuous Time* by Harold J. Kushner and Paul Dupuis, volume 24 in Springer's Applications of Mathematics series (with I. Karatzas and M. Yor among the series editors), provides a comprehensive treatment of numerical methods for solving continuous-time stochastic control problems.
The material is organized into the following chapters:
1. **Introduction**: Reviews of continuous-time models, including martingales, stochastic integration, stochastic differential equations, reflected diffusions, and processes with jumps.
2. **Controlled Markov Chains**: Topics such as recursive equations for cost, optimal stopping problems, discounted cost, control to a target set, and finite-time control problems.
3. **Dynamic Programming Equations**: Detailed discussions on functionals of uncontrolled processes, optimal stopping problems, and control until a target set is reached.
4. **Markov Chain Approximation Method**: Introduction to Markov chain approximation, continuous-time interpolation, and various numerical simplifications.
5. **Construction of Approximating Markov Chains**: Techniques for constructing approximating Markov chains, including one-dimensional examples, variable grids, and jump diffusion processes.
6. **Computational Methods for Controlled Markov Chains**: Classical iterative methods, error bounds, accelerated methods, domain decomposition, and multigrid methods.
7. **Ergodic Cost Problem**: Formulation and algorithms for the control problem, including Jacobi-type iterations and numerical methods.
8. **Heavy Traffic and Singular Control**: Motivating examples and numerical methods for heavy traffic and singular control problems.
9. **Weak Convergence and Characterization of Processes**: Definitions, basic theorems, and criteria for tightness in \(D^k[0, \infty)\).
10. **Convergence Proofs**: Limit theorems, existence of optimal control, and convergence of costs.
11. **Convergence for Reflecting Boundaries, Singular Control, and Ergodic Cost Problems**: Detailed analysis of reflecting boundaries, singular control, and ergodic cost problems.
12. **Finite Time Problems and Nonlinear Filtering**: Explicit and implicit approximations, optimal control computations, and nonlinear filtering.
13. **Controlled Variance and Jumps**: Introduction to controlled variance and jumps, including relaxed Poisson measures and optimal controls.
14. **Problems from the Calculus of Variations**: Numerical schemes and convergence for problems with finite and infinite time horizons.
15. **Viscosity Solution Approach**: Definitions, numerical schemes, and proof of convergence for viscosity solutions.
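To make the approximation method of chapters 4 through 6 concrete, here is a minimal sketch of the standard upwind approximating chain for a one-dimensional controlled diffusion \(dx = b(x,u)\,dt + \sigma\,dW\), with the discounted-cost dynamic programming equation solved by value iteration. The grid, control set, drift, diffusion coefficient, running cost, and discount rate are all illustrative choices, not taken from the book:

```python
import numpy as np

# Upwind-style approximating chain for dx = b(x,u) dt + sigma dW on [0,1]:
# transition probabilities p(x, x±h | u) = (sigma^2/2 + h b^±(x,u)) / Q,
# with normalizer Q = sigma^2 + h|b| and interpolation interval dt = h^2/Q.
# All problem data below are illustrative assumptions.
h = 0.05
xs = np.arange(0.0, 1.0 + h / 2, h)              # grid on [0, 1]
us = np.linspace(-1.0, 1.0, 11)                  # finite control set
beta = 0.5                                       # discount rate
sig2 = 0.3 ** 2                                  # constant sigma^2

X, U = np.meshgrid(xs[1:-1], us, indexing="ij")  # interior states x controls
b = U                                            # controlled drift b(x,u) = u
Q = sig2 + h * np.abs(b)
p_up = (sig2 / 2 + h * np.maximum(b, 0.0)) / Q   # p(x, x+h | u)
p_dn = (sig2 / 2 + h * np.maximum(-b, 0.0)) / Q  # p(x, x-h | u)
dt = h * h / Q                                   # state/control-dependent step
disc = np.exp(-beta * dt)
run_cost = ((X - 0.5) ** 2 + 0.1 * U ** 2) * dt  # k(x,u) * dt

# Value iteration for V(x) = min_u [ k dt + e^{-beta dt} E V(next) ];
# the endpoints are absorbing with zero cost.
V = np.zeros(len(xs))
for _ in range(20000):
    q = run_cost + disc * (p_up * V[2:, None] + p_dn * V[:-2, None])
    V_new = V.copy()
    V_new[1:-1] = q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new
```

Note that `p_up + p_dn = 1` by construction, so the chain is locally consistent with the diffusion: its one-step conditional mean and variance match \(b\,dt\) and \(\sigma^2\,dt\) up to \(o(dt)\), which is the key requirement for the convergence theory in the later chapters.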
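Chapters 2 and 3 treat optimal stopping via the recursion "stop and collect the payoff, or continue and take the discounted expected value." A minimal sketch for the driftless case, where the approximating chain reduces to a symmetric random walk with interpolation interval \(dt = h^2/\sigma^2\); the payoff, diffusion coefficient, and discount rate are illustrative assumptions (and the problem is posed as reward maximization rather than cost minimization):

```python
import numpy as np

# Optimal stopping on the approximating chain for a driftless diffusion
# dx = sigma dW on [0,1]: a symmetric random walk with spacing h and
# interpolation interval dt = h^2 / sigma^2. At each state, either stop
# and collect g(x), or continue and discount at rate beta.
# g, sigma, and beta are illustrative assumptions.
h = 0.05
xs = np.arange(0.0, 1.0 + h / 2, h)
sig2 = 0.3 ** 2
dt = h * h / sig2
disc = np.exp(-0.5 * dt)                 # discount factor per step, beta = 0.5
g = np.maximum(xs - 0.4, 0.0)            # stopping payoff (option-like)

# Bellman recursion V(x) = max(g(x), disc * E V(next)); endpoints absorbing.
V = g.copy()
for _ in range(100000):
    cont = disc * 0.5 * (V[2:] + V[:-2])
    V_new = V.copy()
    V_new[1:-1] = np.maximum(g[1:-1], cont)
    if np.max(np.abs(V_new - V)) < 1e-12:
        V = V_new
        break
    V = V_new

stop_set = xs[np.isclose(V, g, atol=1e-9)]   # states where stopping is optimal
```

The fixed point satisfies \(V \ge g\) everywhere, with equality exactly on the optimal stopping region; this is the discrete analogue of the free-boundary characterization discussed in the book.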
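Chapter 6 compares classical iterative solvers for the linear cost equation of a fixed (uncontrolled) chain. The sketch below contrasts Jacobi iteration (each sweep uses only the previous iterate) with Gauss-Seidel (each sweep reuses values already updated within the same pass) on an illustrative absorbing random walk with a discounted unit running cost; none of the problem data come from the book. For this tridiagonal structure Gauss-Seidel typically needs roughly half as many sweeps:

```python
import numpy as np

# Solve V = c + disc * P V for a symmetric random walk on {0,...,N} with
# absorbing endpoints, by Jacobi and by Gauss-Seidel iteration.
# N, disc, and c are illustrative assumptions.
N = 40
disc = 0.99
c = np.ones(N + 1)
c[0] = c[-1] = 0.0                       # zero cost at the absorbing states

def jacobi(tol=1e-10, max_iter=100000):
    V = np.zeros(N + 1)
    for n in range(1, max_iter + 1):
        V_new = V.copy()
        V_new[1:-1] = c[1:-1] + disc * 0.5 * (V[2:] + V[:-2])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, n
        V = V_new
    return V, max_iter

def gauss_seidel(tol=1e-10, max_iter=100000):
    V = np.zeros(N + 1)
    for n in range(1, max_iter + 1):
        delta = 0.0
        for i in range(1, N):            # uses freshly updated V[i-1]
            new = c[i] + disc * 0.5 * (V[i + 1] + V[i - 1])
            delta = max(delta, abs(new - V[i]))
            V[i] = new
        if delta < tol:
            return V, n
    return V, max_iter

V_j, n_j = jacobi()
V_gs, n_gs = gauss_seidel()
```

Both iterations converge to the same solution; the iteration counts `n_j` and `n_gs` make the acceleration visible, and the same idea underlies the accelerated and multigrid variants surveyed in that chapter.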
The book is rich in mathematical detail and practical examples, making it a valuable resource for researchers and practitioners in stochastic control.