Stochastic Controls: Hamiltonian Systems and HJB Equations

Jiongmin Yong, Xun Yu Zhou
The book "Stochastic Controls: Hamiltonian Systems and HJB Equations" is a comprehensive treatise on the theory and applications of stochastic control. It begins with an introduction to basic stochastic calculus, covering probability spaces, random variables, conditional expectation, convergence of probabilities, stochastic processes, stopping times, martingales, and Itô's integral. The book then delves into stochastic optimal control problems, discussing deterministic cases, examples, formulations, existence of optimal controls, reachable sets, and various stochastic control models. The maximum principle and stochastic Hamiltonian systems are explored in detail, including the statement and proof of the maximum principle, sufficient conditions of optimality, and problems with state constraints. The dynamic programming approach and the Hamilton–Jacobi–Bellman (HJB) equation are introduced, along with the properties of the value function, viscosity solutions, and uniqueness of viscosity solutions. The relationship between the maximum principle and dynamic programming is examined, including classical Hamilton–Jacobi theory, the relationship for deterministic and stochastic systems, and stochastic verification theorems. The book also covers linear quadratic optimal control problems, both deterministic and stochastic, and the existence and solvability of stochastic Riccati equations. Finally, the book discusses backward stochastic differential equations (BSDEs), including linear and nonlinear BSDEs, Feynman–Kac-type formulae, forward-backward stochastic differential equations (FBSDEs), and their applications in option pricing problems.The book "Stochastic Controls: Hamiltonian Systems and HJB Equations" is a comprehensive treatise on the theory and applications of stochastic control. It begins with an introduction to basic stochastic calculus, covering probability spaces, random variables, conditional expectation, convergence of probabilities, stochastic processes, stopping times, martingales, and Itô's integral. The book then delves into stochastic optimal control problems, discussing deterministic cases, examples, formulations, existence of optimal controls, reachable sets, and various stochastic control models. The maximum principle and stochastic Hamiltonian systems are explored in detail, including the statement and proof of the maximum principle, sufficient conditions of optimality, and problems with state constraints. The dynamic programming approach and the Hamilton–Jacobi–Bellman (HJB) equation are introduced, along with the properties of the value function, viscosity solutions, and uniqueness of viscosity solutions. The relationship between the maximum principle and dynamic programming is examined, including classical Hamilton–Jacobi theory, the relationship for deterministic and stochastic systems, and stochastic verification theorems. The book also covers linear quadratic optimal control problems, both deterministic and stochastic, and the existence and solvability of stochastic Riccati equations. Finally, the book discusses backward stochastic differential equations (BSDEs), including linear and nonlinear BSDEs, Feynman–Kac-type formulae, forward-backward stochastic differential equations (FBSDEs), and their applications in option pricing problems.