This paper by R. W. Brockett explores the local behavior of control problems described by the system \(\dot{x} = f(x, u)\) with \(f(x_0, 0) = 0\), focusing on determining when a smooth function \(u(x)\) exists such that \(x = x_0\) is an asymptotically stable equilibrium point. The main results, formulated in Theorems 1 and 2, show that controllability does not necessarily imply the existence of a stabilizing control law, contrary to common belief. Theorem 1 uses a degree-theoretic argument to demonstrate this, while Theorem 2 provides a criterion for the existence of stabilizing control laws in a nonlinear setting.
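The degree-theoretic obstruction in Theorem 1 is usually summarized by what is now called the Brockett necessary condition; a compact statement in the notation of this summary (the precise quantifier form below is a reconstruction, not a verbatim quote from the paper) is:

```latex
% Necessary condition for continuous stabilizability:
% if a feedback u(x) makes x_0 asymptotically stable, then the map
% (x, u) -> f(x, u) must send every neighborhood of (x_0, 0)
% onto a neighborhood of 0.
\[
  \forall \varepsilon > 0 \;\exists \delta > 0 :\quad
  \{\, y : \|y\| < \delta \,\} \;\subseteq\;
  f\bigl(\{\, (x, u) : \|x - x_0\| + \|u\| < \varepsilon \,\}\bigr).
\]
```

In words: near the equilibrium, the right-hand side of the system must be locally onto; a controllable system can fail this, which is how Theorem 1 separates controllability from stabilizability.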
The paper begins with an introduction that highlights the importance of understanding the local feedback stabilization problem. It then delves into the mathematical background of control systems, including the concept of input-linear systems and the linearized system at the equilibrium point. The nonexistence of stabilizing control laws is discussed, particularly in the context of systems where the linearized system has uncontrollable modes with positive real parts.
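As a concrete illustration of how the necessary condition rules out stabilizing feedback, consider the nonholonomic integrator, the example most often associated with this result (the specific system and the numerical check below are an illustrative sketch, not code from the paper):

```python
import itertools

def f(x1, x2, x3, u1, u2):
    """Right-hand side of the nonholonomic (Brockett) integrator:
    dx1 = u1, dx2 = u2, dx3 = x2*u1 - x1*u2."""
    return (u1, u2, x2 * u1 - x1 * u2)

# Brockett's condition requires f to map every neighborhood of the
# origin ONTO a neighborhood of 0.  Hitting a target (0, 0, eps) with
# eps != 0 would force u1 = u2 = 0, which makes the third component 0
# as well: the image misses a whole line through the origin, so no
# continuous stabilizing feedback exists even though the system is
# controllable.  A grid search near the origin confirms this:
grid = [i / 100 for i in range(-10, 11)]
for x1, x2, u1, u2 in itertools.product(grid, repeat=4):
    y = f(x1, x2, 0.0, u1, u2)
    if abs(y[0]) < 1e-12 and abs(y[1]) < 1e-12:
        # whenever the first two components vanish, so does the third
        assert abs(y[2]) < 1e-12
```

The grid check is of course not a proof, but it mirrors the analytic argument: the first two components vanish only when the input is zero, and then the drift term vanishes too.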
The existence of stabilizing control laws is addressed in the fourth section, where the author introduces the concept of finite gain at an equilibrium point and uses it to study stability in critical cases. A lemma is presented that reduces the stability problem to a lower-dimensional system, and Theorem 2 provides a sufficient condition for stabilizability in certain nonlinear systems.
Finally, the paper concludes with acknowledgments and references, highlighting the support from various research institutions and grants.