20 August 2004 | D. Q. Mayne, M. M. Seron and S. V. Raković
This paper presents a novel solution to the problem of robust model predictive control (MPC) for constrained linear systems with bounded disturbances. The key idea is to include the initial state of the model as a decision variable in the online optimal control problem. This allows the value function to be zero in a disturbance invariant set Z, which serves as the 'origin' for the system. This property enables the establishment of robust exponential stability of Z for the controlled system with bounded disturbances. The resulting online algorithm is a quadratic program with similar complexity to conventional MPC.
The paper discusses the challenge of achieving asymptotic stability in the presence of bounded disturbances, where the best achievable result is robust asymptotic stability of a set Z. This requires a Lyapunov function that is zero on Z, but previous methods had limitations such as discontinuous stage costs or the need to solve a quadratic program merely to evaluate the stage cost. The proposed approach avoids these issues by incorporating the initial state as a decision variable, so the value function itself is zero on Z and can serve as the required Lyapunov function.
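The disturbance invariant set Z that plays the role of the origin can be outer-bounded numerically. A minimal sketch (not taken from the paper; the dynamics, the gain K, and the disturbance bound are illustrative assumptions) evaluates support functions of the truncated Minkowski sum W ⊕ A_K W ⊕ A_K² W ⊕ … for a box disturbance set W:

```python
import numpy as np

# Illustrative sampled double integrator with an assumed stabilizing gain K.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])
K = np.array([[-0.6, -1.0]])   # assumed stabilizing feedback (not from the paper)
A_K = A + B @ K                # closed-loop dynamics; spectral radius ~0.55 here
wbar = 0.1                     # disturbance bound: |w_i| <= wbar (box set W)

def support_Z(d, n_terms=50):
    """Support function of the truncated sum approximating Z.

    For the box W, h_W(d) = wbar * ||d||_1, and the Minkowski-sum
    structure gives h_Z(d) ~ sum_i h_W((A_K^i)^T d)."""
    h, M = 0.0, np.eye(2)
    for _ in range(n_terms):
        h += wbar * np.abs(M.T @ d).sum()
        M = A_K @ M            # advance to the next power of A_K
    return h

# Bounding box of Z from support values in the axis directions.
for d in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    print(d, support_Z(d))
```

Because A_K is stable, the terms decay geometrically and the truncated sum converges quickly; the support values in a few directions give a cheap outer bound on Z without full polytope arithmetic.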
The paper introduces a new robust MPC controller that uses the initial state of the nominal model as a decision variable in the optimal control problem. This choice makes the value function zero on Z, which facilitates the proof of robust attractivity and stability of Z. The set Z is shown to be robustly exponentially stable for the controlled system with bounded disturbances, with a region of attraction consisting of the states for which the optimal control problem is feasible.
The paper also presents an illustrative example of the proposed controller applied to a constrained sampled double integrator system with bounded disturbances. The results demonstrate the effectiveness of the new controller in maintaining system stability under bounded disturbances.
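The online problem just described can be sketched numerically for a double integrator. The following is a hedged illustration, not the paper's implementation: the set Z and the constraints are replaced by box outer approximations, the quadratic program is solved by plain projected gradient descent rather than a QP solver, and all numerical values (A, B, K, horizon, bounds, weights) are assumed for illustration.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])
K = np.array([[-0.6, -1.0]])       # assumed tube feedback gain
N = 8                              # horizon
z_box = np.array([0.25, 0.25])     # assumed box outer bound on Z
u_max, u_tight = 1.0, 0.3          # |u| <= 1, tightened to |v| <= 0.7

# Stack the nominal dynamics: [xbar_0; ...; xbar_N] = Phi @ xbar0 + Gamma @ v
Phi = np.vstack([np.linalg.matrix_power(A, k) for k in range(N + 1)])
Gamma = np.zeros((2 * (N + 1), N))
for k in range(1, N + 1):
    for j in range(k):
        Gamma[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - 1 - j) @ B)[:, 0]
H = np.hstack([Phi, Gamma])        # maps z = [xbar0; v] to the stacked states

def cost(z):
    """Quadratic cost with Q = I, R = 0.1 I (illustrative weights)."""
    xs = H @ z
    return xs @ xs + 0.1 * (z[2:] @ z[2:])

def grad(z):
    g = 2.0 * H.T @ (H @ z)
    g[2:] += 0.2 * z[2:]
    return g

def solve_tube_mpc(x, iters=4000, lr=5e-4):
    """Projected gradient on the online QP: the measured state x only
    constrains the nominal initial state via x in xbar0 + Z (a box here),
    and the nominal inputs obey the tightened bound."""
    lo = np.concatenate([x - z_box, -(u_max - u_tight) * np.ones(N)])
    hi = np.concatenate([x + z_box, (u_max - u_tight) * np.ones(N)])
    z = np.concatenate([x, np.zeros(N)])    # feasible starting point
    for _ in range(iters):
        z = np.clip(z - lr * grad(z), lo, hi)   # both constraint sets are boxes
    xbar0, v = z[:2], z[2:]
    u = v[0] + (K @ (x - xbar0))[0]   # applied control: nominal + tube feedback
    return u, xbar0, v

u, xbar0, v = solve_tube_mpc(np.array([1.0, 0.3]))
print(round(float(u), 3), np.round(xbar0, 3))
```

The key structural point survives the simplifications: the measured state enters only through the constraint x ∈ x̄₀ ⊕ Z, and the applied control combines the nominal input with feedback on the deviation from the optimized initial state.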
The paper concludes that the proposed robust MPC controller is a novel approach that incorporates the initial state as a decision variable, leading to a more effective and robust controller. The controller is relatively simple, requiring optimization over the initial state and a sequence of control actions subject to tighter constraints than in the original problem. The optimal control problem is a standard quadratic program with similar complexity to conventional MPC. The results depend on linearity and cannot easily be extended to nonlinear systems.