This paper introduces a new class of reinforcement learning algorithms called residual algorithms, which combine the guaranteed convergence of residual gradient algorithms with the fast learning speed of direct algorithms. The paper shows that direct algorithms can be unstable when used with general function-approximation systems, while residual gradient algorithms are guaranteed to converge but may learn slowly in some cases. Residual algorithms are shown to be a generalization of both direct and residual gradient algorithms, combining their advantages. The paper presents various forms of value iteration, Q-learning, and advantage learning as special cases of residual algorithms. Theoretical analysis and simulation results demonstrate the properties of these algorithms. The paper also discusses the application of residual algorithms to stochastic Markov decision processes (MDPs) and multiple-action MDPs. Simulation results show that residual algorithms can achieve near-optimal performance in some cases, while direct algorithms may fail to converge. The paper concludes that residual algorithms provide a promising approach for reinforcement learning with function-approximation systems, offering both guaranteed convergence and fast learning.
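To make the generalization concrete, the core idea can be sketched as a single weight update that interpolates between the two extremes. In the sketch below (a minimal illustration, not the paper's exact formulation), a linear value function is updated with a blended gradient: a mixing parameter `phi` of 0 recovers the direct (TD-style) update, while `phi` of 1 recovers the pure residual gradient update. The function name, parameter names, and the linear-features setup are all illustrative assumptions.

```python
import numpy as np

def residual_update(w, features, s, s_next, reward,
                    gamma=0.9, alpha=0.1, phi=0.5):
    """One residual-algorithm update for a linear value approximator.

    Illustrative sketch: phi = 0 gives the direct (TD-style) update;
    phi = 1 gives the pure residual gradient update; intermediate phi
    blends the two, trading learning speed against stability.
    """
    x, x_next = features[s], features[s_next]
    v, v_next = w @ x, w @ x_next
    delta = reward + gamma * v_next - v       # Bellman residual
    grad = x - phi * gamma * x_next           # blended gradient direction
    return w + alpha * delta * grad

# Usage on a tiny deterministic chain: state 0 -> state 1 (reward 1),
# state 1 is absorbing (self-loop, reward 0). With gamma = 0.5 the true
# values are V(0) = 1 and V(1) = 0.
features = np.eye(2)
w = np.zeros(2)
for _ in range(200):
    w = residual_update(w, features, 0, 1, 1.0, gamma=0.5, alpha=0.2)
    w = residual_update(w, features, 1, 1, 0.0, gamma=0.5, alpha=0.2)
```

With tabular (one-hot) features as above, both extremes converge; the interesting cases in the paper arise with general function approximators, where the direct update can diverge and the blended update restores convergence.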