Resilient multi-agent RL: introducing DQ-RTS for distributed environments with data loss

2024 | Lorenzo Canese, Gian Carlo Cardarilli, Luca Di Nunzio, Rocco Fazzolari, Marco Re, Sergio Spanò
This paper introduces DQ-RTS, a decentralized Multi-Agent Reinforcement Learning (MARL) algorithm designed for distributed environments with non-ideal communication and a varying number of agents. DQ-RTS incorporates an optimized communication protocol that mitigates data loss between agents, and it converges faster than Q-RTS, its centralized counterpart. The algorithm maintains performance even when the agent population fluctuates, making it suitable for applications that require an adaptable number of agents. Experiments on several benchmark tasks validate the scalability and effectiveness of DQ-RTS, establishing it as a practical solution for resilient MARL in dynamic distributed environments.

The paper also covers the background of MARL, the limitations of centralized approaches, and the development of the DQ-RTS algorithm, including its communication phases and optimization techniques. Results show that DQ-RTS outperforms Q-RTS in convergence speed and robustness, especially in scenarios with limited communication range. The algorithm's hardware implementability and suitability for edge computing are highlighted, along with future directions for improving communication efficiency.
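To make the idea concrete, below is a minimal Python sketch of the kind of decentralized Q-table sharing over a lossy channel that the summary describes: each agent learns a local Q-table and periodically merges whatever peer tables actually arrive. This is an illustration, not the paper's protocol; the merge rule (keeping, per state-action entry, the value with the largest magnitude across agents) is modeled on the Q-RTS lineage, and the parameters P_LOSS, SHARE_EVERY, and COOP are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_STATES, N_ACTIONS = 4, 12, 2   # 1-D chain world, actions: left/right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1         # standard Q-learning hyperparameters
P_LOSS, SHARE_EVERY, COOP = 0.3, 25, 0.5   # illustrative protocol parameters (assumed)

def step(s, a):
    """Chain MDP: reach the rightmost state for reward 1, then restart."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, r > 0

Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))   # one local Q-table per agent
state = np.zeros(N_AGENTS, dtype=int)

for t in range(5000):
    # Independent learning phase: each agent acts in its own environment copy.
    for i in range(N_AGENTS):
        s = state[i]
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(Q[i, s].argmax())
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * Q[i, s2].max()
        Q[i, s, a] += ALPHA * (target - Q[i, s, a])
        state[i] = 0 if done else s2

    # Communication phase: peer messages are dropped with probability P_LOSS.
    if t % SHARE_EVERY == 0:
        snapshot = Q.copy()  # everyone broadcasts its current table at once
        for i in range(N_AGENTS):
            heard = [snapshot[j] for j in range(N_AGENTS)
                     if j != i and rng.random() > P_LOSS]
            if not heard:
                continue  # total data loss this round: fall back to local knowledge
            stack = np.stack([snapshot[i]] + heard)
            # Shared knowledge: per state-action entry with max |Q| across the
            # tables that arrived (assumed merge rule, after Q-RTS).
            best = np.take_along_axis(
                stack, np.abs(stack).argmax(axis=0)[None], axis=0)[0]
            Q[i] = (1 - COOP) * Q[i] + COOP * best

print("Greedy policy of agent 0:", Q[0].argmax(axis=1))
```

Because each agent merges only the messages that survive the channel and otherwise keeps its local table, the scheme degrades gracefully under data loss and is indifferent to how many peers are present in a given round, which is the resilience property the paper attributes to DQ-RTS.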