NOISY NETWORKS FOR EXPLORATION


9 Jul 2019 | Meire Fortunato*, Mohammad Gheshlaghi Azar*, Bilal Piot*, Jacob Menick, Matteo Hessel, Ian Osband, Alex Graves, Volodymyr Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, Shane Legg
NoisyNet is a deep reinforcement learning agent that adds parametric noise to its network weights; the resulting stochasticity in the agent's policy can be exploited to drive exploration. The noise parameters are learned jointly with the network weights by gradient descent, so the approach is straightforward to implement and adds minimal computational overhead. Experiments on 57 Atari games show that NoisyNet substantially outperforms the conventional exploration heuristics used by A3C, DQN, and Dueling agents, in some cases advancing them from subhuman to superhuman performance. The method adapts readily to a wide range of deep reinforcement learning algorithms, and the stochasticity induced by the learned noise provides a structured alternative to local perturbation schemes such as ε-greedy action selection or entropy regularization.
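
To make the mechanism concrete, below is a minimal PyTorch sketch of a noisy linear layer, assuming the factorised Gaussian noise variant described in the paper: each weight and bias is a learnable mean plus a learnable scale multiplied by sampled noise, y = (μ_w + σ_w ⊙ ε_w)x + (μ_b + σ_b ⊙ ε_b), and both μ and σ receive gradients. The class name NoisyLinear, the initialization constant sigma0, and the per-forward resampling schedule are illustrative choices, not taken from the paper's released code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer whose weights and biases carry learnable parametric
    Gaussian noise (factorised-noise variant): parameter = mu + sigma * eps."""

    def __init__(self, in_features: int, out_features: int, sigma0: float = 0.5):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        # Learnable means and noise scales; both are trained by gradient descent.
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))
        # Noise buffers: resampled, never trained, and excluded from gradients.
        self.register_buffer("eps_in", torch.zeros(in_features))
        self.register_buffer("eps_out", torch.zeros(out_features))
        self.sigma0 = sigma0
        self.reset_parameters()

    def reset_parameters(self) -> None:
        bound = 1.0 / math.sqrt(self.in_features)
        self.weight_mu.data.uniform_(-bound, bound)
        self.bias_mu.data.uniform_(-bound, bound)
        # Initialise the noise scales to a small constant (assumed hyperparameter).
        self.weight_sigma.data.fill_(self.sigma0 / math.sqrt(self.in_features))
        self.bias_sigma.data.fill_(self.sigma0 / math.sqrt(self.in_features))

    @staticmethod
    def _f(x: torch.Tensor) -> torch.Tensor:
        # Scaling f(x) = sign(x) * sqrt(|x|) applied to each factorised noise vector.
        return x.sign() * x.abs().sqrt()

    def sample_noise(self) -> None:
        # Factorised noise: one vector per input unit, one per output unit.
        self.eps_in.normal_()
        self.eps_out.normal_()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Resampling on every forward pass is a simplification; an agent may
        # instead resample once per environment step or per training batch.
        self.sample_noise()
        eps_w = torch.outer(self._f(self.eps_out), self._f(self.eps_in))
        eps_b = self._f(self.eps_out)
        weight = self.weight_mu + self.weight_sigma * eps_w
        bias = self.bias_mu + self.bias_sigma * eps_b
        return F.linear(x, weight, bias)
```

In this sketch, exploration arises because every forward pass uses a slightly different weight matrix, and the gradients flowing into weight_sigma let the agent learn how much perturbation is useful, rather than relying on a fixed exploration rate.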