An Evolutionary Algorithm that Constructs Recurrent Neural Networks

July 16, 1993 | Peter J. Angeline, Gregory M. Saunders and Jordan B. Pollack
This paper presents GNARL (GeNeralized Acquisition of Recurrent Links), an evolutionary algorithm that simultaneously acquires both the structure and the weights of recurrent neural networks. Unlike traditional training methods, which assume a fixed architecture, GNARL takes an evolutionary programming approach in which complex behaviors and topologies can emerge unconstrained by artificial architectural assumptions. Because the architecture itself is part of the search space, the algorithm can explore a wide range of network structures and weights rather than merely tuning the parameters of a predetermined topology.

The authors contrast GNARL with genetic algorithms, arguing that the reliance of GAs on crossover induces a form of structural hill climbing that makes them less suitable for network acquisition. GNARL instead uses mutation as its primary reproductive operator, which allows a more effective exploration of the space of networks. In each generation, a fitness function scores every network's performance on the given task; selection retains the most fit networks, which are then mutated to generate new ones.
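The loop just described, score, select, mutate, can be made concrete with a short sketch. This is not the authors' implementation: the `Network` representation, the population size, and the retention of the top half are illustrative assumptions, and `fitness` and `mutate` stand in for the task-specific scoring and the paper's mutation operators.

```python
from dataclasses import dataclass, field

@dataclass
class Network:
    """Placeholder recurrent network: node ids plus weighted links."""
    nodes: set = field(default_factory=lambda: {0, 1})   # node ids (illustrative)
    links: dict = field(default_factory=dict)            # (src, dst) -> weight

def evolve(fitness, mutate, pop_size=50, generations=100):
    """Mutation-only evolutionary loop in the style described above.

    fitness: Network -> float, higher is better (task performance)
    mutate:  Network -> Network, returns a perturbed copy of its argument
    """
    population = [Network() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]         # retain the fitter half
        children = [mutate(p) for p in parents]   # one offspring per parent
        population = parents + children           # note: no crossover anywhere
    return max(population, key=fitness)
```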
The paper demonstrates GNARL on a variety of tasks, including the enable-trigger problem, the induction of regular languages, and the ant problem. In the enable-trigger problem, GNARL evolved a network that handles input sequences of varying lengths, demonstrating an ability to generalize. In the ant problem, it evolved a network that navigates a grid and collects food efficiently, outperforming other methods in both speed and accuracy.

These results show that GNARL can acquire both the structure and the weights of recurrent networks, and that it can do so within a number of network evaluations comparable to training a network with a static architecture on the same task. GNARL also appears to generalize better consistently, possibly due to its selective inclusion and exclusion of individual links. The algorithm's minimal representational constraints, combined with appropriate search dynamics, make it versatile and effective on complex tasks.
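The selective inclusion and exclusion of links noted above is the job of structural mutation. The sketch below, which operates on the `Network` placeholder from the earlier sketch, illustrates one hedged reading of the paper's idea of scaling mutation severity by an instantaneous temperature, so that networks far from the best fitness change more drastically; the specific operators, probabilities, and the `alpha` parameter are assumptions for illustration, not the paper's exact settings.

```python
import copy
import random

def temperature(fit, fit_max):
    """Instantaneous temperature: networks far from the best mutate more."""
    return 1.0 - fit / fit_max if fit_max > 0 else 1.0

def mutate(net, fit, fit_max, alpha=1.0):
    """Temperature-scaled parametric and structural mutation (illustrative).

    Parametric: perturb every weight with Gaussian noise scaled by T.
    Structural: add or delete a link with probability proportional to T,
    so poor networks explore topology while good ones fine-tune weights.
    """
    child = copy.deepcopy(net)
    t = temperature(fit, fit_max)
    # parametric mutation: Gaussian perturbation of every link weight
    for key in child.links:
        child.links[key] += random.gauss(0.0, alpha * t)
    # structural mutation: selectively exclude or include a link
    if random.random() < t:
        if child.links and random.random() < 0.5:
            child.links.pop(random.choice(list(child.links)))  # delete a link
        else:
            src = random.choice(list(child.nodes))
            dst = random.choice(list(child.nodes))
            child.links.setdefault((src, dst), random.gauss(0.0, 1.0))  # add a link
    return child
```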