July 16, 1993 | Peter J. Angeline, Gregory M. Saunders and Jordan B. Pollack
The paper presents GNARL (GeNeralized Acquisition of Recurrent Links), an evolutionary program designed to construct recurrent neural networks. Standard methods for inducing both the structure and weight values of recurrent networks often fit an assumed class of architectures to every task, oversimplifying the complex interaction between network structure and function. Evolutionary computation, which includes genetic algorithms and evolutionary programming, is a population-based search method that has shown promise on such complex tasks. The authors argue, however, that genetic algorithms are inappropriate for network acquisition, and they describe GNARL, which simultaneously acquires both the structure and weights of recurrent networks.
GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods. The algorithm uses a population of search space positions to locate extrema of a function defined over the search space, with members ranked according to a fitness function. New population members, called offspring, are created using specialized reproduction heuristics.
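The population-based search described above can be sketched as a simple generational loop: rank the population by fitness, keep the better-ranked members as parents, and create offspring from them with a reproduction heuristic. This is an illustrative sketch of that generic scheme, not GNARL's actual implementation; the function names, truncation selection, and constants are assumptions for the example.

```python
import random

def evolve(init_individual, fitness, mutate, pop_size=50, generations=100):
    """Generic evolutionary loop (illustrative, not GNARL itself):
    rank by fitness, keep the best half as parents, and refill the
    population with mutated offspring of those parents."""
    population = [init_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # rank by fitness
        parents = population[: pop_size // 2]        # truncation selection
        offspring = [mutate(p) for p in parents]     # reproduction heuristic
        population = parents + offspring             # parents survive (elitism)
    return max(population, key=fitness)

# Toy usage on a one-dimensional landscape: maximize -(x - 3)^2.
random.seed(0)  # for reproducibility of the sketch
best = evolve(
    init_individual=lambda: random.uniform(-10, 10),
    fitness=lambda x: -(x - 3) ** 2,
    mutate=lambda x: x + random.gauss(0, 0.5),
)
```

Because parents are carried over each generation, the best fitness found never decreases; on this toy landscape the returned individual ends up close to the optimum at x = 3.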
The paper discusses the limitations of genetic algorithms in evolving neural networks, highlighting issues such as structural hill climbing and the need for a strong interpretation function. In contrast, evolutionary programming, which manipulates networks directly and avoids crossover between networks, is found to be more suitable for simultaneous structural and parametric learning in recurrent networks.
The GNARL algorithm is described in detail, including its selection, reproduction, and mutation processes. The algorithm's ability to evolve recurrent networks for various problems, such as the enable-trigger task, regular language induction, and the ant problem, is demonstrated through experiments. The results show that GNARL can evolve networks with complex behaviors and topologies, outperforming or matching the performance of other methods in terms of speed and generalization.
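One distinctive element of GNARL's mutation process is that the severity of a mutation is scaled by a network's "temperature", an inverse measure of its fitness, so poorly performing networks are perturbed more than well-performing ones. The sketch below illustrates that idea for parametric (weight) mutation only; the dictionary representation, parameter names, and the scaling constant `alpha` are assumptions for the example, not the paper's code.

```python
import random

def temperature(fitness, f_max):
    """Instantaneous temperature: 0 for a maximally fit network,
    approaching 1 as fitness drops, so worse networks mutate more."""
    return 1.0 - fitness / f_max

def mutate_weights(weights, fitness, f_max, alpha=1.0):
    """Parametric mutation sketch: perturb each link weight with
    Gaussian noise whose scale grows with the network's temperature."""
    t = temperature(fitness, f_max)
    return {link: w + random.gauss(0, 1) * alpha * t
            for link, w in weights.items()}

# Toy usage: a nearly maximally fit network is barely perturbed.
w = {("in", "hidden"): 0.4, ("hidden", "out"): -1.2}
mutated = mutate_weights(w, fitness=9.9, f_max=10.0)
```

A network at maximum fitness has temperature 0 and is left unchanged, while an unfit one receives large perturbations; the paper applies the same temperature idea to structural mutations (adding and deleting nodes and links), which this sketch omits.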