A Resource-Allocating Network for Function Interpolation

Summer 1991 | John Platt
This paper presents a resource-allocating network (RAN) for function interpolation that dynamically adds new computational units whenever an unusual pattern is presented. The RAN forms compact representations, learns quickly, and can be used at any stage of learning without re-presenting earlier patterns.

The network starts with no stored patterns and is built from Gaussian-like units, each responding only to a local region of input space. Learning takes one of two actions per pattern. If the network performs poorly (the input lies far from every existing center and the output error is large), a new unit is allocated to correct the response. If the network performs well, the existing parameters (unit heights and centers) are updated by LMS gradient descent; adjusting the centers lets the network correct small errors without allocating new units. Both the novelty condition and the center adjustment prove crucial to the network's performance.

Evaluated on prediction of the Mackey-Glass chaotic time series, the RAN learns faster than backpropagation while using a comparable number of synapses, and it reaches accuracy comparable to other methods with significantly less computation time. It is also more compact than nonparametric methods such as Parzen windows and k-nearest neighbors, whose number of stored patterns grows linearly with the number of presented patterns: the RAN's storage grows sub-linearly and saturates at a maximum. The network outperforms these methods in compactness at comparable accuracy, using fewer units and parameters, and it is effective for both offline and online learning.
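The learning loop described above (allocate a new unit when the input is far from every center and the error is large, otherwise adjust heights and centers by LMS gradient descent) can be sketched in Python. This is a minimal illustration, not Platt's exact formulation: the hyperparameter names and values (`delta`, `epsilon`, `kappa`, `alpha`) are assumed placeholders, and the paper's decaying distance threshold is replaced by a fixed one.

```python
import numpy as np

class RAN:
    """Sketch of a resource-allocating network for function interpolation.

    Hyperparameter names and defaults are illustrative, not from the paper.
    """

    def __init__(self, dim, delta=0.5, epsilon=0.02, kappa=0.87, alpha=0.05):
        self.delta = delta      # novelty: distance-to-nearest-center threshold
        self.epsilon = epsilon  # novelty: output-error threshold
        self.kappa = kappa      # width scale for newly allocated units
        self.alpha = alpha      # LMS learning rate
        self.centers = np.empty((0, dim))
        self.widths = np.empty(0)
        self.heights = np.empty(0)
        self.offset = 0.0       # constant output term

    def _responses(self, x):
        # Gaussian units respond only to a local region of input space.
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / self.widths ** 2)

    def predict(self, x):
        if self.heights.size == 0:
            return self.offset
        return float(self.heights @ self._responses(x) + self.offset)

    def learn(self, x, target):
        err = target - self.predict(x)
        if self.centers.shape[0]:
            nearest = float(np.min(np.linalg.norm(self.centers - x, axis=1)))
        else:
            nearest = np.inf
        if nearest > self.delta and abs(err) > self.epsilon:
            # Allocate: center the new unit on the input, set its height to
            # the residual error, and scale its width to the nearest center.
            width = self.kappa * (nearest if np.isfinite(nearest) else self.delta)
            self.centers = np.vstack([self.centers, x])
            self.widths = np.append(self.widths, width)
            self.heights = np.append(self.heights, err)
        else:
            # Otherwise: LMS gradient descent on heights, centers, and offset.
            if self.heights.size:
                r = self._responses(x)
                self.heights += self.alpha * err * r
                pull = (err * self.heights * r / self.widths ** 2)[:, None]
                self.centers += self.alpha * 2.0 * pull * (x - self.centers)
            self.offset += self.alpha * err
```

Note that allocation immediately zeroes the error at the presented input: the new unit responds with 1.0 at its own center and its height equals the residual error. Because units are only allocated where no existing center is within `delta`, the unit count is bounded by the input region's volume rather than by the number of presentations, which is the source of the sub-linear storage growth.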
The RAN's ability to dynamically allocate resources makes it efficient and effective for function interpolation.