A Resource-Allocating Network for Function Interpolation


Summer 1991 | John Platt
The paper introduces a Resource-Allocating Network (RAN) that dynamically allocates new computational units to handle unusual patterns, forming a compact representation while learning rapidly and efficiently. The network uses Gaussian units, which respond only to a local region of the input space, and adjusts their parameters by standard LMS gradient descent. The RAN learns by allocating a new unit when a pattern is not well represented and by updating existing units when it is. This allows the network's size to scale sub-linearly with the number of input patterns, making it suitable for both online and offline learning.

The RAN is compared with other learning algorithms, such as backpropagation and hashing B-splines, on predicting the Mackey-Glass chaotic time series. The results show that the RAN achieves accuracy comparable to backpropagation while using significantly fewer computational resources. Its performance is further improved by adjusting the centers of the Gaussian units and by using a two-part novelty condition to identify novel patterns. The RAN's compact representation and efficient learning make it well suited to hardware implementation and statistical applications.
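The learning rule described above can be sketched in code. The outline below is a minimal, illustrative implementation, not the paper's exact algorithm: the parameter names (`delta`, `epsilon`, `kappa`, `lr`) and default values are assumptions standing in for the paper's distance threshold, error threshold, overlap factor, and learning rate. It shows the two-part novelty condition (the input is far from every existing center AND the prediction error is large) triggering allocation of a new Gaussian unit, with LMS gradient descent on the heights, bias, and centers otherwise.

```python
import numpy as np

class RAN:
    """Minimal sketch of a Resource-Allocating Network with Gaussian units.
    Hyperparameter names and defaults are illustrative, not from the paper."""

    def __init__(self, dim, delta=0.3, epsilon=0.02, kappa=0.87, lr=0.05):
        self.delta = delta      # distance threshold of the novelty condition
        self.epsilon = epsilon  # error threshold of the novelty condition
        self.kappa = kappa      # overlap factor for new unit widths
        self.lr = lr            # LMS learning rate
        self.centers = np.empty((0, dim))  # Gaussian centers c_i
        self.widths = np.empty(0)          # Gaussian widths sigma_i
        self.heights = np.empty(0)         # output weights h_i
        self.bias = 0.0

    def _activations(self, x):
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / self.widths ** 2)

    def predict(self, x):
        if len(self.widths) == 0:
            return self.bias
        return self.bias + self.heights @ self._activations(x)

    def learn(self, x, y):
        err = y - self.predict(x)
        if len(self.widths):
            dist = np.min(np.linalg.norm(self.centers - x, axis=1))
        else:
            dist = np.inf
        if dist > self.delta and abs(err) > self.epsilon:
            # Two-part novelty condition met: allocate a new unit centered
            # on the input, with width proportional to its nearest neighbor
            # distance and height equal to the residual error.
            width = self.kappa * (dist if np.isfinite(dist) else self.delta)
            self.centers = np.vstack([self.centers, x])
            self.widths = np.append(self.widths, width)
            self.heights = np.append(self.heights, err)
        elif len(self.widths):
            # Pattern already well represented: LMS gradient descent on
            # heights, bias, and (per the paper's enhancement) centers.
            a = self._activations(x)
            self.heights += self.lr * err * a
            self.bias += self.lr * err
            scale = 2.0 * err * self.heights * a / self.widths ** 2
            self.centers += self.lr * scale[:, None] * (x - self.centers)
        else:
            self.bias += self.lr * err
```

Streaming samples of a target function through `learn` allocates units only where the current representation fails, which is why the stored network stays compact relative to a fixed grid of units.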