An Analysis of Single-Layer Networks in Unsupervised Feature Learning

2011 | Adam Coates, Honglak Lee, Andrew Y. Ng
This paper examines how basic architectural and preprocessing choices affect unsupervised feature learning in single-layer networks. The authors apply several off-the-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixture models) to the CIFAR-10, NORB, and STL-10 datasets. They find that the number of hidden nodes (features), the receptive field size, the step-size (stride) between extracted features, and whitening all have a significant impact on classification performance. Notably, a single layer of features learned with an algorithm as simple as K-means clustering achieves state-of-the-art results on CIFAR-10 and NORB. The best configuration uses 1600 features, a 1-pixel stride, a 6-pixel receptive field, and whitened inputs. The results underscore the value of dense feature extraction and large feature dictionaries: increasing either tends to improve accuracy. Overall, the findings suggest that while more complex algorithms may have greater representational power, simpler and faster algorithms can achieve competitive results when these parameters are well tuned.
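
To make the pipeline concrete, below is a minimal sketch of the kind of procedure the paper evaluates: sample small patches from training images, ZCA-whiten them, and learn a dictionary of centroids with K-means. This is an illustration under stated assumptions, not the authors' code; the function names, the input image layout (N, H, W, C), and the whitening regularizer eps are hypothetical choices, while the patch size (6 pixels) and dictionary size (1600) follow the paper's best-performing settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(images, patch_size=6, n_patches=100_000, rng=None):
    """Sample random patch_size x patch_size patches from images of shape
    (N, H, W, C) and return them flattened as (n_patches, patch_size**2 * C).
    Assumes a 4-D float array; layout is an illustrative assumption."""
    rng = rng or np.random.default_rng(0)
    n, h, w, c = images.shape
    patches = np.empty((n_patches, patch_size * patch_size * c))
    for i in range(n_patches):
        img = images[rng.integers(n)]
        y = rng.integers(h - patch_size + 1)
        x = rng.integers(w - patch_size + 1)
        patches[i] = img[y:y + patch_size, x:x + patch_size].ravel()
    return patches

def zca_whiten(patches, eps=0.1):
    """ZCA whitening: decorrelate patch dimensions and equalize their
    variance. eps is a small regularizer (value chosen for illustration)."""
    patches = patches - patches.mean(axis=0)
    cov = np.cov(patches, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    zca = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return patches @ zca, zca

# Hypothetical usage: learn K = 1600 centroids (the paper's best setting).
# patches = extract_patches(train_images)   # e.g. CIFAR-10 training images
# white_patches, zca = zca_whiten(patches)
# kmeans = KMeans(n_clusters=1600, n_init=1).fit(white_patches)
```

At test time, each centroid acts as one feature: patches are extracted densely (the paper's 1-pixel stride), whitened with the same ZCA transform, and mapped against the 1600 centroids to produce the feature representation fed to a linear classifier.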