Slow Feature Analysis: Unsupervised Learning of Invariances

2002 | Laurenz Wiskott, Terrence J. Sejnowski
Slow Feature Analysis (SFA) is an unsupervised learning method for extracting invariant or slowly varying features from temporal input signals. It expands the input signal nonlinearly, spheres the expanded signal, and applies principal component analysis (PCA) to its time derivative; the directions of least variance in the derivative correspond to the slowest-varying output components. Within the chosen function family, SFA is guaranteed to find the optimal solution, and it extracts decorrelated features ordered by their slowness (degree of invariance).

The learning problem is to find an input-output function that minimizes the temporal variation of the output signal while still conveying information about the input. This is formalized as an optimization problem under three constraints on the output components: zero mean, unit variance, and mutual decorrelation. The nonlinear expansion reduces the problem to a linear one, which can be solved in closed form by sphering and PCA.

The algorithm proceeds in four steps: normalize the input signal, expand it nonlinearly, sphere the expanded signal, and apply PCA to its time derivative. For hierarchical learning, the process is repeated, with the output of one stage serving as input to the next; this allows high-dimensional signals to be processed and complex features to be extracted.

Applied to visual data, SFA extracts complex-cell responses, disparity, motion, and other features, and it can also isolate rare features such as slowly or rarely varying signals. Implemented as a hierarchical network, with each SFA module learning slow features from the previous module's output, SFA learns translation, size, rotation, contrast, and illumination invariance for one-dimensional objects from training stimuli, generalizing well even with few training objects. The authors also compare SFA with previous approaches to learning invariances.
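The four algorithmic steps can be sketched in a few lines of NumPy. This is an illustrative implementation under simplifying assumptions, not the paper's reference code: it uses a quadratic expansion, approximates the time derivative by finite differences, and the function names (`quadratic_expand`, `sfa`) are hypothetical.

```python
import numpy as np

def quadratic_expand(x):
    """Expand each input vector with all monomials of degree 1 and 2."""
    n, d = x.shape
    quad = np.stack([x[:, i] * x[:, j]
                     for i in range(d) for j in range(i, d)], axis=1)
    return np.concatenate([x, quad], axis=1)

def sfa(x, n_features=1):
    """Minimal SFA sketch: expand, sphere, then PCA on the time derivative.

    x: array of shape (time, dim). Returns the n_features slowest outputs,
    each with (approximately) zero mean, unit variance, and decorrelated.
    """
    # Step 1+2: nonlinear expansion, then remove the mean
    z = quadratic_expand(x)
    z = z - z.mean(axis=0)
    # Step 3: sphere (whiten) the expanded signal via an eigendecomposition
    cov = z.T @ z / len(z)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-7 * eigval.max()          # drop near-singular directions
    sphering = eigvec[:, keep] / np.sqrt(eigval[keep])
    zs = z @ sphering
    # Step 4: PCA on the time derivative; the eigenvectors with the SMALLEST
    # eigenvalues (np.linalg.eigh returns them in ascending order) give the
    # slowest-varying output components
    dz = np.diff(zs, axis=0)
    dval, dvec = np.linalg.eigh(dz.T @ dz / len(dz))
    return zs @ dvec[:, :n_features]
```

As a toy check in the spirit of the paper's examples, one can mix a slow signal into fast ones, e.g. `x1 = sin(t) + cos(11t)**2`, `x2 = cos(11t)`: the slowest extracted feature then recovers `sin(t)` (up to sign), since `x1 - x2**2` lies in the span of the quadratic expansion.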
SFA is shown to be effective in learning translation invariance and other visual features, with results validated through simulations and comparisons with other methods.