14 Dec 2016 | Zhiguang Wang, Weizhong Yan, Tim Oates
This paper proposes a simple yet strong baseline for time series classification using deep neural networks. The proposed models are end-to-end, without any preprocessing or feature engineering. The Fully Convolutional Network (FCN) achieves premium performance compared to other state-of-the-art approaches, while the ResNet structure also performs competitively. The global average pooling in the convolutional model enables the exploitation of the Class Activation Map (CAM) to identify the contributing regions in the raw data for specific labels. The models provide a simple choice for real-world applications and a good starting point for future research.
The paper evaluates three deep neural network architectures: Multilayer Perceptrons (MLP), Fully Convolutional Networks (FCN), and Residual Networks (ResNet). The MLP stacks three fully-connected layers with ReLU activation and dropout. The FCN serves as a feature extractor, stacking blocks of 1-D convolution, batch normalization, and ReLU, and replacing the final fully-connected layer with global average pooling. The ResNet deepens the network by wrapping convolutional blocks in shortcut (residual) connections.
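A minimal NumPy sketch of the FCN forward pass described above, plus a ResNet-style shortcut. The filter counts, class count, and series length here are hypothetical stand-ins (the paper's FCN uses three blocks of 128, 256, and 128 filters with kernel sizes 8, 5, and 3; they are scaled down so the pure-Python loops stay fast), and batch normalization is omitted for brevity since it can be folded into the conv weights at inference time:

```python
import numpy as np

def conv1d_same(x, w, b):
    """1-D convolution with 'same' zero padding.
    x: (c_in, length); w: (c_out, c_in, k); b: (c_out,)."""
    c_out, c_in, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, k - 1 - pad)))
    length = x.shape[1]
    out = np.empty((c_out, length))
    for o in range(c_out):
        for t in range(length):
            out[o, t] = np.sum(w[o] * xp[:, t:t + k]) + b[o]
    return out

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fcn_forward(x, params):
    """Conv blocks -> global average pooling -> softmax classifier.
    Returns class probabilities, the last feature maps, and the GAP vector."""
    h = x
    for w, b in params["blocks"]:
        h = relu(conv1d_same(h, w, b))   # batch norm omitted in this sketch
    gap = h.mean(axis=1)                 # global average pooling over time
    probs = softmax(params["W"] @ gap + params["b"])
    return probs, h, gap

def residual_block(h, blocks):
    """ResNet-style shortcut: add the block input to its output
    before the final ReLU (assumes matching channel counts)."""
    out = h
    for w, b in blocks[:-1]:
        out = relu(conv1d_same(out, w, b))
    w, b = blocks[-1]
    return relu(conv1d_same(out, w, b) + h)

# toy configuration (hypothetical sizes; the paper uses 128/256/128 filters)
rng = np.random.default_rng(0)
def init_block(c_in, c_out, k):
    return rng.normal(0, 0.1, (c_out, c_in, k)), np.zeros(c_out)

params = {
    "blocks": [init_block(1, 8, 8), init_block(8, 16, 5), init_block(16, 8, 3)],
    "W": rng.normal(0, 0.1, (3, 8)),     # 3 hypothetical classes
    "b": np.zeros(3),
}
x = rng.normal(size=(1, 50))             # one univariate series, length 50
probs, feature_maps, gap = fcn_forward(x, params)
```

Keeping the last feature maps around (rather than only the probabilities) is what makes the CAM visualization discussed later possible.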
The experiments are conducted on the UCR time series repository, which includes 44 distinct datasets. The models are trained with different optimization algorithms and evaluated using test error rates and Mean Per-Class Error (MPCE). The results show that FCN and ResNet perform well, with FCN achieving the best performance on three metrics. The MPCE score is used as an unbiased evaluation measure, and the results show that the five approaches (COTE, MCNN, BOSS, FCN, ResNet) are clustered in the best group.
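MPCE normalizes each dataset's error rate by its class count before averaging, so datasets with many classes do not dominate the aggregate. A minimal sketch of that computation, with made-up error rates:

```python
import numpy as np

def mpce(error_rates, num_classes):
    """Mean Per-Class Error: for dataset k with test error e_k and
    c_k classes, PCE_k = e_k / c_k; MPCE is the mean PCE over datasets."""
    e = np.asarray(error_rates, dtype=float)
    c = np.asarray(num_classes, dtype=float)
    return float(np.mean(e / c))

# two hypothetical datasets: 20% error over 4 classes, 10% over 2 classes
score = mpce([0.2, 0.1], [4, 2])   # mean(0.05, 0.05) = 0.05
```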
The paper also discusses the use of CAM to localize the contributing regions in the raw data for specific labels. The CAM provides a natural way to interpret the class-specific regions in the data and allows for visualization of the predicted class scores on any given time series. The results show that the models generalize well, and the use of batch normalization and global average pooling helps reduce overfitting.
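Because the class score is a linear function of the globally averaged feature maps, the CAM for a class is just the weighted sum of the final feature maps using that class's softmax weights, M_c(t) = Σ_k w_k^c f_k(t). A small sketch with made-up numbers:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (n_filters, length), output of the last conv block;
    class_weights: (n_filters,), softmax weights for the target class.
    Returns a per-time-step relevance score of the same length."""
    return class_weights @ feature_maps

# toy example: 2 filters over 4 time steps
f = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.]])
w = np.array([1.0, 0.5])
cam = class_activation_map(f, w)   # relevance peaks where the filters fire
```

High CAM values mark the time-series regions that most pushed the network toward the target label, which is what the paper visualizes.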
The paper concludes that deep neural networks provide a strong baseline for time series classification, with the FCN achieving premium performance. The models are simple to implement and provide a good starting point for future research. The results also show that the use of deep neural networks can be effective on small datasets, and the exploration of deeper architectures can lead to better performance. The paper also discusses the importance of feature visualization and analysis, and the effectiveness of 1-D convolution in capturing local patterns.