14 May 2019 | Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, Pierre-Alain Muller
This paper reviews the current state-of-the-art performance of deep learning algorithms for Time Series Classification (TSC). It highlights the gap between traditional TSC methods, such as the Nearest Neighbor (NN) classifier with Dynamic Time Warping (DTW), and deep learning approaches. The authors present an empirical study of recent deep neural network (DNN) architectures for TSC, including Multi-Layer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Echo State Networks (ESNs). They evaluate these models on both univariate and multivariate time series datasets using a unified taxonomy and an open-source deep learning framework. The study involves training 8,730 deep learning models on 97 datasets, making it the most comprehensive study of DNNs for TSC to date. The paper also discusses the impact of random initialization on DNN performance and explores methods to improve interpretability. The main contributions include a practical guide to adapting deep learning for TSC, a detailed taxonomy of DNNs, and a comprehensive evaluation of their performance.
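To make the convolutional family of architectures concrete, the sketch below shows a minimal 1D-CNN time series classifier in Keras/TensorFlow (the library underlying the authors' open-source framework). This is an illustrative baseline under assumed layer sizes and hyperparameters, not the exact networks benchmarked in the paper.

```python
# Minimal sketch of a 1D-CNN classifier for univariate or multivariate time series.
# Layer widths, kernel sizes, and training settings here are assumptions for illustration.
import numpy as np
import tensorflow as tf
from tensorflow import keras

def build_cnn(series_length, n_channels, n_classes):
    inputs = keras.Input(shape=(series_length, n_channels))
    # Stacked 1D convolutions extract local temporal patterns.
    x = keras.layers.Conv1D(64, kernel_size=8, padding="same", activation="relu")(inputs)
    x = keras.layers.Conv1D(128, kernel_size=5, padding="same", activation="relu")(x)
    x = keras.layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(x)
    # Global average pooling over the time axis keeps the head independent of series length.
    x = keras.layers.GlobalAveragePooling1D()(x)
    outputs = keras.layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy usage on random data shaped (samples, time steps, channels).
X_train = np.random.randn(100, 128, 1).astype("float32")
y_train = np.random.randint(0, 3, size=100)
model = build_cnn(series_length=128, n_channels=1, n_classes=3)
model.fit(X_train, y_train, epochs=2, batch_size=16, verbose=0)
```

For multivariate datasets, only `n_channels` changes; the convolutions slide over time while mixing all channels at each step.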