This paper proposes two deep learning models for time series classification: LSTM-FCN and ALSTM-FCN. Both augment a fully convolutional network (FCN) with a long short-term memory (LSTM) recurrent neural network to enhance performance: LSTM-FCN adds an LSTM sub-module alongside the FCN, while ALSTM-FCN further equips the LSTM with an attention mechanism. The models require minimal preprocessing and no extensive feature engineering, and the attention mechanism in ALSTM-FCN additionally allows the decision process of the LSTM cell to be visualized.

The proposed models are evaluated on all 85 University of California, Riverside (UCR) time series benchmarks using accuracy, rank-based statistics, and mean per-class error. Both achieve state-of-the-art performance, outperforming most existing models and significantly improving results on the majority of datasets. Fine-tuning is also proposed as a way to further enhance performance: ALSTM-FCN outperforms LSTM-FCN on some datasets, while LSTM-FCN benefits more from fine-tuning.

The paper concludes that LSTM RNNs can effectively supplement FCN modules for time series classification. Future work includes extending the models to multivariate time series and investigating why the attention LSTM cell sometimes fails to match the performance of the plain LSTM cell.
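To make the architecture concrete, below is a minimal sketch of the LSTM-FCN model in Keras (TensorFlow 2.x). The layer sizes (Conv1D filters 128/256/128 with kernel sizes 8/5/3, a small LSTM with heavy dropout) and the "dimension shuffle" before the LSTM follow the configuration described in the paper, but this is an illustrative reconstruction rather than the authors' reference implementation; the helper name `build_lstm_fcn` and the example dimensions are assumptions.

```python
# Illustrative sketch of LSTM-FCN (not the authors' reference code).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lstm_fcn(n_timesteps: int, n_classes: int, lstm_cells: int = 8) -> tf.keras.Model:
    # Univariate series enter as (timesteps, 1).
    inputs = layers.Input(shape=(n_timesteps, 1))

    # LSTM branch: the paper's "dimension shuffle" transposes the input
    # so the LSTM sees the whole series as one multivariate time step.
    x = layers.Permute((2, 1))(inputs)            # -> (1, timesteps)
    x = layers.LSTM(lstm_cells)(x)
    x = layers.Dropout(0.8)(x)

    # FCN branch: three temporal convolution blocks, each with batch
    # normalization and ReLU, followed by global average pooling.
    y = inputs
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        y = layers.Conv1D(filters, kernel, padding="same")(y)
        y = layers.BatchNormalization()(y)
        y = layers.Activation("relu")(y)
    y = layers.GlobalAveragePooling1D()(y)

    # Concatenate both branches and classify with softmax.
    out = layers.Concatenate()([x, y])
    out = layers.Dense(n_classes, activation="softmax")(out)
    return models.Model(inputs, out)

# Example: a 140-step univariate series with 5 classes (hypothetical sizes).
model = build_lstm_fcn(n_timesteps=140, n_classes=5)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

The ALSTM-FCN variant would replace the plain `LSTM` layer in the first branch with an attention-augmented LSTM cell, which is what enables the visualization of the cell's decision process mentioned above.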