A State-of-the-Art Survey on Deep Learning Theory and Architectures
Md Zahangir Alom, Tarek M. Taha, Chris Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Mahmudul Hasan, Brian C. Van Essen, Abdul A. S. Awwal and Vijayan K. Asari
Received: 17 January 2019; Accepted: 31 January 2019; Published: 5 March 2019
This paper provides a comprehensive survey of deep learning (DL) theory and architectures, covering advancements since 2012. It discusses various types of DL approaches, including supervised, semi-supervised, unsupervised, and reinforcement learning (RL). The survey highlights the key differences between traditional machine learning and DL, emphasizing the automatic feature learning capability of DL. It also explores the applications of DL in fields such as image processing, computer vision, speech recognition, and natural language processing. The paper details the development of different neural network architectures, such as Deep Neural Networks (DNNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Auto-Encoders (AEs), and Generative Adversarial Networks (GANs). Additionally, it covers advanced techniques for efficient training, transfer learning, and energy-efficient approaches. The survey includes recent frameworks, benchmark datasets, and conferences/journals relevant to the DL community. The paper concludes with a discussion on the challenges and future directions in DL.