A State-of-the-Art Survey on Deep Learning Theory and Architectures

5 March 2019 | Md Zahangir Alom, Tarek M. Taha, Chris Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Mahmudul Hasan, Brian C. Van Essen, Abdul A. S. Awwal and Vijayan K. Asari
This survey provides an overview of deep learning (DL) theory and architectures, covering key DL approaches, their applications, and recent developments. The paper discusses various DL techniques, including supervised, semi-supervised, unsupervised, and reinforcement learning (RL), as well as their applications in fields such as image processing, speech recognition, medical imaging, and robotics. It highlights the advantages of DL, such as its ability to learn features automatically, generalize across tasks, scale to large datasets, and remain robust to input variations.

The survey also addresses challenges in DL, including big data analytics, scalability, energy efficiency, and multi-task learning. It reviews recent advances in DL, including deep neural networks (DNN), convolutional neural networks (CNN), recurrent neural networks (RNN), auto-encoders (AE), deep belief networks (DBN), generative adversarial networks (GAN), and deep reinforcement learning (DRL). The paper also discusses frameworks, SDKs, and benchmark datasets used in DL. It covers the history of DL, key components of DL models, and popular CNN architectures such as LeNet, AlexNet, VGGNet, GoogLeNet, ResNet, and DenseNet. The survey emphasizes the importance of DL across application domains and its potential as a universal learning approach.