Deep Forest

6 Jul 2020 | Zhi-Hua Zhou, Ji Feng
This paper proposes gcForest, a non-differentiable deep learning model built from decision trees and ensemble learning. Unlike deep neural networks (DNNs), which rely on differentiable modules trained by backpropagation, gcForest combines a cascade of decision-tree forests with multi-grained scanning to obtain deep-learning capabilities. The cascade structure provides layer-by-layer processing and in-model feature transformation, while multi-grained scanning helps the model capture structural patterns in the data.

gcForest has far fewer hyper-parameters than DNNs and determines its model complexity automatically from the data. It is robust to hyper-parameter settings and performs well across different domains with the same default configuration. Experiments show that gcForest achieves performance competitive with DNNs on tasks including image classification, face recognition, music classification, hand movement recognition, and sentiment analysis, on both low-dimensional and high-dimensional data. Because it does not rely on backpropagation, it can be trained efficiently with parallel processing. The paper discusses the potential of non-differentiable modules in deep learning, highlights gcForest's advantages in model complexity, hyper-parameter tuning, and performance, and demonstrates that deep models can be constructed without backpropagation.
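To make the cascade idea concrete, below is a minimal sketch of a cascade forest in Python, not the authors' official implementation. Each level holds a few random forests and completely-random-style forests; their class-probability vectors are concatenated with the original features and passed to the next level, and the cascade stops growing when validation accuracy plateaus. Class names, parameters, and the stopping tolerance here are illustrative assumptions; multi-grained scanning is omitted for brevity.

```python
# Sketch of a cascade forest level-growing loop (assumptions: scikit-learn
# forests stand in for the paper's forests; names/parameters are illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score


class CascadeForest:
    def __init__(self, n_pairs=2, n_estimators=100, max_levels=10, tol=1e-4):
        self.n_pairs = n_pairs            # pairs of (random forest, extra-trees) per level
        self.n_estimators = n_estimators
        self.max_levels = max_levels
        self.tol = tol                    # minimum validation-accuracy gain to keep growing
        self.levels = []                  # fitted forests, one list per cascade level

    def _new_level(self):
        forests = []
        for _ in range(self.n_pairs):
            forests.append(RandomForestClassifier(n_estimators=self.n_estimators, n_jobs=-1))
            # max_features=1 approximates the "completely-random" forests in the paper
            forests.append(ExtraTreesClassifier(n_estimators=self.n_estimators,
                                                max_features=1, n_jobs=-1))
        return forests

    def fit(self, X, y, X_val, y_val):
        aug_train, aug_val = X, X_val
        best_acc = 0.0
        for _ in range(self.max_levels):
            forests = self._new_level()
            train_probas, val_probas = [], []
            for f in forests:
                # out-of-fold class vectors reduce overfitting when reused as features
                train_probas.append(cross_val_predict(f, aug_train, y, cv=3,
                                                      method="predict_proba"))
                f.fit(aug_train, y)
                val_probas.append(f.predict_proba(aug_val))
            self.levels.append(forests)
            # stop adding levels once validation accuracy stops improving
            acc = accuracy_score(y_val, np.mean(val_probas, axis=0).argmax(axis=1))
            if acc <= best_acc + self.tol:
                self.levels.pop()
                break
            best_acc = acc
            # augment original features with this level's class vectors
            aug_train = np.hstack([X] + train_probas)
            aug_val = np.hstack([X_val] + val_probas)
        return self

    def predict(self, X):
        aug = X
        for forests in self.levels:
            probas = [f.predict_proba(aug) for f in forests]
            aug = np.hstack([X] + probas)
        # final prediction: average the last level's class vectors
        return np.mean(probas, axis=0).argmax(axis=1)
```

The sketch illustrates the two properties the summary highlights: depth is data-driven (levels are added only while validation accuracy improves), and training needs no backpropagation, so each forest can be fit independently and in parallel.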