Graph Convolutional Networks for Hyperspectral Image Classification

2020 | Danfeng Hong, Member, IEEE, Lianru Gao, Senior Member, IEEE, Jing Yao, Bing Zhang, Fellow, IEEE, Antonio Plaza, Fellow, IEEE, and Jocelyn Chanussot, Fellow, IEEE
This paper presents a novel mini-batch graph convolutional network (miniGCN) for hyperspectral (HS) image classification, which addresses the computational limitations of traditional graph convolutional networks (GCNs) and improves classification performance through feature fusion with convolutional neural networks (CNNs). The authors compare CNNs and GCNs in terms of their ability to extract spatial-spectral features and demonstrate that miniGCNs can be trained in a mini-batch fashion, enabling efficient processing of large-scale HS data. Furthermore, they propose three fusion strategies (additive, element-wise multiplicative, and concatenation) to combine features extracted from CNNs and miniGCNs, resulting in improved classification performance. Extensive experiments on three HS datasets (Indian Pines, Pavia University, and Houston2013) show that miniGCNs outperform traditional GCNs and that the fusion strategies significantly enhance classification accuracy compared to single models. The proposed miniGCN can also infer labels for out-of-sample data without retraining, making it more practical for real-world applications. The study highlights the potential of combining CNNs and GCNs through feature fusion to achieve better performance in HS image classification.
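To make the two key ideas concrete, the sketch below shows (in PyTorch) a single graph-convolution layer applied to a sampled mini-batch subgraph and the three feature-fusion strategies named above. This is a minimal illustrative sketch, not the authors' implementation: the class and function names, tensor shapes, batch size, and feature widths are assumptions chosen for readability.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a mini-batch GCN layer and the three fusion strategies
# (additive, element-wise multiplicative, concatenation). Names and shapes are
# illustrative assumptions, not the paper's code.

class MiniBatchGCNLayer(nn.Module):
    """One GCN layer applied to the subgraph of a sampled mini-batch of pixels."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # x:        (batch, in_dim) spectral features of the sampled pixels
        # adj_norm: (batch, batch) normalized adjacency of the mini-batch subgraph
        return torch.relu(adj_norm @ self.linear(x))

def fuse(cnn_feat, gcn_feat, mode="concat"):
    """Combine CNN and miniGCN features; assumes both branches share the same width."""
    if mode == "add":      # additive fusion
        return cnn_feat + gcn_feat
    if mode == "mul":      # element-wise multiplicative fusion
        return cnn_feat * gcn_feat
    if mode == "concat":   # concatenation fusion
        return torch.cat([cnn_feat, gcn_feat], dim=-1)
    raise ValueError(f"unknown fusion mode: {mode}")

# Example: fuse features for a mini-batch of 64 pixels with 128-dim embeddings
# from each branch, then classify into 16 land-cover classes (identity adjacency
# used here only as a placeholder).
cnn_feat = torch.randn(64, 128)
gcn_feat = MiniBatchGCNLayer(200, 128)(torch.randn(64, 200), torch.eye(64))
logits = nn.Linear(256, 16)(fuse(cnn_feat, gcn_feat, mode="concat"))
```

Because the graph convolution only needs the adjacency of the sampled subgraph, new (out-of-sample) pixels can be pushed through the trained layer at inference time without rebuilding a full-scene graph, which is the practical advantage the paper emphasizes.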