SpectralMamba: Efficient Mamba for Hyperspectral Image Classification


12 Apr 2024 | Jing Yao, Member, IEEE, Danfeng Hong, Senior Member, IEEE, Chenyu Li, and Jocelyn Chanussot, Fellow, IEEE
**Abstract:** Recurrent neural networks and Transformers have dominated hyperspectral (HS) imaging owing to their ability to capture long-range dependencies from spectral sequences. However, these sequential architectures suffer from inefficiencies, such as difficulty in parallelization and computationally expensive attention mechanisms, which hinder their practicality, especially in large-scale remote sensing applications. To address this, SpectralMamba is proposed: a novel deep learning framework for HS image classification built on an efficient state space model. SpectralMamba simplifies the modeling of HS data dynamics at two levels. In the spatial-spectral space, a dynamical mask is learned with efficient convolutions to encode spatial regularity and spectral peculiarity, reducing spectral variability and confusion. In the hidden state space, the merged spectrum is efficiently processed with input-dependent parameters, yielding selective responses without relying on redundant attention or non-parallelizable recurrence. A piece-wise scanning mechanism transfers the continuous spectrum into a sequence of reduced length while maintaining the contextual profile among bands. Extensive experiments on four benchmark HS datasets show that SpectralMamba significantly outperforms classic network architectures in both performance and efficiency.

**Keywords:** Artificial intelligence, efficient, Mamba, hyperspectral image classification, state space model, spatial-spectral, transformer, remote sensing.

**Introduction:** Hyperspectral imaging captures both spatial and spectral information, providing a detailed spectral profile for each pixel. Despite these advantages, HS image classification faces challenges such as the curse of dimensionality and spectral variability. SpectralMamba addresses these issues through a simplified yet adequate modeling of HS data dynamics. The key contributions are:

1. A novel SSM-based backbone network, SpectralMamba, for efficient and effective HS image classification.
2. Piece-wise sequential scanning (PSS) and gated spatial-spectral merging (GSSM) strategies to handle high dimensionality and spectral variability.
3. Extensive experimental results on four benchmark datasets, demonstrating superior performance and computational efficiency compared to classic backbones.

**Methodology:** SpectralMamba consists of three main components: PSS, GSSM, and efficient selective state space (S6) modeling. PSS splits the spectrum into pieces to improve computational efficiency, GSSM adaptively encodes spatial-spectral relationships, and the S6 model enables efficient computation of long-range dependencies.
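To make the data flow concrete, the following is a minimal PyTorch sketch of the three components as described in this summary. The module names (`PieceWiseScan`, `GatedSpatialSpectralMerge`, `SelectiveSSM`, `SpectralMambaSketch`), the depthwise-convolution gating, and all hyperparameters are illustrative assumptions rather than the authors' implementation, and the selective scan is written as an explicit per-step recurrence for readability instead of the parallel scan used in practice.

```python
# Minimal, self-contained sketch of PSS -> GSSM -> S6 for pixel-wise HS classification.
# All design details below are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PieceWiseScan(nn.Module):
    """PSS (assumed form): fold the band axis into pieces of length P, shortening
    the sequence from B bands to ceil(B / P) tokens of dimension P."""
    def __init__(self, piece_len: int):
        super().__init__()
        self.piece_len = piece_len

    def forward(self, x):                        # x: (batch, bands)
        pad = (-x.shape[1]) % self.piece_len     # zero-pad so bands divide evenly
        x = F.pad(x, (0, pad))
        return x.reshape(x.shape[0], -1, self.piece_len)   # (batch, pieces, piece_len)


class GatedSpatialSpectralMerge(nn.Module):
    """GSSM (assumed form): a depthwise 1-D convolution over the piece sequence
    yields a sigmoid mask that gates the tokens, loosely mimicking the learned
    dynamical mask encoding spatial regularity and spectral peculiarity."""
    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.dw_conv = nn.Conv1d(dim, dim, kernel_size,
                                 padding=kernel_size // 2, groups=dim)

    def forward(self, x):                        # x: (batch, pieces, dim)
        mask = torch.sigmoid(self.dw_conv(x.transpose(1, 2))).transpose(1, 2)
        return x * mask


class SelectiveSSM(nn.Module):
    """S6-style selective state space (assumed, naive recurrence for clarity):
    the step size and the B/C projections depend on the input, which is what
    makes the state space model 'selective'."""
    def __init__(self, dim: int, state_dim: int = 16):
        super().__init__()
        self.A = nn.Parameter(-torch.rand(dim, state_dim))   # stable (negative) dynamics
        self.proj_B = nn.Linear(dim, state_dim)
        self.proj_C = nn.Linear(dim, state_dim)
        self.proj_dt = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (batch, L, dim)
        batch, L, dim = x.shape
        h = x.new_zeros(batch, dim, self.A.shape[1])          # hidden state
        dt = F.softplus(self.proj_dt(x))                      # input-dependent step sizes
        B_in, C_out = self.proj_B(x), self.proj_C(x)          # input-dependent params
        ys = []
        for t in range(L):                                    # sequential loop for clarity only
            dA = torch.exp(dt[:, t].unsqueeze(-1) * self.A)   # (batch, dim, state)
            dB = dt[:, t].unsqueeze(-1) * B_in[:, t].unsqueeze(1)
            h = dA * h + dB * x[:, t].unsqueeze(-1)
            ys.append((h * C_out[:, t].unsqueeze(1)).sum(-1)) # (batch, dim)
        return torch.stack(ys, dim=1)                         # (batch, L, dim)


class SpectralMambaSketch(nn.Module):
    """End-to-end pixel classifier: PSS -> embed -> GSSM -> S6 -> pooled logits."""
    def __init__(self, n_classes: int, piece_len: int = 8, dim: int = 64):
        super().__init__()
        self.pss = PieceWiseScan(piece_len)
        self.embed = nn.Linear(piece_len, dim)
        self.gssm = GatedSpatialSpectralMerge(dim)
        self.ssm = SelectiveSSM(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                        # x: (batch, bands) pixel spectra
        tokens = self.embed(self.pss(x))         # (batch, pieces, dim)
        tokens = self.gssm(tokens)
        tokens = self.ssm(tokens)
        return self.head(tokens.mean(dim=1))     # average pooling over pieces
```

The gating mask stands in for GSSM's learned dynamical mask, and the input-dependent `dt`, `B`, and `C` projections are what distinguish the selective S6 model from a fixed linear state space layer.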
**Experiments:** Experiments on four benchmark HS datasets (Houston2013, Longkou, Augsburg, and Botswana) show that SpectralMamba outperforms classic methods in terms of accuracy and computational efficiency. Ablation studies further validate the effectiveness of the key components.

**Conclusion:** SpectralMamba addresses the challenges of spectral redundancy and variability in HS image classification by leveraging state space models. It achieves superior performance and efficiency, making it a promising backbone for HS image classification.
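As a quick shape check, the sketch above can be exercised on a synthetic batch of pixel spectra (144 bands and 15 classes mirror the Houston2013 setting; the data here are random and purely illustrative):

```python
import torch

# Random stand-in for 200 pixel spectra with 144 bands each.
model = SpectralMambaSketch(n_classes=15, piece_len=8, dim=64)
pixels = torch.randn(200, 144)        # (batch, bands)
logits = model(pixels)                # -> torch.Size([200, 15])
print(logits.shape)
```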