MambaTab is a novel, plug-and-play model designed for learning from tabular data. It leverages structured state-space models (SSMs), specifically the Mamba variant, to extract effective representations from data with long-range dependencies efficiently. Unlike existing deep learning models that require extensive preprocessing and tuning, MambaTab delivers superior performance with significantly fewer parameters. Empirical validation on diverse benchmark datasets demonstrates MambaTab's efficiency, scalability, generalizability, and predictive gains, making it a lightweight and versatile solution for tabular data. Key contributions include:
- **Small Model Size**: MambaTab requires substantially fewer model weights than competing deep tabular models.
- **Linear Scalability**: The model's parameters scale linearly with the number of features or sequence length.
- **Efficient Training and Inference**: Only minimal data preprocessing is required, and the simple architecture keeps both training and inference lightweight.
- **Superior Performance**: MambaTab outperforms state-of-the-art baselines, including MLP-, Transformer-, and CNN-based models, while using a small fraction of their parameters.
MambaTab's advantages make it a promising solution for practical applications across various domains, enabling wide applicability with varying computational resources.
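To make the plug-and-play idea concrete, the sketch below shows how a Mamba SSM block could be wired into a small tabular classifier in PyTorch. This is an illustrative assumption, not the authors' reference implementation: the `TabularMambaSketch` wrapper, the per-feature embedding, the mean pooling, and all dimensions are hypothetical choices; only the `Mamba` block from the open-source `mamba_ssm` package is real.

```python
# Minimal, illustrative sketch (not the official MambaTab code).
# Assumes: pip install torch mamba-ssm  (the Mamba kernels require a CUDA GPU).
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # selective state-space (SSM) block


class TabularMambaSketch(nn.Module):
    """Hypothetical wrapper: embed tabular features, run one Mamba block, classify."""

    def __init__(self, d_model: int = 32, num_classes: int = 2):
        super().__init__()
        # Treat each feature column as one "token" and map its scalar value
        # to a learned d_model-dimensional embedding.
        self.feature_embed = nn.Linear(1, d_model)
        self.mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) of already-encoded numeric values
        tokens = self.feature_embed(x.unsqueeze(-1))  # (batch, num_features, d_model)
        hidden = self.norm(self.mamba(tokens))        # SSM scans the feature sequence
        pooled = hidden.mean(dim=1)                   # simple mean pooling over features
        return self.head(pooled)                      # class logits


# Toy usage: a batch of 8 rows with 12 features (requires a CUDA device).
if __name__ == "__main__":
    model = TabularMambaSketch().cuda()
    logits = model(torch.randn(8, 12, device="cuda"))
    print(logits.shape)  # torch.Size([8, 2])
```

Note that in this sketch the parameter count is independent of the number of feature "tokens" scanned by the SSM, which is consistent with the linear-scalability claim above; the exact embedding and pooling strategy used by MambaTab may differ.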