Robust Beamforming with Gradient-based Liquid Neural Network
Xinquan Wang, Fenghao Zhu, Chongwen Huang, Ahmed Alhammadi, Faouzi Bader, Zhaoyang Zhang, Chau Yuen, Fellow, IEEE, and Mérouane Debbah, Fellow, IEEE
This paper proposes a robust gradient-based liquid neural network (GLNN) framework for beamforming in millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) communication systems. The GLNN integrates manifold learning and liquid neural networks (LNNs) with gradient-based learning to address the challenges of high complexity and robustness in dynamic mmWave channels. The framework uses ordinary differential equations (ODEs) to model liquid neurons, enabling efficient processing of noisy and dynamic data. The GLNN extracts high-order channel features using gradients of the optimization objective function and employs a residual connection to reduce training burden. Manifold learning compresses the search space, allowing the GLNN to maintain low complexity while ensuring robustness. Simulation results show that the GLNN achieves 4.15% higher spectral efficiency than typical iterative algorithms and reduces time consumption to 1.61% of conventional methods. The GLNN outperforms baselines in terms of robustness to channel estimation errors and adapts quickly to dynamic environments. The framework is efficient, with computational complexity that scales linearly with the number of transmit antennas. The GLNN demonstrates strong performance in both static and dynamic scenarios, making it a promising solution for future mmWave MIMO communication systems.
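The abstract describes liquid neurons whose hidden state evolves according to an ODE, which is what lets the network adapt to noisy, time-varying channel inputs. The paper does not give the exact GLNN update, so the following is only a minimal, generic sketch of a liquid-time-constant neuron layer integrated with an explicit Euler step; all dimensions, weights, and the toy input signal are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 liquid neurons driven by a 2-dim input feature.
n_hidden, n_in = 4, 2
W = 0.1 * rng.standard_normal((n_hidden, n_hidden))  # recurrent weights
U = 0.1 * rng.standard_normal((n_hidden, n_in))      # input weights
b = np.zeros(n_hidden)                               # bias
tau = np.full(n_hidden, 0.5)                         # per-neuron time constants

def liquid_step(x, u, dt=0.01):
    """One explicit-Euler step of the liquid-neuron ODE
       dx/dt = -x / tau + tanh(W @ x + U @ u + b)."""
    dxdt = -x / tau + np.tanh(W @ x + U @ u + b)
    return x + dt * dxdt

# Drive the layer with a toy time-varying input and unroll the ODE.
x = np.zeros(n_hidden)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    x = liquid_step(x, u)
```

The leak term `-x / tau` gives each neuron its own time constant, so the state continuously relaxes toward an input-dependent equilibrium rather than updating in discrete jumps; in the GLNN setting, gradients of the spectral-efficiency objective would be fed in as part of the input features.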