The paper introduces UltraLight VM-UNet, a lightweight model for skin lesion segmentation based on the Vision Mamba architecture. The authors analyze the key factors driving Mamba's parameter count and propose the Parallel Vision Mamba Layer (PVM Layer), which processes deep features in parallel, significantly reducing parameters while maintaining computational efficiency. UltraLight VM-UNet has only 0.049M parameters and 0.060 GFLOPs, yet it outperforms existing lightweight models on three public datasets (ISIC2017, ISIC2018, and PH²). The PVM Layer is shown to reduce parameters by up to 99.82% compared to traditional models, demonstrating its effectiveness in balancing performance and computational complexity. The study also examines the impact of different channel counts and parallel-connection configurations, further validating the proposed approach. The results suggest that Mamba could become a mainstream module for lightweight modeling in the future. A minimal code sketch of the parallel channel-splitting idea follows.
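The sketch below illustrates the general idea behind a parallel channel-splitting layer of this kind: the feature map is flattened into a token sequence, split into several channel groups, each group is run through a small sequence block, and the results are concatenated with a residual connection. The class name `ParallelChannelMamba`, the number of splits, the normalization placement, and the placeholder `nn.Linear` stand-in for an actual Mamba/SSM block are all assumptions for illustration, not the authors' implementation; in practice, `block_factory` would construct a real Mamba module (e.g. from the `mamba_ssm` package) over each channel group.

```python
import torch
import torch.nn as nn


class ParallelChannelMamba(nn.Module):
    """Hypothetical sketch of a parallel, channel-split Vision-Mamba-style layer.

    The input feature map (B, C, H, W) is flattened to a token sequence,
    split into `num_splits` channel groups, each group is processed by a
    small sequence block (a stand-in for Mamba), and the outputs are
    concatenated and combined with a residual connection.
    """

    def __init__(self, channels: int, num_splits: int = 4, block_factory=None):
        super().__init__()
        assert channels % num_splits == 0, "channels must divide evenly"
        self.num_splits = num_splits
        group_dim = channels // num_splits
        # block_factory(group_dim) should return a module mapping
        # (B, L, group_dim) -> (B, L, group_dim); a real Mamba block
        # would be used here. nn.Linear is only a runnable placeholder.
        if block_factory is None:
            block_factory = lambda d: nn.Linear(d, d)
        self.blocks = nn.ModuleList([block_factory(group_dim) for _ in range(num_splits)])
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        normed = self.norm(tokens)
        groups = torch.chunk(normed, self.num_splits, dim=-1)
        # Each channel group is processed independently; parameters scale
        # with the much smaller per-group dimension instead of full C.
        out = torch.cat([blk(g) for blk, g in zip(self.blocks, groups)], dim=-1)
        out = tokens + out                             # residual connection
        return out.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    layer = ParallelChannelMamba(channels=64, num_splits=4)
    y = layer(torch.randn(2, 64, 16, 16))
    print(y.shape)  # torch.Size([2, 64, 16, 16])
```

The parameter saving comes from the split itself: a block whose parameter count grows faster than linearly in its channel width is much cheaper when applied four times to C/4 channels than once to all C channels, which is the intuition the summary attributes to the PVM Layer.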