The paper introduces M3Net, a novel framework designed to achieve universal LiDAR segmentation using a single set of parameters across multiple tasks, datasets, and modalities. The authors address the challenges of heterogeneous data by performing alignments in three spaces: data, feature, and label. These alignments enable the model to leverage the complementary strengths of different datasets and modalities, improving its generalization and robustness. Extensive experiments on twelve LiDAR segmentation datasets, including SemanticKITTI, nuScenes, and Waymo Open, demonstrate M3Net's effectiveness: it achieves state-of-the-art mIoU scores of 75.1%, 83.1%, and 72.4% on these three benchmarks, respectively. The framework also shows strong performance in knowledge transfer and out-of-distribution generalization, making it a promising solution for safe autonomous driving perception.
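To make the label-space alignment idea concrete, below is a minimal sketch of how per-dataset class taxonomies could be remapped into one unified label space so that a single segmentation head can be supervised on heterogeneous datasets. The class lists and ID mappings here are hypothetical placeholders for illustration only; they are not the paper's actual taxonomy or implementation.

```python
import numpy as np

# Hypothetical unified taxonomy. The real class lists of SemanticKITTI,
# nuScenes, and Waymo Open differ; these names are illustrative.
UNIFIED_CLASSES = ["car", "truck", "pedestrian", "vegetation", "road", "ignore"]

# Hypothetical per-dataset mappings from local class ID to a unified name.
DATASET_LABEL_MAPS = {
    "semantickitti": {0: "car", 1: "truck", 2: "pedestrian", 3: "road"},
    "nuscenes": {0: "car", 1: "pedestrian", 2: "vegetation", 3: "road"},
}


def build_lookup(dataset: str) -> np.ndarray:
    """Build a dense lookup table from local class IDs to unified IDs.

    Local IDs without a unified counterpart fall back to 'ignore' so they
    can be excluded from the segmentation loss.
    """
    local_map = DATASET_LABEL_MAPS[dataset]
    ignore_id = UNIFIED_CLASSES.index("ignore")
    table = np.full(max(local_map) + 1, ignore_id, dtype=np.int64)
    for local_id, name in local_map.items():
        table[local_id] = UNIFIED_CLASSES.index(name)
    return table


def align_labels(dataset: str, local_labels: np.ndarray) -> np.ndarray:
    """Remap a per-point label array into the unified label space."""
    return build_lookup(dataset)[local_labels]


if __name__ == "__main__":
    # Per-point labels from two datasets land in one shared space,
    # so one model with a single set of parameters can train on both.
    print(align_labels("semantickitti", np.array([0, 2, 3, 1])))  # [0 2 4 1]
    print(align_labels("nuscenes", np.array([1, 3, 2])))          # [2 4 3]
```

In practice, the paper's data- and feature-space alignments would operate alongside such a label remapping; this sketch only illustrates the label-space portion of the idea.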