BrainSegFounder: Towards Foundation Models for Neuroimage Segmentation

12 Aug 2024 | Joseph Cox, Peng Liu, Skylar E. Stolte, Yunchao Yang, Kang Liu, Kyle B. See, Huiwen Ju, and Ruogu Fang
The BrainSegFounder project introduces a novel approach to creating 3-dimensional (3D) medical foundation models for multimodal neuroimage segmentation through self-supervised training. The approach involves a two-stage pretraining process using vision transformers, which first encodes anatomical structures in generally healthy brains from a large-scale unlabeled neuroimage dataset and then identifies disease-specific attributes. This dual-phase methodology significantly reduces the extensive data requirements typically needed for AI model training in neuroimage segmentation, making the model adaptable to various imaging modalities. The model, BrainSegFounder, is evaluated using the Brain Tumor Segmentation (BraTS) challenge and Anatomical Tracings of Lesions After Stroke v2.0 (ATLAS v2.0) datasets, demonstrating significant performance gains over previous fully supervised learning methods. The study highlights the importance of scaling up both model complexity and the volume of unlabeled training data from healthy brains, enhancing the accuracy and predictive capabilities of the model in neuroimage segmentation tasks. The pretrained models and code are available at <https://github.com/lab-smile/BrainSegFounder>.
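The two-stage pretraining described above can be sketched as follows. This is a minimal, hedged illustration, not the authors' implementation: a toy 3D convolutional encoder stands in for the vision-transformer backbone, random tensors stand in for the healthy-cohort and disease-cohort MRI volumes, and masked-voxel reconstruction stands in for the paper's self-supervised objectives. All class names, shapes, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-in for the 3D vision-transformer encoder used in the paper.
class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

# Reconstruction head used only during self-supervised pretraining.
class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 2, stride=2),
        )

    def forward(self, z):
        return self.net(z)

def pretrain(encoder, volumes, epochs=2):
    """Self-supervised pretraining via masked-voxel reconstruction
    (an assumed proxy for the paper's pretext tasks)."""
    decoder = Decoder()
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in volumes:
            mask = (torch.rand_like(x) > 0.25).float()  # mask ~25% of voxels
            recon = decoder(encoder(x * mask))
            loss = loss_fn(recon, x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder

# Stage 1: pretrain on a large unlabeled, generally healthy cohort
# (random tensors stand in for real MRI volumes).
healthy = [torch.randn(1, 1, 16, 16, 16) for _ in range(4)]
encoder = pretrain(TinyEncoder(), healthy)

# Stage 2: continue self-supervised pretraining on the downstream,
# disease-specific cohort (e.g., tumor or stroke-lesion scans).
disease = [torch.randn(1, 1, 16, 16, 16) for _ in range(2)]
encoder = pretrain(encoder, disease)

# Supervised fine-tuning with a segmentation head would follow; the
# pretrained encoder now produces downsampled feature volumes.
z = encoder(torch.randn(1, 1, 16, 16, 16))
print(tuple(z.shape))  # → (1, 16, 4, 4, 4)
```

The key design point mirrored here is that the same self-supervised objective is applied twice, first to abundant healthy-brain data and then to the smaller disease-specific dataset, so the labeled fine-tuning stage starts from features already adapted to the target domain.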