Exploring Low-Resource Medical Image Classification with Weakly Supervised Prompt Learning

February 7, 2024 | Fudan Zheng, Jindong Cao, Weijiang Yu, Zhiguang Chen, Nong Xiao, Yutong Lu
This paper proposes MedPrompt, a weakly supervised prompt learning framework for medical image classification in low-resource scenarios. The framework combines an unsupervised pre-trained vision-language model with a weakly supervised prompt learning model. The vision-language model is pre-trained on large-scale medical images and texts without manual annotation, while the prompt learning model automatically generates medical prompts from class labels alone. The generated prompts enable the pre-trained model to perform zero-shot and few-shot classification without manual annotation or hand-crafted prompt design.

The method is evaluated on four benchmark datasets: CheXpert, MIMIC-CXR, COVID, and RSNA. In full-shot learning, the automatically generated prompts outperform hand-crafted prompts on all four datasets; in zero-shot classification, the model achieves superior accuracy on three datasets and comparable accuracy on the fourth. These results indicate that the generated prompts substantially improve the model's generalization in zero-shot and few-shot settings.

The prompt generator is lightweight and can be embedded into a variety of network architectures, from large-scale to mobile networks. According to the authors, this is the first method to generate medical prompts automatically. By removing the need for domain experts and manual annotation, the framework enables end-to-end, low-cost medical image classification in settings where labeling is expensive and time-consuming, and its ability to generate high-quality prompts without manual intervention is its central contribution.
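The zero-shot classification step described above follows the usual vision-language recipe: each class prompt is encoded into a text embedding, the image is encoded into an image embedding, and the class whose prompt embedding is most similar to the image embedding wins. The sketch below illustrates that scoring step only; the random vectors stand in for real encoder outputs, and the class names, dimension, and temperature are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding dimension; CLIP-style encoders commonly use 512.
DIM = 64
CLASSES = ["pneumonia", "cardiomegaly", "no finding"]

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so that dot products
    become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Stand-ins for encoder outputs: one text embedding per generated class
# prompt, and one image embedding from the vision encoder.
text_embeds = l2_normalize(rng.normal(size=(len(CLASSES), DIM)))
image_embed = l2_normalize(rng.normal(size=DIM))

def zero_shot_classify(image_embed, text_embeds, temperature=0.07):
    """Score each class by cosine similarity to the image, then softmax."""
    logits = text_embeds @ image_embed / temperature
    probs = np.exp(logits - logits.max())  # shift for numerical stability
    return probs / probs.sum()

probs = zero_shot_classify(image_embed, text_embeds)
pred = CLASSES[int(np.argmax(probs))]
```

Because nothing here is trained on the downstream classes, this is exactly the zero-shot regime: classification quality depends entirely on how well the generated prompts place each class in the shared embedding space.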