Towards Generalizable Tumor Synthesis

28 Mar 2024 | Qi Chen, Xiaoxi Chen, Haorui Song, Zhiwei Xiong, Alan Yuille, Chen Wei, Zongwei Zhou
This paper presents DiffTumor, a framework for generalizable tumor synthesis in medical imaging. The key observation is that early-stage tumors (< 2 cm) exhibit similar imaging characteristics across different organs, such as the liver, pancreas, and kidneys. This similarity allows generative AI models, such as Diffusion Models, to create realistic tumors that generalize to multiple organs even when trained on limited examples from a single organ.

The framework consists of three stages: (1) training an autoencoder to learn compressed latent features from CT volumes; (2) training a diffusion model, conditioned on these latent features and tumor masks, to generate synthetic tumors; and (3) training a segmentation model on the synthetic tumors. DiffTumor can generate a wide variety of synthetic tumors with different locations, sizes, shapes, textures, and intensities, which can be used to train AI models for tumor detection and segmentation. Because only one annotated tumor is required for training, the framework also reduces the need for annotated data.

DiffTumor has been validated through various experiments, including visual Turing tests and generalization tests across different organs and patient demographics. The results show that it generates visually realistic tumors and significantly improves the performance of AI models in detecting and segmenting early-stage tumors. The framework is also efficient, generating synthetic tumors in real time.
The key contributions of this paper are verifying that early-stage tumors appear similar across different organs and developing the DiffTumor framework for generalizable tumor synthesis.
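To make the three-stage pipeline described above concrete, the sketch below gives a minimal, illustrative PyTorch version. The module names, network architectures, shapes, and noise schedule (`Autoencoder`, `LatentDenoiser`, `diffusion_training_step`) are assumptions for illustration, not the authors' implementation; the point is to show how compressed CT latents and a tumor mask condition a diffusion-style denoiser, whose decoded outputs would then supply synthetic tumors for training a downstream segmentation model.

```python
# Minimal sketch of the three-stage DiffTumor pipeline described above.
# All module definitions, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

# Stage 1: autoencoder that compresses a CT sub-volume into latent features.
class Autoencoder(nn.Module):
    def __init__(self, latent_channels=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, latent_channels, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(latent_channels, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, ct):
        z = self.encoder(ct)
        return self.decoder(z), z

# Stage 2: diffusion model that denoises latents conditioned on a tumor mask.
class LatentDenoiser(nn.Module):
    def __init__(self, latent_channels=4):
        super().__init__()
        # Latent features and a (downsampled) tumor mask are concatenated
        # along the channel axis as the conditioning signal.
        self.net = nn.Sequential(
            nn.Conv3d(latent_channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, latent_channels, 3, padding=1),
        )

    def forward(self, noisy_z, mask, t):
        # The diffusion timestep `t` is omitted from this toy network for brevity.
        return self.net(torch.cat([noisy_z, mask], dim=1))

def diffusion_training_step(denoiser, z, mask, num_steps=1000):
    """One DDPM-style step: predict the noise that was added to the clean latent z."""
    t = torch.randint(0, num_steps, (z.shape[0],))
    noise = torch.randn_like(z)
    alpha_bar = torch.cos(t.float() / num_steps * torch.pi / 2) ** 2  # toy schedule
    alpha_bar = alpha_bar.view(-1, 1, 1, 1, 1)
    noisy_z = alpha_bar.sqrt() * z + (1 - alpha_bar).sqrt() * noise
    pred_noise = denoiser(noisy_z, mask, t)
    return nn.functional.mse_loss(pred_noise, noise)

# Stage 3 (not shown): a segmentation model is trained on CT volumes into which
# decoded synthetic tumors have been placed.
if __name__ == "__main__":
    ct = torch.randn(2, 1, 32, 64, 64)          # batch of CT sub-volumes
    mask = torch.rand(2, 1, 8, 16, 16).round()  # tumor masks at latent resolution
    autoencoder = Autoencoder()
    recon, z = autoencoder(ct)
    recon_loss = nn.functional.mse_loss(recon, ct)                       # Stage 1 objective
    diff_loss = diffusion_training_step(LatentDenoiser(), z.detach(), mask)  # Stage 2 objective
    print(recon_loss.item(), diff_loss.item())
```

In this sketch the diffusion model operates on the autoencoder's compressed latents rather than full-resolution CT volumes, which is what makes real-time synthesis of diverse tumors (varying mask location, size, and shape) computationally practical.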