Deep learning-based virtual staining, segmentation, and classification in label-free photoacoustic histology of human specimens

2024 | Chiho Yoon, Eunwoo Park, Sampa Misra, Jin Young Kim, Jin Woo Baik, Kwang Gi Kim, Chan Kwon Jung, Chulhong Kim
This study presents a deep learning (DL)-based framework for automated histological image analysis in label-free photoacoustic histology (PAH) of human specimens. The framework consists of three main components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&E (VHE) staining, (2) a U-net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. The E-CUT method preserves the morphology of cell nuclei and cytoplasm, making VHE images highly similar to real H&E images. The U-net architecture successfully segments features such as cell area, cell count, and intercellular distance. The StepFF method combines deep feature vectors from PAH, VHE, and segmented images to achieve 98.00% classification accuracy, compared to 94.80% for conventional PAH classification. The framework demonstrates promising performance in classifying human liver cancers, reaching 100% sensitivity in an evaluation by three pathologists. This DL-based approach has significant potential as a practical clinical strategy for digital pathology.
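The fusion step described in the abstract can be sketched in outline: deep feature vectors extracted from the PAH, VHE, and segmented images are combined into a single representation before the final classifier. The snippet below is a minimal illustrative sketch only, assuming concatenation-based fusion and illustrative vector sizes; the function name `fuse_features` and all dimensions are assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_features(f_pah, f_vhe, f_seg):
    """Fuse per-modality deep feature vectors into one vector (sketch:
    simple concatenation; the paper's StepFF details may differ)."""
    return np.concatenate([f_pah, f_vhe, f_seg], axis=-1)

rng = np.random.default_rng(0)
f_pah = rng.standard_normal(128)  # stand-in features from the PAH image
f_vhe = rng.standard_normal(128)  # stand-in features from the virtually stained image
f_seg = rng.standard_normal(64)   # stand-in features from segmentation maps

fused = fuse_features(f_pah, f_vhe, f_seg)
print(fused.shape)  # (320,)
```

The fused vector would then feed a downstream classifier (e.g., a small fully connected network) that outputs the cancer/normal prediction.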