This study presents a deep learning approach for automatically segmenting the mandible and identifying 3D anatomical landmarks from cone-beam computed tomography (CBCT) images in order to determine the most accurate mandibular median sagittal plane (MMSP). The research involved 400 participants for model development (360 in the training group and 40 in the validation group), plus 50 participants in the test group. The PointRend algorithm was used to segment the mandible, and PoseNet was used to identify 27 anatomical landmarks. For the test group, the 3D coordinates of 5 central landmarks and 2 pairs of side landmarks (whose midpoints serve as additional midline points) were obtained, giving 7 midline points in total. Using the template mapping technique, all C(7,3) = 35 combinations of 3 midline points were screened, and the asymmetry index (AI) was calculated for each of the 35 corresponding mirror planes. With the plane produced by the template mapping technique as the reference, the four planes with the smallest AIs were compared in terms of distance, volume difference, and similarity index to identify the plane with the fewest errors.
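The abstract does not spell out how the candidate planes are enumerated or how the asymmetry index is computed; the following minimal Python sketch illustrates one plausible reading: form each plane from a triple of midline points (C(7,3) = 35 triples), mirror the segmented mandible surface across it, and score asymmetry as the mean nearest-neighbor distance between the original and mirrored point clouds. The function names, the point-cloud inputs, and this particular asymmetry measure are illustrative assumptions, not the study's exact template-mapping-based method.

```python
# Hypothetical sketch of candidate-plane screening; the study's exact
# asymmetry index (AI) definition may differ from this one.
from itertools import combinations

import numpy as np
from scipy.spatial import cKDTree


def plane_from_points(p1, p2, p3):
    """Return (unit normal, point on plane) for the plane through three landmarks."""
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        raise ValueError("landmarks are collinear; no unique plane")
    return n / norm, p1


def mirror_points(points, normal, origin):
    """Reflect an (N, 3) point cloud across the plane defined by (normal, origin)."""
    signed_dist = (points - origin) @ normal
    return points - 2.0 * np.outer(signed_dist, normal)


def asymmetry_index(surface, normal, origin):
    """Mean nearest-neighbor distance (mm) between the surface and its mirror image."""
    mirrored = mirror_points(surface, normal, origin)
    dists, _ = cKDTree(surface).query(mirrored)
    return float(dists.mean())


def screen_candidate_planes(midline_points, surface):
    """Score every plane defined by 3 of the 7 midline points: C(7,3) = 35 candidates."""
    scores = {}
    for triple in combinations(range(len(midline_points)), 3):
        normal, origin = plane_from_points(*(midline_points[i] for i in triple))
        scores[triple] = asymmetry_index(surface, normal, origin)
    # Smallest AI first; the top four would then be compared against the reference plane.
    return sorted(scores.items(), key=lambda kv: kv[1])
```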
The mandible was segmented automatically in 10 ± 1.5 seconds with a Dice similarity coefficient of 0.98, and the mean localization error across the 27 landmarks was 1.04 ± 0.28 mm. The B-Gn-F plane was identified as the most accurate MMSP, with an average AI of 1.6; it showed the smallest error among the four candidate planes, and the similarity indexes of the four planes differed significantly (P < 0.01). The study concluded that deep learning can automatically segment the mandible, identify anatomical landmarks, and meet clinical needs in people without mandibular deformities, and it discussed the limitations of previous methods alongside the advantages of automated segmentation and landmark identification. Overall, the results indicate that the B-Gn-F plane is the most accurate MMSP for clinical application.
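The reported evaluation metrics are standard ones; a short sketch of how the Dice similarity coefficient and the mean landmark localization error could be computed from model outputs is given below. The array shapes and variable names are assumptions for illustration, not the study's actual evaluation code.

```python
# Hypothetical evaluation helpers for the reported metrics; input formats are assumed.
import numpy as np


def dice_similarity(pred_mask, gt_mask):
    """Dice similarity coefficient between two binary segmentation volumes."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())


def landmark_localization_error(pred_landmarks, gt_landmarks):
    """Mean and SD of Euclidean distances (mm) between predicted and reference landmarks.

    Both inputs are (27, 3) arrays of landmark coordinates in millimeters.
    """
    errors = np.linalg.norm(pred_landmarks - gt_landmarks, axis=1)
    return errors.mean(), errors.std()
```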