A Deep-Learning-Based Model for the Detection of Diseased Tomato Leaves

2024 | Akram Abdullah, Gehad Abdullah Amran, S. M. Ahanaf Tahmid, Amerah Alabrah, Ali A. AL-Bakhrahi, Abdulaziz Ali
This study introduces a You Only Look Once (YOLO) model, specifically YOLOV8s, for detecting diseases in tomato leaves. The model was trained on the Plant Village dataset, which includes both healthy and diseased tomato leaf images. The images were enhanced and processed using the Ultralytics Hub, a platform that provides an optimal environment for training YOLOV8 and YOLOV5 models. The dataset YAML file was customized to identify diseased leaves, and the model was trained to achieve high accuracy and efficiency. The results demonstrate that YOLOV8s outperforms both YOLOV5 and Faster R-CNN in terms of mean average precision (mAP), precision, and recall. YOLOV8s achieved a mAP of 92.5%, compared to 89.1% for YOLOV5 and 77.5% for Faster R-CNN. Additionally, YOLOV8s exhibited a significantly higher frame rate of 141.5 FPS, making it capable of real-time detection. The study also highlights the benefits of YOLOV8s, including its lightweight architecture and rapid inference speed, which contribute to its strong performance across detection tasks. The research adds to the growing body of evidence confirming the efficacy of YOLOV8s in demanding detection tasks and suggests its potential for future applications in agriculture and other fields requiring real-time object detection.
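
The abstract describes customizing a dataset YAML file and training YOLOV8s through the Ultralytics Hub. For readers who prefer a scripted workflow, the sketch below shows roughly how the same steps look with the Ultralytics Python package; the file paths, class names, epoch count, and image size are illustrative assumptions, not values reported in the study.

    # Minimal sketch of training YOLOv8s on a tomato-leaf dataset with the
    # Ultralytics package (assumed workflow; not the authors' exact setup).
    #
    # tomato.yaml -- hypothetical dataset file in Ultralytics format:
    #   path: datasets/plantvillage_tomato   # dataset root (illustrative)
    #   train: images/train                  # training images
    #   val: images/val                      # validation images
    #   names:
    #     0: healthy
    #     1: early_blight                    # example disease classes only
    #     2: late_blight

    from ultralytics import YOLO

    # Load the small YOLOv8 variant with pretrained weights.
    model = YOLO("yolov8s.pt")

    # Train on the custom dataset described by the YAML file.
    model.train(data="tomato.yaml", epochs=100, imgsz=640)

    # Evaluate on the validation split; box metrics include mAP, precision, recall.
    metrics = model.val()
    print(metrics.box.map, metrics.box.mp, metrics.box.mr)

    # Run inference on a single leaf image and save the annotated output.
    model.predict("leaf.jpg", save=True)

The same pretrained checkpoint ("yolov5s.pt" in place of "yolov8s.pt") could be substituted to reproduce the YOLOV5 comparison under identical training settings.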