Intrarow Uncut Weed Detection Using You-Only-Look-Once Instance Segmentation for Orchard Plantations

2024 | Rizky Mulya Sampurno, Zifu Liu, R. M. Rasika D. Abeysathna, Tofael Ahamed
This study presents a method for detecting uncut weeds and obstacles in orchard plantations using YOLO instance segmentation for autonomous robotic weeding. The research aims to develop a vision module that enables autonomous robotic weeders to recognize uncut weeds and obstacles such as tree trunks and poles within rows. The training dataset was collected from a pear orchard at the Tsukuba Plant Innovation Research Center (T-PIRC) in Japan and consists of 5000 images that were preprocessed and labeled for training and testing. Four YOLO instance segmentation models—YOLOv5n-seg, YOLOv5s-seg, YOLOv8n-seg, and YOLOv8s-seg—were evaluated for real-time application with an autonomous weeder, and a comparison study assessed their detection accuracy, model complexity, and inference speed. The smaller YOLOv5-based and YOLOv8-based models proved more efficient than the larger models, and YOLOv8n-seg was selected as the vision module for the autonomous weeder: it demonstrated better segmentation accuracy than YOLOv5n-seg, while the latter had the fastest inference time. The performance of YOLOv8n-seg also remained acceptable when deployed on a resource-constrained device suitable for robotic weeders. The results indicate that the detection accuracy and inference speed of the proposed deep learning models are sufficient for object recognition on edge devices during intrarow weeding operations in orchards. The study highlights the potential of YOLO instance segmentation for real-time object detection and segmentation in orchard environments, particularly for identifying uncut weeds and obstacles, and contributes to the development of autonomous robotic systems for efficient and precise weed control in orchards.
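The vision module described above outputs per-instance segmentation masks for weeds and obstacles. As a purely illustrative sketch (not code from the study, and with hypothetical function names and an assumed coverage threshold), such masks could gate the weeding action roughly as follows:

```python
# Hypothetical post-processing sketch: decide whether to trigger the cutter
# from binary segmentation masks (lists of 0/1 rows). The 5% coverage
# threshold and all names are illustrative assumptions, not study values.

def coverage(mask):
    """Fraction of pixels labeled positive in a binary mask."""
    total = sum(len(row) for row in mask)
    hit = sum(sum(row) for row in mask)
    return hit / total if total else 0.0

def should_cut(weed_mask, obstacle_mask, min_weed=0.05):
    """Cut only if enough uncut weed is visible and no obstacle pixel
    overlaps the weed region (e.g., a trunk or pole in front of it)."""
    overlap = any(
        w and o
        for weed_row, obs_row in zip(weed_mask, obstacle_mask)
        for w, o in zip(weed_row, obs_row)
    )
    return coverage(weed_mask) >= min_weed and not overlap
```

In a real deployment the masks would come from the YOLOv8n-seg model's predictions on the edge device; this sketch only shows the shape of the downstream decision logic.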