Intrarow Uncut Weed Detection Using You-Only-Look-Once Instance Segmentation for Orchard Plantations

30 January 2024 | Rizky Mulya Sampurno, Zifu Liu, R. M. Rasika D. Abeyrathna, Tofael Ahamed
This study addresses the challenge of intrarow weed detection in orchards using a custom-trained dataset and YOLO instance segmentation algorithms. The primary objective is to develop a vision module that supports autonomous robotic weeders in recognizing uncut weeds and obstacles (e.g., fruit tree trunks, fixed poles) within rows. The dataset was acquired from a pear orchard at the Tsukuba Plant Innovation Research Center (T-PIRC) in Japan and consists of 5000 images that were preprocessed and labeled for training and testing with YOLO models. Four edge-device-oriented YOLO instance segmentation models—YOLOv5n-seg, YOLOv5s-seg, YOLOv8n-seg, and YOLOv8s-seg—were evaluated for real-time application on an autonomous weeder. The smaller YOLOv5- and YOLOv8-based models proved more efficient than the larger ones, and YOLOv8n-seg was selected as the most suitable vision module: it demonstrated better segmentation accuracy than YOLOv5n-seg while maintaining fast inference times. The detection accuracy and inference speed of the proposed deep learning approach make it well suited for object recognition on edge devices during robotic intrarow weeding operations in orchards.
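The paper itself does not publish code, but the workflow it describes—running a trained YOLOv8n-seg checkpoint for instance segmentation on orchard images—can be sketched with the Ultralytics API. The weights path, image path, class names, and the imgsz/conf values below are illustrative assumptions, not the study's actual files or thresholds.

```python
# Minimal sketch: instance-segmentation inference with a YOLOv8n-seg
# model via the Ultralytics API, the model family the paper evaluates.
from ultralytics import YOLO

# Hypothetical custom-trained weights file; "yolov8n-seg.pt" would
# instead load the generic COCO-pretrained segmentation model.
model = YOLO("weights/orchard_yolov8n_seg.pt")

# Run inference on an orchard image; imgsz and conf are assumed
# values, not the settings reported in the paper.
results = model.predict(
    source="images/intrarow_sample.jpg",
    imgsz=640,
    conf=0.25,
)

for result in results:
    # result.boxes holds detections; result.masks holds instance masks
    # (None if nothing was segmented in the image).
    for box in result.boxes:
        cls_id = int(box.cls)
        print(f"class={model.names[cls_id]} conf={float(box.conf):.2f}")
    if result.masks is not None:
        print(f"{len(result.masks)} instance masks returned")
```

On an edge device such as a Jetson-class board, a checkpoint like this would typically be exported to an optimized format (e.g., TensorRT) before deployment, though the paper's exact deployment pipeline is not detailed in the abstract.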