04 January 2024 | Chenghao Lu, Emmanuel Nnadozie, Moritz Paul Camenzind, Yuncai Hu and Kang Yu
This study explores UAV-based RGB imaging combined with the YOLOv5 deep learning model for maize plant detection. The goal is a lightweight, fast, and precise model for automatic plant counting in maize fields that copes with dense weeds and leaf occlusion. Field experiments were conducted at the Dürrnast Research Station in Germany, capturing UAV images of maize plants at the 3-leaf and 7-leaf stages. Images were annotated with the Segment Anything Model (SAM) and manually adjusted to improve accuracy, and the YOLOv5 model was trained on these datasets with a focus on performance under realistic field conditions. Key findings include:
1. **Model Performance**: The YOLOv5-based model achieved high accuracy, with mAP@0.5 scores of 82.8% and 86.3% for the 3-leaf and 7-leaf stages, respectively (the IoU sketch after this list illustrates the matching criterion behind mAP@0.5).
2. **Data Augmentation**: Rotation-based data augmentation significantly improved the model's performance, increasing mAP@0.5 by 2.4% at the 3-leaf stage and 1.3% at the 7-leaf stage (see the rotation sketch after this list).
3. **Low-Noise Weights**: Weights trained on a low-noise dataset further improved accuracy, reaching an mAP@0.5 of 93.9% at the 3-leaf stage.
4. **Semi-Automatic Labeling**: The use of SAM for semi-automatic labeling reduced the time and cost of manual labeling while improving the quality of annotations (a labeling sketch follows this list).
5. **Future Work**: The study suggests future research directions, including increasing the number of training images, incorporating diverse field conditions, and refining the model structure to better fit maize plant detection.
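For context, mAP@0.5 counts a predicted box as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. Below is a minimal IoU helper for axis-aligned boxes; the example boxes and values are illustrative, not taken from the study.

```python
# mAP@0.5 scores a detection as correct when its IoU with a ground-truth
# box is at least 0.5. Minimal IoU for axis-aligned (x1, y1, x2, y2) boxes.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143 -> no match at 0.5
```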
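In YOLOv5 itself, rotation augmentation is controlled by the `degrees` training hyperparameter; the sketch below instead shows the underlying idea for lossless 90° rotations of nadir UAV tiles and how the normalized YOLO boxes remap. The filenames and the toy box are made up for illustration.

```python
# A sketch of rotation-based augmentation for YOLO-labeled tiles. 90-degree
# rotations are lossless for nadir UAV imagery and keep boxes axis-aligned.
import cv2

def rotate90_cw(image, yolo_boxes):
    """Rotate an image 90 deg clockwise and remap normalized YOLO boxes.

    yolo_boxes: list of (cls, xc, yc, w, h) with coordinates in [0, 1].
    """
    rotated = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
    new_boxes = []
    for cls, xc, yc, w, h in yolo_boxes:
        # Under a 90 deg CW rotation, (xc, yc) -> (1 - yc, xc); the
        # normalized width/height swap because the image dimensions swap.
        new_boxes.append((cls, 1.0 - yc, xc, h, w))
    return rotated, new_boxes

image = cv2.imread("maize_tile.jpg")          # placeholder tile
boxes = [(0, 0.50, 0.25, 0.10, 0.20)]         # one toy plant box
aug_image, aug_boxes = rotate90_cw(image, boxes)
cv2.imwrite("maize_tile_rot90.jpg", aug_image)
```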
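The semi-automatic labeling step can be approximated with SAM's automatic mask generator: generate masks, take each mask's bounding box, and write YOLO-format labels for later manual correction. This is a sketch of the general workflow rather than the authors' exact pipeline; the image path and checkpoint file name are placeholders.

```python
# Semi-automatic labeling sketch: SAM masks -> YOLO-format bounding boxes,
# to be hand-corrected afterwards. Paths are placeholders.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

image = cv2.cvtColor(cv2.imread("maize_tile.jpg"), cv2.COLOR_BGR2RGB)
h, w = image.shape[:2]

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(image)  # each dict has a "bbox" in XYWH pixels

with open("maize_tile.txt", "w") as f:
    for m in masks:
        x, y, bw, bh = m["bbox"]
        # YOLO format: class x_center y_center width height, all normalized
        f.write(f"0 {(x + bw / 2) / w:.6f} {(y + bh / 2) / h:.6f} "
                f"{bw / w:.6f} {bh / h:.6f}\n")
```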
The results demonstrate the potential of YOLOv5-based models for real-time plant monitoring in maize fields, particularly when deployed on UAVs and other IoT devices.
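As a rough illustration of the counting use case, a trained YOLOv5 weight file can be loaded through `torch.hub` and the number of detections taken as the plant count. The weight and image file names are placeholders, and the confidence threshold would need tuning for real field imagery.

```python
# Minimal inference sketch for plant counting with a trained YOLOv5 model,
# loaded through torch.hub as documented by Ultralytics.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="maize_best.pt")
model.conf = 0.25  # confidence threshold; tune for field conditions

results = model("maize_field_tile.jpg")
detections = results.xyxy[0]  # tensor rows: [x1, y1, x2, y2, conf, cls]
print(f"Estimated plant count: {len(detections)}")
```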