22 January 2024 | Baobao Liu, Heying Wang, Zifan Cao, Yu Wang, Lu Tao, Jingjing Yang, and Kaibing Zhang
PRC-Light YOLO is an efficient lightweight model designed for fabric defect detection. The model improves upon YOLOv7 by integrating new convolution operators into the Extended-Efficient Layer Aggregation Network (E-ELAN) to optimize feature extraction and reduce computational load. It enhances the feature fusion network by using the Receptive Field Block (RFB) as the feature pyramid and Content-Aware ReAssembly of FEatures (CARAFE) as the upsampling operator; adaptive convolution kernels are generated in real time to extend the receptive field and capture contextual information.

The HardSwish activation function is applied to reduce computational burden and memory access, while the Wise-IoU v3 bounding-box loss incorporates a dynamic non-monotonic focusing mechanism to mitigate the adverse gradients produced by low-quality instances. Data augmentation is used to improve the model's generalization ability.

Compared to YOLOv7, PRC-Light YOLO reduces model parameters by 18.03% and computational load by 20.53% while improving mAP by 7.6%. On a fabric defect dataset of 1061 images, experimental results show that PRC-Light YOLO outperforms the compared models in precision, recall, F1-score, and mAP.

The model is also integrated into a PyQt5-based fabric defect detection system that supports real-time detection and provides visual feedback on detected defects and their confidence levels. The model detects various fabric defects, including holes, stains, and yard defects, reaching an AP of 94.1% for holes. However, it remains limited on warp-hanged defects and requires further optimization to improve detection accuracy.
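To make the activation choice concrete, here is a minimal NumPy sketch of HardSwish, assuming its standard definition x · ReLU6(x + 3) / 6. Because it is piecewise linear, it avoids the exponential inside sigmoid-based Swish, which is the source of the compute and memory savings the summary mentions.

```python
import numpy as np

def hard_swish(x):
    """HardSwish: x * ReLU6(x + 3) / 6.

    A piecewise-linear approximation of Swish: identity for x >= 3,
    zero for x <= -3, and a smooth-looking blend in between.
    """
    return x * np.clip(x + 3.0, 0.0, 6.0) / 6.0

print(hard_swish(np.array([-4.0, -3.0, 0.0, 3.0, 4.0])))
# Passes large positives through unchanged and zeroes out values <= -3.
```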
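The Wise-IoU v3 loss can be sketched as follows. This is an illustrative reimplementation, not the paper's code: the hyperparameters `alpha` and `delta` are assumed defaults, and the running mean of the IoU loss (normally tracked with momentum during training) is passed in as a plain argument. A distance-based attention term scales the IoU loss, and a non-monotonic gain computed from the "outlier degree" beta down-weights gradients from both very easy and very low-quality boxes.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter)

def wiou_v3_loss(pred, gt, mean_l_iou, alpha=1.9, delta=3.0):
    """Sketch of the Wise-IoU v3 loss (hyperparameters assumed)."""
    l_iou = 1.0 - iou(pred, gt)
    # Centre distance, normalised by the smallest enclosing box size.
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gcx, gcy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    r_attn = np.exp(((pcx - gcx) ** 2 + (pcy - gcy) ** 2) / (wg ** 2 + hg ** 2))
    # Outlier degree: this box's loss relative to the running mean;
    # the non-monotonic gain peaks at moderate beta and decays for
    # outliers, muting gradients from low-quality instances.
    beta = l_iou / max(mean_l_iou, 1e-9)
    r_gain = beta / (delta * alpha ** (beta - delta))
    return r_gain * r_attn * l_iou
```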
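The summary does not list the specific augmentations used, but a generic geometric-augmentation sketch of the kind typically applied to defect images might look like this (flips and quarter-turn rotations, which preserve defect labels up to the same transform of their boxes; box handling is omitted for brevity):

```python
import numpy as np

def augment(image, rng):
    """Random horizontal/vertical flips plus a random 90-degree
    rotation. Pixel values are only rearranged, never changed, so
    defect appearance is preserved while pose varies."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    if rng.random() < 0.5:
        image = np.flipud(image)
    k = int(rng.integers(0, 4))   # 0-3 quarter turns
    return np.rot90(image, k)

rng = np.random.default_rng(0)
img = np.arange(12).reshape(3, 4)
out = augment(img, rng)           # same pixels, possibly new layout
```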
The study concludes that PRC-Light YOLO is an effective and efficient model for fabric defect detection, with significant improvements in detection accuracy and computational efficiency.