
無人飛行載具影像於十字花科大宗蔬菜種類及生育期判釋應用

The Use of Unmanned Aerial Vehicle Images in the Identification of Major Cruciferous Vegetable Crops and Their Growth Stages

Abstract


In this study, images of a cruciferous-vegetable field at the Taichung District Agricultural Research and Extension Station were collected repeatedly at regular intervals with an unmanned aerial vehicle (UAV), and the YOLOv4 deep-learning architecture was used to identify four vegetable crops and their growth stages simultaneously. The workflow consisted of field-image collection and stitching, orthomosaic preprocessing, object labeling, image-format conversion, deep-learning model training, and validation of the exported model. The final model achieved an overall accuracy of 64.64%, an error rate of 6.31%, and a miss (undetected) rate of 29.05%. Among the individual crops, Chinese cabbage was identified best at 75.86%, followed by cabbage (69.15%) and cauliflower (63.39%); broccoli reached only 50.16%, leaving room for improvement. Plants from the late rosette to the heading stage, 30-40 days after transplanting, were interpreted best: accuracies for broccoli, cauliflower, cabbage, and Chinese cabbage reached 91.45%, 93.37%, 88.55%, and 93.51%, respectively. In addition, when the number of training epochs increased from 50 to 70, the Generalized Intersection over Union (GIoU) loss of the training set dropped rapidly while that of the validation set remained unchanged, indicating overfitting. More labeled data, supplemented by image augmentation and an optimized architecture, are needed to raise recognition accuracy and training performance, after which the model can be applied to collaborative monitoring of vegetable production.
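The GIoU metric used as the loss function here extends plain IoU with a penalty based on the smallest box enclosing both the prediction and the ground truth, so non-overlapping boxes still receive a useful gradient. A minimal illustrative sketch, not the paper's implementation; boxes are assumed to be `(x1, y1, x2, y2)` corner tuples:

```python
def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest box enclosing both inputs.
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)

    # GIoU = IoU minus the fraction of the enclosing box not covered by the union.
    return iou - (c_area - union) / c_area
```

The training loss reported in the abstract is then `1 - giou(pred, truth)`, which is 0 for a perfect match and approaches 2 for distant boxes.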

Parallel Abstract (English)


The aim of this study was to recognize four cruciferous vegetables and their growth stages simultaneously through unmanned aerial vehicle (UAV) image collection and YOLOv4 deep-learning training. The overall procedure consisted, in order, of UAV image collection and mapping, orthophotomosaic preprocessing, object labeling, image-format conversion, deep-learning model training, and validation of the exported model. The final model achieved an overall accuracy of 64.64%, an error rate of 6.31%, and a miss rate of 29.05%. In single-crop identification, Chinese cabbage had the highest accuracy at 75.86%, followed by cabbage (69.15%) and cauliflower (63.39%); broccoli reached only 50.16%, indicating that the current model needs improvement. At 30-40 days after transplanting, during the late rosette to heading stage, the four cruciferous vegetables were identified best, with accuracies of 91.45%, 93.37%, 88.55%, and 93.51% for broccoli, cauliflower, cabbage, and Chinese cabbage, respectively. Furthermore, the Generalized Intersection over Union (GIoU) loss of the training set decreased sharply as the number of epochs increased from 50 to 70, while that of the validation set stayed unchanged, indicating overfitting. To avoid this and to enhance recognition accuracy as well as model reliability, more labeled data, image augmentation, and optimization of the training architecture are needed. The model is expected to be useful in monitoring vegetable production.
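The overfitting signal described above, training loss still falling while validation loss has plateaued, can be checked programmatically from the two loss curves. A hypothetical sketch; the function name, window, and tolerance are assumptions for illustration, not values from the study:

```python
def shows_overfitting(train_loss, val_loss, window=3, tol=1e-3):
    """Return True when, over the last `window` epochs, the training loss
    kept dropping but the validation loss failed to improve."""
    if len(train_loss) <= window or len(val_loss) <= window:
        return False  # not enough history to judge
    train_drop = train_loss[-window - 1] - train_loss[-1]
    val_drop = val_loss[-window - 1] - val_loss[-1]
    return train_drop > tol and val_drop <= tol
```

When such a check fires, common responses match the abstract's recommendations: stop training at that epoch, add labeled data, or apply image augmentation before continuing.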
