
Vision Detection Method for Picking Robots Based on Improved Faster R-CNN

Authors: LI Cuiming, YANG Ke, SHEN Tao, SHANG Zhengyu

Fund project: National Natural Science Foundation of China (52265065, 51765031)
Abstract:

To address the limited ability of picking robots to detect and localize fruit in scenes where targets are densely distributed and occlude one another, an improved Faster R-CNN detection and localization method was proposed that introduces an efficient channel attention (ECA) mechanism and a multi-scale feature fusion pyramid (FPN). Firstly, the original VGG16 backbone was replaced with a ResNet50 residual network fused with an FPN; its stronger representational capability eliminates the network degradation problem, extracts more abstract and richer semantic information, and improves the model's ability to detect multi-scale and small targets. Secondly, the ECA module was introduced so that the feature extraction network focuses on the locally informative parts of the feature map, reducing interference from invalid targets and improving detection accuracy. Finally, a branch-and-leaf grafting data augmentation method was used to expand the apple dataset and alleviate the shortage of image data. Based on the constructed dataset, a genetic algorithm was used to optimize K-means++ clustering and generate adaptive anchor boxes, improving the model's localization accuracy. Experimental results showed that the improved model achieved an average precision of 96.16% for graspable apples and 86.95% for apples that cannot be grasped directly, with a mean average precision of 92.79%, 15.68 percentage points higher than that of the traditional Faster R-CNN. Its localization accuracies for graspable and not directly graspable apples were 97.14% and 88.93%, which were 12.53 and 40.49 percentage points higher than those of the traditional Faster R-CNN, respectively. The memory footprint was reduced by 38.20% and the average computation time per frame was shortened by 40.7%. With fewer parameters and good real-time performance, the improved model is well suited to the vision system of a fruit picking robot.
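As a rough guide to the architecture change summarized above, the sketch below shows one way to attach an ECA channel-attention block to a ResNet50+FPN Faster R-CNN. It is a minimal PyTorch/torchvision sketch, not the authors' implementation: the 1-D convolution kernel size, the choice to apply ECA to every FPN output level, and the three-class setup (background, graspable apple, not directly graspable apple) are assumptions made here for illustration.

```python
# Minimal sketch (PyTorch and torchvision >= 0.13 assumed); not the authors' code.
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn


class ECA(nn.Module):
    """Efficient Channel Attention: channel weights from global average pooling
    followed by a 1-D convolution, with no channel-reduction bottleneck."""

    def __init__(self, k_size: int = 3):  # k_size = 3 is an assumption
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> channel descriptor (B, C, 1, 1)
        y = self.avg_pool(x)
        # 1-D convolution across the channel dimension: (B, C, 1, 1) -> (B, 1, C)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))
        # back to (B, C, 1, 1) and squash to [0, 1]
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y.expand_as(x)  # re-weight channels


class ECABackbone(nn.Module):
    """Apply ECA to every feature map produced by the ResNet50+FPN backbone.
    The exact insertion point used in the paper may differ; this is an assumption."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.eca = ECA(k_size=3)
        self.out_channels = backbone.out_channels  # required by the detection heads

    def forward(self, x):
        feats = self.backbone(x)  # dict of pyramid levels '0'..'3', 'pool'
        return {name: self.eca(f) for name, f in feats.items()}


# Three classes assumed: background, graspable apple, not directly graspable apple.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=3)
model.backbone = ECABackbone(model.backbone)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 640, 480)])  # list of dicts: boxes, labels, scores
```

Because ECA derives its channel weights from a single 1-D convolution without the channel-reduction bottleneck of SE-style blocks, its parameter and runtime overhead is negligible, which is consistent with the abstract's emphasis on a compact, real-time model.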

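The adaptive anchors mentioned in the abstract are derived from the labelled box sizes in the training set. Below is a minimal sketch of the K-means++ clustering step using scikit-learn on box widths and heights; the genetic-algorithm refinement described in the paper is not reproduced, and the cluster count, the sqrt-area scale convention, and the synthetic input are assumptions for illustration.

```python
# Minimal sketch: derive anchor scales/aspect ratios by K-means++ clustering of the
# labelled bounding boxes (numpy + scikit-learn assumed). The genetic-algorithm
# refinement described in the paper is omitted here.
import numpy as np
from sklearn.cluster import KMeans


def cluster_anchors(wh: np.ndarray, n_anchors: int = 9, seed: int = 0):
    """wh: (N, 2) array of box (width, height) in pixels from the training labels."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=seed)
    km.fit(wh)
    centers = km.cluster_centers_                   # (n_anchors, 2) prototype boxes
    sizes = np.sqrt(centers[:, 0] * centers[:, 1])  # anchor scale ~ sqrt(area)
    ratios = centers[:, 1] / centers[:, 0]          # height / width aspect ratio
    order = np.argsort(sizes)
    return sizes[order], ratios[order]


if __name__ == "__main__":
    # Toy example with synthetic box sizes; replace with the widths and heights
    # parsed from the apple dataset annotations.
    rng = np.random.default_rng(0)
    boxes_wh = rng.uniform(20, 200, size=(500, 2))
    sizes, ratios = cluster_anchors(boxes_wh)
    print("anchor sizes:", np.round(sizes, 1))
    print("aspect ratios:", np.round(ratios, 2))
```

In a torchvision pipeline, the resulting sizes and aspect ratios could then be passed to an AnchorGenerator in place of the default anchors.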
Cite this article:

李翠明,楊柯,申濤,尚拯宇.基于改進Faster R-CNN的蘋果采摘視覺定位與檢測方法[J].農業機械學報, 2024, 55(1): 47-54. LI Cuiming, YANG Ke, SHEN Tao, SHANG Zhengyu. Vision Detection Method for Picking Robots Based on Improved Faster R-CNN[J]. Transactions of the Chinese Society for Agricultural Machinery, 2024, 55(1): 47-54.

復(fù)制
分享
文章指標(biāo)
  • 點擊次數(shù):
  • 下載次數(shù):
  • HTML閱讀次數(shù):
  • 引用次數(shù):
History
  • Received: 2023-06-26
  • Published online: 2023-07-13