
基于改進(jìn)DeepLabv3+的火龍果園視覺(jué)導(dǎo)航路徑識(shí)別方法
CSTR:
作者:
作者單位:

作者簡(jiǎn)介:

通訊作者:

中圖分類(lèi)號(hào):

基金項(xiàng)目:

國(guó)家重點(diǎn)研發(fā)計(jì)劃項(xiàng)目(2017YFD0700602)



    Abstract:

    To address the many interference factors, complex image backgrounds, and difficulty of deploying complex models that visual navigation systems face in the dragon fruit orchard environment, this paper proposes a visual navigation path recognition method for dragon fruit orchards based on an improved DeepLabv3+ network. First, MobileNetV2 replaces Xception as the backbone feature extraction network of the traditional DeepLabv3+, and the atrous convolutions in the atrous spatial pyramid pooling (ASPP) module are replaced with depthwise separable convolutions (DSC), which raises the model's detection rate while greatly reducing its parameter count and memory footprint. Second, a coordinate attention (CA) mechanism is introduced at the feature extraction module to strengthen the model's feature extraction ability. Finally, a navigation path is fitted from the road mask regions segmented by the network using the designed navigation path extraction algorithm. Experimental results show that the improved DeepLabv3+ reaches a mean intersection over union (MIoU) of 95.80% and a mean pixel accuracy (MPA) of 97.86%, 0.79 and 0.41 percentage points higher than the original model, respectively. Meanwhile, the model's memory footprint is only 15.0 MB, 97.00% lower than the original model and 91.57% and 91.02% lower than the Pspnet and U-net models, respectively. In addition, navigation path recognition accuracy tests show an average pixel error of 22 pixels and an average distance error of 7.58 cm; given the orchard road width of 3 m, the average distance error accounts for 2.53%. The proposed method can therefore serve as an effective reference for visual navigation tasks in dragon fruit orchards.
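As a rough illustration of why substituting depthwise separable convolutions for the standard atrous convolutions shrinks the network, the weight counts of the two layer types can be compared directly. The layer shape below (256 in/out channels, 3×3 kernel, a common ASPP branch size) is an assumption for illustration, not a figure from the paper:

```python
def conv_params(c_in, c_out, k):
    # standard convolution: one k x k kernel per (input, output) channel pair
    return c_in * c_out * k * k

def dsc_params(c_in, c_out, k):
    # depthwise separable convolution: one k x k depthwise kernel per input
    # channel, followed by a 1 x 1 pointwise convolution mixing the channels
    return c_in * k * k + c_in * c_out

# illustrative ASPP-like branch: 256 -> 256 channels, 3 x 3 kernel
std = conv_params(256, 256, 3)   # 589,824 weights
dsc = dsc_params(256, 256, 3)    # 67,840 weights
print(std, dsc, round(1 - dsc / std, 3))  # ~88.5% fewer weights
```

The same substitution applied across every ASPP branch, together with the lighter MobileNetV2 backbone, is consistent with the order-of-magnitude parameter reduction reported above.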

    Abstract:

    Visual navigation has the advantages of low cost, wide applicability and a high degree of intelligence, so it is widely used in orchard navigation tasks; quickly and accurately identifying the navigation path is a key step in achieving visual navigation. Aiming at the problems of multiple interference factors and complex image backgrounds in the application of visual navigation systems in the dragon fruit orchard environment, a visual navigation path recognition method was proposed for dragon fruit orchards based on an improved DeepLabv3+ network. Firstly, the Xception backbone feature extraction network of the traditional DeepLabv3+ was replaced with MobileNetV2, and the atrous convolution in atrous spatial pyramid pooling (ASPP) was replaced with depthwise separable convolution (DSC); while improving the model detection rate, this greatly reduced the number of model parameters and the memory footprint. Secondly, coordinate attention (CA) was introduced at the feature extraction module, which helped the model locate and identify road areas. Then experiments were conducted on a self-built dragon fruit orchard road dataset containing three different road conditions. The results showed that, compared with the traditional DeepLabv3+, the MIoU and MPA of the improved DeepLabv3+ were increased by 0.79 and 0.41 percentage points, respectively, reaching 95.80% and 97.86%. Frames per second (FPS) increased to 57.89 f/s, and the number of parameters and the memory footprint were reduced by 92.92% and 97.00%, to 3.87×10⁶ and 15.0 MB, respectively. The recognition results of the improved model on orchard roads were verified on the test set, indicating that the model had good robustness and anti-interference capability.
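The MIoU and MPA figures quoted above are standard semantic segmentation metrics computable from a per-class confusion matrix. A minimal numpy sketch, using made-up pixel counts for two classes (background, road) rather than the paper's data:

```python
import numpy as np

def miou_mpa(cm):
    # cm[i, j]: number of pixels of true class i predicted as class j
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp)  # per-class IoU
    pa = tp / cm.sum(axis=1)                           # per-class pixel accuracy
    return iou.mean(), pa.mean()

# toy 2-class confusion matrix (background, road); counts are invented
cm = np.array([[900, 50],
               [30, 820]])
miou, mpa = miou_mpa(cm)
print(round(miou, 4), round(mpa, 4))
```

Averaging over classes rather than pixels is what keeps these metrics meaningful when the road occupies only a small fraction of the frame.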
    In addition, the proposed model was compared with the Pspnet and U-net networks; the results showed that the improved model offered significant advantages in detection rate, number of parameters, and model size, making it more suitable for deployment on embedded devices. Based on the segmentation results of the model, the edge information on both sides of the road was extracted, the road boundary lines were fitted by the least-squares method, and finally the navigation path was extracted by an angle-bisector line fitting algorithm. Navigation path recognition accuracy was tested in three different road environments; the results showed that the average pixel error was 22 pixels and the average distance error was 7.58 cm. The road width of the orchard in this test was 3 m, so the average distance error accounted for only 2.53%. Therefore, the research results can provide an effective reference for visual navigation tasks in dragon fruit orchards.
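The path extraction step described above (least-squares boundary fits, then an angle-bisector midline) can be sketched with numpy. The boundary pixel coordinates and the `fit_line` helper below are invented stand-ins for the edge points that the paper extracts from the segmentation mask:

```python
import numpy as np

# Hypothetical (x, y) edge pixels for the left and right road boundaries;
# in the paper these come from the segmented road mask.
left = np.array([[100, 0], [110, 100], [120, 200], [130, 300]], dtype=float)
right = np.array([[500, 0], [480, 100], [460, 200], [440, 300]], dtype=float)

def fit_line(pts):
    # least-squares fit of x = a*y + b (x as a function of image row y,
    # which stays well-conditioned for near-vertical road boundaries)
    a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)
    return a, b

(al, bl), (ar, br) = fit_line(left), fit_line(right)

# Angle-bisector midline: average the direction angles of the two boundary
# lines and pass the result through their intersection point.
y_cross = (br - bl) / (al - ar)
x_cross = al * y_cross + bl
theta = (np.arctan(al) + np.arctan(ar)) / 2  # bisector angle
a_mid = np.tan(theta)
b_mid = x_cross - a_mid * y_cross
print(a_mid, b_mid)  # slope and intercept of the navigation line
```

Using the bisector rather than the simple average of the two fitted lines keeps the midline correct even when the boundaries converge toward a vanishing point, as they do in a forward-facing orchard image.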

Cite this article:

ZHOU Xuecheng, XIAO Mingwei, LIANG Yingkai, SHANG Fengnan, CHEN Qiao, LUO Chendi. Navigation Path Recognition between Dragon Orchard Using Improved DeepLabv3+ Network[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(9): 35-43.

History
  • Received: 2023-02-24
  • Published online: 2023-09-10