Body Weight Measurement of Group-housed Pigs Based on a Two-stream Cross-modal Feature Fusion Model
Fund project: China Agriculture Research System (CARS-35), funded by the Ministry of Finance and the Ministry of Agriculture and Rural Affairs


Estimation of Pig Weight Based on Cross-modal Feature Fusion Model

    Abstract:

    To address the problem of accurately measuring pig body weight, a cross-modality feature fusion model (cross-modality feature fusion ResNet, CFF-ResNet) was proposed. It fully exploits the complementarity between the texture and contour information of RGB images and the spatial structure information of depth images, enabling contact-free, intelligent weight measurement of pigs in a group-housing environment. First, top-view RGB and depth images of the pig pen were acquired and registered, and each target pig was segmented at the pixel level in a coarse-to-fine manner with the EdgeFlow algorithm. Then, a two-stream architecture was built on the ResNet50 network, with internal gates inserted to form bidirectional connections that effectively combine the features of the RGB and depth streams, achieving cross-modal feature fusion. Finally, each stream regressed its own weight estimate, and the two estimates were averaged to obtain the final measurement. In the experiments, data were collected from group-housed pigs at a breeding boar farm, and a dataset of 9842 registered RGB-depth image pairs was constructed, of which 6909 pairs were used for training and 2933 pairs for testing. The proposed model achieved a mean absolute error of 3.019 kg and an average accuracy of 96.132% on the test set. Compared with RGB-only and depth-only single-modality baseline models, it measured weight more accurately, reducing the mean absolute error by 18.095% and 12.569%, respectively. It also outperformed existing pig weight measurement methods, namely a conventional image processing model, an improved EfficientNetV2 model, an improved DenseNet201 model, and the BotNet+DBRB+PFC model, reducing the mean absolute error by 46.272%, 14.403%, 8.847%, and 11.414%, respectively. The experimental results show that the model effectively learns cross-modal features, meets the high-accuracy requirements of pig weight measurement, and provides technical support for weight measurement of pigs in group-housing environments.
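
The gated two-stream fusion and the averaging of the two regression heads described above can be pictured with a minimal PyTorch sketch. This is an illustration under assumptions, not the paper's exact architecture: the class name CFFResNetSketch, the sigmoid 1x1-convolution gates applied after the last residual stage, and the replication of the depth map to three channels so it fits a stock ResNet50 stem are all choices made here for brevity.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class CFFResNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # One ResNet50 backbone per modality; each ends in a 1-output
        # regression head (backbone.fc) for the per-stream weight estimate.
        self.rgb = resnet50(num_classes=1)
        self.depth = resnet50(num_classes=1)
        # Learned sigmoid gates (1x1 convolutions over the 2048-channel
        # layer4 features) controlling what each stream passes to the other.
        self.gate_r2d = nn.Sequential(nn.Conv2d(2048, 2048, 1), nn.Sigmoid())
        self.gate_d2r = nn.Sequential(nn.Conv2d(2048, 2048, 1), nn.Sigmoid())

    @staticmethod
    def _features(net, x):
        # Run a torchvision ResNet up to and including layer4.
        x = net.maxpool(net.relu(net.bn1(net.conv1(x))))
        return net.layer4(net.layer3(net.layer2(net.layer1(x))))

    @staticmethod
    def _regress(net, feat):
        # Global average pooling followed by the linear regression head.
        return net.fc(torch.flatten(net.avgpool(feat), 1))

    def forward(self, rgb, depth3):
        # depth3: depth map replicated to 3 channels (assumption, see above).
        fr = self._features(self.rgb, rgb)       # texture/contour features
        fd = self._features(self.depth, depth3)  # spatial-structure features
        # Bidirectional additive connection: each stream adds the other
        # stream's features, scaled element-wise by a learned gate.
        fr_fused = fr + self.gate_d2r(fd) * fd
        fd_fused = fd + self.gate_r2d(fr) * fr
        # Each stream regresses a weight; the mean is the final estimate.
        return (self._regress(self.rgb, fr_fused) +
                self._regress(self.depth, fd_fused)) / 2

model = CFFResNetSketch()
out = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(out.shape)  # torch.Size([2, 1])

The bidirectional additive form shown here is the variant the ablation study (see the English abstract below) reports as the best-performing connection.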

    Abstract:

    In recent years, with the growing scale of pig farming worldwide, farms urgently need automated livestock information management systems to ensure animal welfare. As a key indicator of growth, body weight helps farmers monitor the health status of their pigs. Traditional methods measure pig weight manually, which is time-consuming and laborious. With the development of image processing technology, estimating pig weight from images has opened a path toward intelligent weight measurement. However, many recent studies considered only one image modality, either RGB or depth, ignoring the complementary information between the two. To address this issue, a cross-modality feature fusion model, CFF-ResNet, was proposed. It makes full use of the complementarity between the texture and contour information of RGB images and the spatial structure information of depth images to realize contact-free, intelligent estimation of pig weight in a group farming environment. Firstly, top-view RGB and depth images of the piggery were acquired, and the correspondence between the pixel coordinates of the two modalities was used to align them. The EdgeFlow algorithm was then used to segment each target pig at the pixel level in a coarse-to-fine manner, filtering out irrelevant background information. A two-stream architecture was constructed based on the ResNet50 network, with internal gates inserted to form bidirectional connections that effectively combine the features of the RGB and depth streams for cross-modal feature fusion. Finally, the two streams regressed separate weight predictions, and the final estimate was obtained by averaging them. In the experiment, data were collected from a commercial pig farm in Henan, and a dataset of 9842 pairs of aligned RGB and depth images was constructed, comprising 6909 training pairs and 2933 test pairs. The experimental results showed that the mean absolute error of the proposed model on the test set was 3.019 kg, a reduction of 18.095% and 12.569% compared with the RGB-based and depth-based single-stream baseline models, respectively. The average accuracy of the proposed method reached 96.132%, which was very promising. Notably, the model added no training parameters beyond those of two single-stream models processing RGB and depth images separately. Its mean absolute error was also 46.272%, 14.403%, 8.847%, and 11.414% lower than those of other existing methods: a conventional image processing method, an improved EfficientNetV2 model, an improved DenseNet201 model, and the BotNet+DBRB+PFC model, respectively. In addition, to verify the effectiveness of cross-modal feature fusion, a series of ablation experiments explored different alternatives for the two-stream connections, including unidirectional and bidirectional additive and multiplicative connections; the bidirectional additive connection performed best. All of these results show that the proposed model can effectively learn cross-modal features and meets the requirements of accurate pig weight measurement, providing effective technical support for pig weight measurement in group farming environments.
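
For reference, the two headline metrics can be computed as in the short NumPy sketch below. The paper does not spell out its accuracy formula; reading "average accuracy" as 100% minus the mean relative error is an assumption made here, and the weights used are made-up illustrative values.

import numpy as np

def mean_absolute_error(pred, true):
    # MAE in kg: average absolute deviation of estimates from ground truth.
    pred, true = np.asarray(pred, dtype=float), np.asarray(true, dtype=float)
    return np.mean(np.abs(pred - true))

def average_accuracy(pred, true):
    # Assumed reading of "average accuracy": 100% minus the mean relative
    # error of the estimates with respect to the true weights.
    pred, true = np.asarray(pred, dtype=float), np.asarray(true, dtype=float)
    return 100.0 * (1.0 - np.mean(np.abs(pred - true) / true))

y_true = [78.0, 102.5, 64.3]  # ground-truth pig weights in kg (made-up)
y_pred = [80.1, 99.8, 65.0]   # model estimates in kg (made-up)
print(f"MAE = {mean_absolute_error(y_pred, y_true):.3f} kg")
print(f"Average accuracy = {average_accuracy(y_pred, y_true):.3f}%")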

Cite this article

HE Wei, MI Yang, LIU Gang, DING Xiangdong, LI Tao. Estimation of Pig Weight Based on Cross-modal Feature Fusion Model[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(S1): 275-282, 329.

History
  • Received: 2023-06-20
  • Published online: 2023-12-10