
Pig Facial Landmark Detection Method Based on FCM-SimCC

Fund Project: Natural Science Foundation of Shandong Province (ZR2022MC152) and Key Research and Development Program of Shandong Province (2018GGX104012)


Abstract:

As the pig farming industry shifts toward large-scale, intensive production, non-invasive individual identification technology is crucial for traceability, food safety, and disease control, and pig facial landmark detection is the prerequisite for non-invasive individual identification of pigs. Building on the SimCC keypoint localization algorithm, this study proposes a pig facial landmark detection model, FCM-SimCC: FasterNet replaces the original CSPDarkNet as the feature extraction network; the CA attention mechanism is embedded in FasterNet to improve the model's ability to capture long-range features; and the model is supervised with the MLT adaptive-weight multi-task loss function, which combines the KL divergence loss with Wing Loss. Experiments on a dataset of 4861 images covering multiple pig breeds and a variety of facial poses show that the proposed model achieves a mean average precision, 50% average precision, and 75% average precision of 76.12%, 93.44%, and 83.25%, improvements of 3.14, 1.77, and 4.47 percentage points over the original model, with 2.79×10⁹ floating-point operations and 1.38×10⁷ parameters, a 38.68% reduction in floating-point operations and a 20.16% reduction in parameters. Comparisons with mainstream keypoint localization methods such as DarkPose, HRNet, and YOLO X-Pose show that FCM-SimCC achieves fast and accurate pig facial landmark detection with low computational cost and a small number of parameters, providing technical support for pig facial landmark detection and subsequent individual pig identification.
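The core idea FCM-SimCC inherits from SimCC is to treat keypoint localization as two 1D classification problems, one along the horizontal axis and one along the vertical axis, instead of regressing 2D heatmaps. The PyTorch sketch below illustrates such a head; the channel width, keypoint count, output stride, and split ratio are illustrative assumptions rather than the configuration reported in the paper, and the backbone (FasterNet in the paper) is left abstract.

# Minimal sketch of a SimCC-style keypoint head (PyTorch). Layer sizes, the
# number of facial keypoints, the output stride, and the split ratio below are
# assumptions for illustration, not the paper's exact configuration.
import torch
import torch.nn as nn

class SimCCHead(nn.Module):
    """Predicts, for each keypoint, 1D classification logits along the x and y
    axes instead of a 2D heatmap, which is the core idea of SimCC."""
    def __init__(self, in_channels=512, num_keypoints=7,
                 input_size=(256, 256), split_ratio=2.0, stride=32):
        super().__init__()
        # A 1x1 convolution collapses backbone features to one channel per keypoint.
        self.final_conv = nn.Conv2d(in_channels, num_keypoints, kernel_size=1)
        # Each axis is discretised into input_size * split_ratio sub-pixel bins.
        self.x_bins = int(input_size[1] * split_ratio)
        self.y_bins = int(input_size[0] * split_ratio)
        feat_h, feat_w = input_size[0] // stride, input_size[1] // stride
        self.mlp_x = nn.Linear(feat_h * feat_w, self.x_bins)
        self.mlp_y = nn.Linear(feat_h * feat_w, self.y_bins)
        self.split_ratio = split_ratio

    def forward(self, feats):                      # feats: (B, C, H/stride, W/stride)
        x = self.final_conv(feats)                 # (B, K, h, w)
        x = x.flatten(2)                           # (B, K, h*w)
        return self.mlp_x(x), self.mlp_y(x)        # logits over x bins and y bins

# Decoding: the argmax of each axis' logits divided by split_ratio gives the
# keypoint coordinate in input-image pixels.
head = SimCCHead()
logits_x, logits_y = head(torch.randn(1, 512, 8, 8))
coords = torch.stack([logits_x.argmax(-1), logits_y.argmax(-1)], dim=-1) / head.split_ratio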

Abstract:

With the transformation of the pig breeding industry toward large-scale and intensive production, non-invasive individual identification technology is essential for traceability, food safety, disease control, and scientific breeding. Pig facial landmark detection serves as a fundamental requirement for achieving non-invasive pig identification. A pig facial landmark detection model named FCM-SimCC was introduced, building upon the SimCC landmark detection algorithm. The model replaced CSPDarkNet with FasterNet for feature extraction and incorporated the CA attention mechanism within FasterNet to enhance the capture of long-distance features. Supervision of the model was achieved through the MLT adaptive-weight multi-task loss function, which combines the KL divergence loss and Wing Loss. Tested on a dataset of 4861 images representing a variety of pig breeds and facial poses, the FCM-SimCC model attained a mean average precision, 50% average precision, and 75% average precision of 76.12%, 93.44%, and 83.25%, respectively. These results indicated improvements of 3.14, 1.77, and 4.47 percentage points over the original model, with computational demand reduced to 2.79×10⁹ floating-point operations and a parameter count of 1.38×10⁷, marking a 38.68% decrease in floating-point operations and a 20.16% reduction in parameters. When compared with mainstream landmark detection methods such as DeepPose, HRNet, and YOLO X-Pose, the FCM-SimCC model showed its ability to provide rapid and precise pig facial landmark detection with lower computational cost and fewer parameters, offering valuable support for pig facial landmark detection and individual pig identification.
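The abstracts state that training is supervised by an adaptive-weight multi-task loss combining a KL divergence term with Wing Loss. The sketch below shows one plausible way to combine the two under learnable weights, using homoscedastic-uncertainty-style weighting; this weighting scheme and the function names are assumptions, not necessarily the paper's exact MLT formulation. The KL term is shown for the x axis only; the y axis would be handled identically.

# Hedged sketch: KL divergence on SimCC axis distributions plus Wing Loss on
# decoded coordinates, combined with learnable task weights. The uncertainty-
# style weighting is an assumption for illustration, not the paper's exact
# MLT adaptive-weight formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def wing_loss(pred, target, omega=10.0, epsilon=2.0):
    """Wing Loss: logarithmic near zero error, L1-like for large errors."""
    diff = (pred - target).abs()
    c = omega - omega * torch.log(torch.tensor(1.0 + omega / epsilon))
    return torch.where(diff < omega,
                       omega * torch.log(1.0 + diff / epsilon),
                       diff - c).mean()

class AdaptiveKeypointLoss(nn.Module):
    """Combines the two objectives with learnable log-variance weights."""
    def __init__(self):
        super().__init__()
        self.log_var_kl = nn.Parameter(torch.zeros(()))
        self.log_var_wing = nn.Parameter(torch.zeros(()))

    def forward(self, pred_logits_x, target_dist_x, pred_xy, target_xy):
        # KL divergence between the predicted x-axis distribution and a
        # (e.g. Gaussian-smoothed) label distribution; y axis handled the same way.
        kl = F.kl_div(F.log_softmax(pred_logits_x, dim=-1),
                      target_dist_x, reduction='batchmean')
        wing = wing_loss(pred_xy, target_xy)
        # Each term is scaled by exp(-log_var); the +log_var terms keep the
        # learned weights from driving both scales to zero.
        return (torch.exp(-self.log_var_kl) * kl + self.log_var_kl
                + torch.exp(-self.log_var_wing) * wing + self.log_var_wing)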

Cite this article

ZHANG Huili, WANG Guangyuan, YUN Yuliang, DAI Chenlong, TENG Fei, REN Jinglong. Pig Facial Landmark Detection Method Based on FCM-SimCC[J]. Transactions of the Chinese Society for Agricultural Machinery, 2025, 56(4): 397-407.

History
  • Received: 2024-02-21
  • Published online: 2025-04-10