
CN117351430A - Stacked cage-raising laying hen zone-position egg-laying monitoring method and system based on computer vision - Google Patents


Info

Publication number
CN117351430A
CN117351430A
Authority
CN
China
Prior art keywords
monitoring
egg
laying
cage
frame
Prior art date
Legal status
Pending
Application number
CN202311435584.0A
Other languages
Chinese (zh)
Inventor
林宏建
吴锐
何叶帆
贺鹏光
窦军
泮进明
应义斌
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202311435584.0A
Publication of CN117351430A
Legal status: Pending

Classifications

    • G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N 3/08 — Learning methods
    • G06N 3/0985 — Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/82 — Image or video recognition or understanding using neural networks
    • G06V 20/40 — Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a computer-vision-based method for zone-position egg-laying monitoring of stacked cage-raised laying hens, comprising: step 1, acquiring video data of two tiers of laying-hen positions with an inspection device; step 2, labeling the acquired video data and combining the video image set with the labels to form a data set; step 3, constructing a monitoring algorithm based on an improved DeepSort framework, the monitoring algorithm comprising a target detection module, a trajectory matching module, a trajectory generation module and a positioning-and-counting module; step 4, training the monitoring algorithm with the data set to obtain a monitoring model; and step 5, acquiring video data of the laying-hen positions to be counted and inputting it into the monitoring model to obtain the egg yield of the corresponding positions. The invention also provides a stacked cage-raised laying-hen zone-position egg-laying monitoring system. Because the method is designed for multi-tier, multi-column stacked cage positions, egg-yield monitoring is more intuitive, more accurate and more efficient.

Description

A computer-vision-based method and system for zone-position egg-laying monitoring of stacked cage-raised laying hens

Technical Field

The invention belongs to the technical field of livestock farming, and in particular relates to a computer-vision-based method and system for zone-position egg-laying monitoring of stacked cage-raised laying hens.

Background Art

The development of deep learning has given computer vision strong support, markedly improving the accuracy and stability of vision-based monitoring methods. In particular, the wide application of deep learning provides more precise and efficient approaches to tasks such as image processing and object detection, helping to reduce labor costs and monitoring errors. In recent years, researchers have applied computer vision to egg-yield monitoring for caged laying hens: Zhao Chunjiang et al. used QR-code labels together with an improved YOLO to locate single-tier cages and count eggs. That approach has clear limitations. On the one hand, during rearing the QR-code labels are easily soiled or torn by chicken feces, feed residue and other foreign matter, making them unscannable; on the other hand, the content of a QR code is fixed and cannot be modified, so changing cage information requires remaking the labels, and maintenance before and after deployment incurs heavy labor costs. Moreover, the method applies only to a single tier, so its detection efficiency is low.

Patent document CN114842470A discloses an egg counting method and positioning system for the stacked cage-raising mode. The method comprises an egg target tracking and counting module, which counts eggs based on a multi-class egg detection model and the multi-object-tracking DeepSort algorithm and distinguishes egg classes so they can be counted separately, and a speed-display positioning and reading-recognition module, which uses convolutional networks to design and train a UNet-based semantic segmentation model and a CNN-based classification model for the speed display. Appearance features are introduced into the multi-object tracking of eggs to improve tracking accuracy. However, the method places high demands on the egg video data, requires one speed sensor for every two conveyor belts, makes equipment installation troublesome for multi-tier, multi-column cages, gives an unintuitive picture of cage positions on site, and entails a large up-front investment and burdensome day-to-day maintenance in the farming scenario.

Patent document CN116091473A discloses a method and system for fine-grained, multi-channel zone-position egg-laying performance monitoring in a layer house. Multiple cages are arranged side by side, with one conveyor belt per cage tier; after a hen lays an egg, the egg slides down the sloped cage floor onto the belt, which, once started, carries it forward to the egg collector, and the monitoring system detects eggs at the egg-channel outlet in the video frame to monitor their position and number. Because the method measures egg speed with computer vision, it places high demands on the monitoring system; for a large chicken farm, many monitoring systems must be deployed, cage positioning remains unintuitive, and building the system is time-consuming and laborious.

Summary of the Invention

The main purpose of the present invention is to provide a computer-vision-based method and system for zone-position egg-laying monitoring of stacked cage-raised laying hens. The method is designed for multi-tier, multi-column stacked cage positions: egg-yield monitoring for each cage is more intuitive, more accurate and more efficient, less monitoring equipment is needed, no external auxiliary labels are used, later maintenance is very simple, and efficient unmanned monitoring is ensured.

To achieve the first object of the present invention, a computer-vision-based method for zone-position egg-laying monitoring of stacked cage-raised laying hens is provided, comprising:

Step 1: acquire video data of two tiers of laying-hen positions with an inspection device, the inspection device comprising two monocular cameras with the same downward viewing angle but different mounting heights.

Step 2: extract images frame by frame from the acquired video data to build a video image set, label the set with the cage columns and eggs in each image, and combine the video image set and the labels into a data set.

Step 3: construct a monitoring algorithm based on an improved DeepSort framework, the monitoring algorithm comprising a target detection module, a trajectory matching module, a trajectory generation module and a positioning-and-counting module. The target detection module calibrates the numbering of the cage columns in the input video and analyses the input video to generate detection information for each frame, the detection information comprising detection boxes obtained by recognition or prediction boxes obtained by prediction, each detection box carrying the trajectory matched to an egg together with its appearance features and trajectory motion parameters. The trajectory matching module performs multiple rounds of IOU matching between the detection boxes of the current frame and the prediction boxes predicted from the previous frame, and analyses a cost matrix based on the matching results to output the corresponding linear matching relations. The trajectory generation module performs cascade matching according to the linear matching relations between the detection boxes and all prediction boxes, combined with the appearance features, trajectory motion parameters and trajectories of the detection boxes, to generate egg trajectory images. The positioning-and-counting module locates each laying-hen position according to the cage-column numbering and, combined with the egg trajectory images, produces the egg count for the corresponding position.

Step 4: train the monitoring algorithm with the data set to obtain a monitoring model for monitoring the egg yield of laying-hen positions.

Step 5: acquire video data of the laying-hen positions to be counted and input it into the monitoring model to obtain the egg yield of the corresponding positions.

The present invention needs only two downward-looking industrial cameras to monitor multi-tier, multi-column cages in real time. The target-tracking part of the model is updated according to the chosen hardware: by substituting a target detection module dedicated to laying-hen positions and adding a second round of IOU matching, both the accuracy and the speed of numbering and locating multi-tier, multi-column positions and of counting the eggs laid at each numbered position are greatly improved.

Specifically, before the inspection device is deployed, the two monocular cameras must be calibrated and de-distorted to ensure accurate egg-yield monitoring across the different tiers and columns of laying-hen positions.

Specifically, the calibration and de-distortion are expressed as follows:

Z_c [u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] [R_{3×3} | t] [X_W, Y_W, Z_W, 1]^T   (1)

x′ = x_d(1 + k_1 r² + k_2 r⁴) + 2p_1 x_d y_d + p_2(r² + 2x_d²)
y′ = y_d(1 + k_1 r² + k_2 r⁴) + p_1(r² + 2y_d²) + 2p_2 x_d y_d   (2)

where Z_c is the scale factor, i.e. the distance from the optical centre to the image plane of the laying-hen position; u and v are the horizontal and vertical coordinates of the position in the pixel coordinate system; f_x and f_y are the normalized focal lengths along the x and y axes; u_0 and v_0 are the principal-point coordinates; R_{3×3} is the rotation matrix obtained from camera calibration and t the translation vector; X_W, Y_W and Z_W are the world coordinates; (x′, y′) are the distortion-corrected coordinates of the position; k_1 and k_2 are the radial distortion coefficients; p_1 and p_2 are the tangential distortion coefficients; r is the radial distance (r² = x_d² + y_d²); and (x_d, y_d) are the distorted coordinates of the position.

Specifically, the video image set is constructed as follows:

the VideoCapture function is used to grab frames from the acquired video data at a preset frame interval to obtain an initial image set;

data cleaning and data augmentation are applied to the initial image set to obtain a video image set containing the laying-hen positions.

The data cleaning includes deduplication, filtering and repair.

The data augmentation includes HSV augmentation, brightness augmentation and Mixup augmentation.
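
As an illustration, the brightness and Mixup augmentations named above can be sketched on raw pixel values. The function names, the clipping convention and the sample values are assumptions for illustration, not the implementation used in the embodiment.

```python
def brightness_augment(pixels, factor):
    # Scale pixel intensities by `factor`, clipping to the 0-255 range.
    return [min(255, max(0, round(p * factor))) for p in pixels]

def mixup(pixels_a, pixels_b, lam):
    # Blend two equally sized images: lam * a + (1 - lam) * b.
    assert len(pixels_a) == len(pixels_b)
    return [lam * a + (1.0 - lam) * b for a, b in zip(pixels_a, pixels_b)]

img = [10, 128, 250]
bright = brightness_augment(img, 1.2)          # [12, 154, 255] after clipping
mixed = mixup([0, 100], [200, 200], lam=0.5)   # [100.0, 150.0]
```

HSV augmentation works the same way after converting the image into the HSV colour space and perturbing the hue, saturation and value channels.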

Specifically, the monitoring algorithm further comprises a global attention module, which applies attention operations over multiple dimensions of each frame of the input video and multiplies the features of the different dimensions in sequence to obtain feature-enhanced video data that is fed to the target detection module. Introducing the global attention mechanism better captures the dependencies between the channel and spatial dimensions of the cage-column and egg feature maps, significantly improving model performance.
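
A minimal sketch of the sequential channel-then-spatial reweighting described above, applied to a plain nested-list C×H×W feature map. A real GAM module learns its channel and spatial weights with MLP and convolution sub-networks, so the fixed weights and names here are purely illustrative assumptions.

```python
def apply_global_attention(feature, channel_w, spatial_w):
    # Reweight a C x H x W feature map: first by per-channel weights,
    # then by a shared H x W spatial attention map.
    C, H, W = len(feature), len(feature[0]), len(feature[0][0])
    return [[[feature[c][h][w] * channel_w[c] * spatial_w[h][w]
              for w in range(W)] for h in range(H)] for c in range(C)]

feat = [[[1.0, 2.0], [3.0, 4.0]],   # channel 0
        [[1.0, 1.0], [1.0, 1.0]]]   # channel 1
ch = [0.5, 2.0]                     # channel attention weights
sp = [[1.0, 0.0], [1.0, 1.0]]       # spatial attention map
out = apply_global_attention(feat, ch, sp)
# channel 0 becomes [[0.5, 0.0], [1.5, 2.0]]
```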

Specifically, the target detection module also incorporates a Ghost module, which reduces the model's parameter count: through grouped convolution and cheap linear operations it generates more feature maps from fewer parameters, greatly lowering model complexity, shrinking the number of network parameters and speeding up model inference.

Specifically, the trajectory matching module solves the cost matrix with the Hungarian algorithm to obtain the corresponding linear matching relations, which comprise trajectory mismatch, unmatched detection boxes, and successful matches between detection boxes and prediction boxes.

If the matching result is a trajectory mismatch or an unmatched detection box, the IOU matching is repeated until the termination condition is reached.
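
The IOU matching step can be illustrated with a simple matcher over an IoU cost. DeepSort proper solves the full cost matrix with the Hungarian algorithm; the greedy strategy and the 0.3 threshold below are simplifying assumptions for the sketch.

```python
def iou(box_a, box_b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def greedy_iou_match(tracks, detections, iou_threshold=0.3):
    # Greedily pair predicted track boxes with current detections by
    # descending IoU; leftovers become unmatched tracks / detections.
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_threshold:
            break
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    unmatched_tracks = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_dets = [i for i in range(len(detections)) if i not in used_d]
    return matches, unmatched_tracks, unmatched_dets
```

Repeating this step on the leftover tracks and detections with a relaxed threshold corresponds to the second round of IOU matching described in the method.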

Specifically, in the case of trajectory mismatch, any trajectory that remains unmatched for more than 50 consecutive frames is deleted outright.

Specifically, detection boxes that fail to match any trajectory are initialized as new trajectories.
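
The two rules above — delete a trajectory after more than 50 consecutively missed frames, and start a new trajectory for each unmatched detection — amount to simple per-frame track bookkeeping. The class and function names below are illustrative assumptions.

```python
class Track:
    def __init__(self, box, track_id):
        self.box = box
        self.track_id = track_id
        self.misses = 0  # consecutive frames without a matched detection

def update_tracks(tracks, matched_ids, new_boxes, next_id, max_age=50):
    # One bookkeeping step: age unmatched tracks, drop those missing for
    # more than `max_age` frames, and start a track per unmatched detection.
    survivors = []
    for t in tracks:
        if t.track_id in matched_ids:
            t.misses = 0
            survivors.append(t)
        else:
            t.misses += 1
            if t.misses <= max_age:
                survivors.append(t)
    for box in new_boxes:
        survivors.append(Track(box, next_id))
        next_id += 1
    return survivors, next_id
```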

Specifically, the positioning-and-counting module uses the upper- and lower-tier cage-column coordinates to generate the upper-tier laying-hen region of interest LROI1 and the lower-tier region of interest LROI2, numbers them in real time, and counts the eggs inside the LROI1 and LROI2 regions. When the video ends, the egg counts of all frames in every LROI1 and LROI2 region are aggregated to give the final egg-laying observation for each laying-hen position.
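
Counting eggs inside the numbered regions of interest can be sketched as a point-in-rectangle test on each detection's centre. Aggregating the per-frame counts into the final observation is left out, and all names and coordinates below are assumptions.

```python
def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def count_eggs_per_roi(rois, egg_boxes):
    # Count eggs whose box centre falls inside each numbered ROI.
    # `rois` maps a cage-position label to an (x1, y1, x2, y2) region.
    counts = {name: 0 for name in rois}
    for box in egg_boxes:
        cx, cy = box_center(box)
        for name, (x1, y1, x2, y2) in rois.items():
            if x1 <= cx <= x2 and y1 <= cy <= y2:
                counts[name] += 1
                break  # each egg is attributed to at most one region
    return counts

rois = {"LROI1": (0, 0, 100, 50), "LROI2": (0, 50, 100, 100)}
eggs = [(10, 10, 20, 20), (10, 60, 20, 70), (30, 55, 40, 65)]
counts = count_eggs_per_roi(rois, eggs)   # {"LROI1": 1, "LROI2": 2}
```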

Specifically, during training, the WIoU loss function is used to train the target detector of the monitoring algorithm and update its parameters.

Specifically, the loss function is expressed as follows:

L_WIoUv1 = R_WIoU · L_IoU   (3)

R_WIoU = exp(((x_g − x_gt)² + (y_g − y_gt)²) / (W_g² + H_g²)*)   (4)

L_WIoUv3 = r′ · L_WIoUv1,  r′ = β / (δ · α^(β−δ))   (5)

where x_g and y_g denote the predicted-box centre coordinates and x_gt and y_gt the ground-truth-box centre coordinates; W_g and H_g denote the width and height of the smallest rectangle enclosing the predicted and ground-truth boxes; the superscript * marks quantities detached from the gradient computation, effectively eliminating factors that hinder convergence; R_WIoU is the WIoU penalty term, which amplifies the loss of ordinary-quality anchor boxes; β is the outlier degree, used to assign a gradient weight gain to high-quality anchor boxes; r′ is the gradient gain; α and δ are hyperparameters; and L_IoU is the IOU loss.
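
Under the WIoU formulation above, a single loss value can be computed directly for one box pair. The α and δ defaults below follow common WIoU settings and are assumptions, as is treating the mean IoU loss β is normalized by as a given constant rather than a running average over the batch.

```python
import math

def iou_loss(pred, gt):
    # 1 - IoU for boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    union = area_p + area_g - inter
    return 1.0 - (inter / union if union > 0 else 0.0)

def wiou_v3(pred, gt, mean_iou_loss, alpha=1.9, delta=3.0):
    # WIoU v3 sketch: distance-based focusing term R_WIoU times a gradient
    # gain r' computed from the outlier degree beta = L_IoU / mean L_IoU.
    l_iou = iou_loss(pred, gt)
    cxp, cyp = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cxg, cyg = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    # Smallest enclosing box of pred and gt (treated as a constant here).
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    r_wiou = math.exp(((cxp - cxg) ** 2 + (cyp - cyg) ** 2) / (wg ** 2 + hg ** 2))
    beta = l_iou / mean_iou_loss
    gain = beta / (delta * alpha ** (beta - delta))
    return gain * r_wiou * l_iou
```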

To achieve the second object of the present invention, a stacked cage-raised laying-hen zone-position egg-laying monitoring system is provided, implemented by the above monitoring method and comprising a chicken-farm management unit, an egg-laying monitoring unit and a system management unit.

The chicken-farm management unit covers farm environment monitoring, feeding management and personnel management.

The egg-laying monitoring unit covers cage monitoring, egg-yield monitoring and abnormality alarms.

The system management unit comprises a system settings part for coordinating the chicken-farm management unit and the egg-laying monitoring unit, a system user management part for managing accounts, and an overall data management part for storing the daily monitoring results.

Compared with the prior art, the present invention has the following beneficial effects:

The monitoring model provided by the invention combines a tracking algorithm with an external feature of the laying-hen positions (the cage columns) to locate, number and count each position, requiring no external auxiliary labels; it is simple to implement with almost no labor cost, makes real-time egg-laying monitoring more intuitive and efficient, and keeps equipment maintenance very simple. The tracking algorithm is redesigned for this scenario: a target detection module dedicated to laying-hen positions is substituted and a second round of IOU matching is added, greatly improving the accuracy and speed of numbering and locating multi-tier, multi-column positions and of counting the eggs laid at each numbered position. Furthermore, only two downward-looking industrial cameras are needed to monitor multi-tier, multi-column cages in real time, so less equipment is invested before and after deployment, and monitoring is faster and more efficient.

Brief Description of the Drawings

Figure 1 is a flow chart of the computer-vision-based zone-position egg-laying monitoring method for stacked cage-raised laying hens provided in this embodiment;

Figure 2 is a schematic structural diagram of the inspection device provided in this embodiment;

Figure 3 is a schematic structural diagram of the monitoring algorithm provided in this embodiment;

Figure 4 is a workflow chart of the positioning and counting provided in this embodiment;

Figure 5 is a schematic structural diagram of the zone-position egg-laying monitoring system provided in this embodiment;

Figure 6 is a bar chart of the monitoring data of the monitoring module for test number 1 in this embodiment;

Figure 7 is a bar chart of the misdetected-cage data of the monitoring module for test number 1 in this embodiment;

Figure 8 is a bar chart of the monitoring data of the monitoring module for test number 2 in this embodiment;

Figure 9 is a bar chart of the misdetected-cage data of the monitoring module for test number 2 in this embodiment;

Figure 10 is a bar chart of the monitoring data of the monitoring module for test number 3 in this embodiment;

Figure 11 is a bar chart of the misdetected-cage data of the monitoring module for test number 3 in this embodiment;

Figure 12 is a bar chart of the monitoring data of the monitoring module for test number 4 in this embodiment;

Figure 13 is a bar chart of the misdetected-cage data of the monitoring module for test number 4 in this embodiment.

Detailed Description of Embodiments

Embodiments of the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the protection scope of the present invention.

As shown in Figure 1, the zone-position egg-laying monitoring method provided in this embodiment comprises the following.

As shown in Figure 2, the stacked cage-raising layout comprises multiple tiers and columns of laying cages 2, each delimited by two cage columns 1; the eggs laid by the hens roll down a sloped egg-rolling surface 3 onto the metal bottom-mesh platform 4 beneath the laying cage.

Step 1: mount the two X001 monocular industrial cameras 5 on the camera bracket 6, 0.58 m from the laying cages, at the heights of the second and fourth cage tiers, looking down at a 45-degree angle so that their fields of view cover two non-overlapping tiers of laying-hen positions. Together with a Raspberry Pi 7 and an inspection robot 8 they form the inspection device, and the two monocular industrial cameras are calibrated and de-distorted to ensure accurate egg-yield monitoring across the different tiers and columns.

In this embodiment, the calibration and de-distortion of the two monocular industrial cameras are computed as:

Z_c [u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] [R_{3×3} | t] [X_W, Y_W, Z_W, 1]^T   (1)

x′ = x_d(1 + k_1 r² + k_2 r⁴) + 2p_1 x_d y_d + p_2(r² + 2x_d²)
y′ = y_d(1 + k_1 r² + k_2 r⁴) + p_1(r² + 2y_d²) + 2p_2 x_d y_d   (2)

where Z_c is the scale factor, i.e. the distance from the optical centre to the image plane of the laying-hen position; u and v are the horizontal and vertical coordinates of the position in the pixel coordinate system; f_x and f_y are the normalized focal lengths along the x and y axes; u_0 and v_0 are the principal-point coordinates; R_{3×3} is the rotation matrix obtained from camera calibration and t the translation vector; X_W, Y_W and Z_W are the world coordinates; (x′, y′) are the distortion-corrected coordinates of the position; k_1 and k_2 are the radial distortion coefficients; p_1 and p_2 are the tangential distortion coefficients; r is the radial distance (r² = x_d² + y_d²); and (x_d, y_d) are the distorted coordinates of the position.
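
The distortion model in equation (2) can be exercised numerically: applying the radial and tangential terms is direct, while undistortion is done by fixed-point iteration, since the model has no closed-form inverse. The coefficient values below are arbitrary examples, not calibration results from the embodiment.

```python
def distort(x, y, k1, k2, p1, p2):
    # Apply the radial/tangential distortion model to normalized image
    # coordinates (x, y), returning the distorted coordinates.
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def undistort(xd, yd, k1, k2, p1, p2, iters=20):
    # Invert the model by fixed-point iteration: repeatedly re-estimate
    # the ideal point from the current guess.
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y

# Round-trip check with example coefficients.
xd, yd = distort(0.3, -0.2, -0.1, 0.01, 0.001, -0.002)
x, y = undistort(xd, yd, -0.1, 0.01, 0.001, -0.002)
```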

Step 2: the inspection device built in step 1 moves along a prescribed path at 0.3 m/s, acquiring video data of the two tiers of laying-hen positions; frames are extracted to obtain feature-bearing images, the data is preprocessed, and a recognition data set for the YOLO model and a tracking data set for the DeepSort model are compiled, the data-set classes being cage columns and eggs.

In this embodiment, this specifically includes the following steps:

Step 2.1: collect videos of the stacked laying-hen positions at the different heights and use the VideoCapture function to grab every 8th frame, yielding 754 images.
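
The frame grabbing of step 2.1 reduces to selecting every 8th frame index. A minimal sketch, without the actual OpenCV VideoCapture read loop, is:

```python
def sample_frame_indices(total_frames, step=8):
    # Indices of the frames kept when grabbing every `step`-th frame.
    return list(range(0, total_frames, step))

idx = sample_frame_indices(100, 8)
# 13 frames are kept: 0, 8, 16, ..., 96
```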

Step 2.2: clean the data set extracted in step 2.1, including deduplication, filtering and repair, obtaining 743 laying-hen position images.

Step 2.3: to improve the generalization ability of the model, expand the 743 images from step 2.2 with HSV augmentation, brightness augmentation and Mixup augmentation, obtaining a data set of 2229 images in total.

Step 2.4: annotate manually with LabelImg, generating txt files with the coordinate information of the cage-column and egg features; the images and their corresponding annotation files form the cage-column and egg recognition data set. Then, based on the manual annotations, crop out the cage-column and egg regions and save them by class as the cage-column and egg tracking data set.

Step 2.5: split the cage-column and egg recognition data set 8:1:1 into training, validation and test sets. In the training phase, the model computes the loss function by comparing predicted target boxes with the actual candidate boxes and optimizes the weights accordingly; in the validation phase, performance is evaluated in real time to avoid overfitting, and hyperparameters are adjusted to further optimize the model; finally, in the test phase, the test set verifies the robustness of the trained model and its performance in different situations.
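
The 8:1:1 split of step 2.5 can be sketched as a seeded shuffle followed by proportional slicing; the seed value and the floor-division rounding are assumptions.

```python
import random

def split_dataset(items, ratios=(8, 1, 1), seed=42):
    # Shuffle deterministically, then slice into train/val/test
    # proportionally to `ratios` (remainder goes to the test set).
    items = list(items)
    random.Random(seed).shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(range(2229))
# 1783 / 222 / 224 images from the 2229-image data set
```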

Step 2.6: split the cage-column and egg tracking data set 9:1 into training and test sets. The model learns target-tracking patterns and features from the training set, and repeated training and optimization gradually improve its ability to capture the motion, appearance and behaviour of the target objects; the test set determines whether the model can successfully track the target objects in practice.

Step 3. Construct a monitoring algorithm based on an improved DeepSort framework. The monitoring algorithm comprises a target detection module, a trajectory matching module, a trajectory generation module, and a positioning-and-counting module. The target detection module assigns numbers to the cage columns in the input video data and processes the video to generate detection information for each frame; the detection information comprises detection boxes obtained by recognition or prediction boxes obtained by prediction, where each detection box carries the trajectory matched with an egg together with the corresponding appearance features and trajectory motion parameters, and the prediction boxes are obtained by Kalman-filter prediction of unmatched trajectories. The trajectory matching module performs two rounds of IOU matching between the detection boxes of the current frame and the prediction boxes from the previous frame, and analyses the resulting cost matrix to output the corresponding linear matching relationships. The trajectory generation module performs cascade matching based on the linear matching relationships between the detection boxes and all prediction boxes, combined with the appearance features, trajectory motion parameters, and trajectories of the detection boxes, to generate trajectory images of the eggs. The positioning-and-counting module locates each laying-hen position according to the cage-column numbers and, combined with the egg trajectory images, produces the egg count for the corresponding position.
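The IOU matching at the heart of step 3 compares boxes by overlap; a minimal sketch of the IOU score and the resulting cost matrix (the Hungarian solver and the cascade-matching logic of DeepSort are not reproduced here) might look like:

```python
def iou(box_a, box_b):
    """IOU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_cost_matrix(detections, predictions):
    """Cost matrix (1 - IOU) between current-frame detection boxes
    and prediction boxes from the previous frame."""
    return [[1.0 - iou(d, p) for p in predictions] for d in detections]

costs = iou_cost_matrix([(0, 0, 2, 2)], [(0, 0, 2, 2), (10, 10, 12, 12)])
```

Running the matcher a second time on leftover detections and predictions, as the improved framework does, reuses this same cost computation.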

In this embodiment, the structure of the monitoring algorithm is shown in Figure 3:

First, the laying-hen-position target detection module improves the YOLO algorithm by replacing the ordinary two-dimensional convolutions (Conv) in several CBS, C3_x, and C3_1_F modules of the backbone and neck layers with the lightweight convolution GhostConv, yielding GhostCBS, GhostC3_x, and GhostC3_1f modules and greatly reducing the computational cost of the network; the backbone is then combined with the GAM global attention mechanism to strengthen the spatial and channel feature perception of cage columns and eggs, and the WIOUv3 loss function is used for target detection. Second, a second round of IOU matching is added to the DeepSort algorithm to reduce the loss of tracked targets and enable more accurate multi-target tracking.

The flow of the improved-YOLO laying-hen-position target detection module is as follows. The input position image has 3 channels and a size of 640×640. In the backbone, a CBS module with a 6×6 convolution kernel and stride 2 produces a 32×320×320 feature map; a GhostCBS with a 3×3 kernel and stride 2 produces a 64×160×160 feature map; the GhostC3_1 module produces a 64×160×160 feature map; a GhostCBS with a 3×3 kernel and stride 2 produces a 128×80×80 feature map; the GhostC3_2 module produces a 128×80×80 feature map; a GhostCBS with a 3×3 kernel and stride 2 produces a 256×40×40 feature map; the GhostC3_3 module produces a 256×40×40 feature map; a GhostCBS with a 3×3 kernel and stride 2 produces a 512×20×20 feature map; the GhostC3_1 module produces a 512×20×20 feature map; and the GAM global attention module followed by the SPPF spatial pyramid pooling module yields a 512×20×20 feature map. In the neck, a GhostCBS with a 1×1 kernel and stride 1 produces a 256×20×20 feature map; an Upsample module produces a 256×40×40 feature map, which a Concat module joins with the 256×40×40 backbone feature map to give a 512×40×40 feature map; the GhostC3_1f module produces a 256×40×40 feature map; a GhostCBS with a 1×1 kernel and stride 1 produces a 128×40×40 feature map; an Upsample module produces a 128×80×80 feature map, which a Concat module joins with the 128×80×80 backbone feature map to give a 256×80×80 feature map; the GhostC3_1f module produces a 128×80×80 feature map; a GhostCBS with a 3×3 kernel and stride 2 produces a 128×40×40 feature map, which a Concat module joins with the 128×40×40 feature map from the neck to give a 256×40×40 feature map; the GhostC3_1f module produces a 256×40×40 feature map; a 3×3 GhostCBS with stride 2 produces a 256×20×20 feature map, which a Concat module joins with the 256×20×20 feature map from the neck to give a 512×20×20 feature map; and the GhostC3_1f module produces a 512×20×20 feature map. The 128×80×80, 256×40×40, and 512×20×20 feature maps produced by the neck are converted by GhostConv layers with 256 output channels into 256×80×80, 256×40×40, and 256×20×20 feature maps and passed to the head layer. There, preset prior (anchor) boxes are used to compute the bounding-box regression loss for each pixel of the feature maps with the WIOUv3 loss function, yielding multi-dimensional arrays containing the object class, class confidence, bounding-box coordinates, width, and height. Confidence and IOU thresholds filter out redundant and invalid entries in the arrays, and a non-maximum suppression algorithm eliminates the remaining redundant prediction boxes, producing the detection information for the cage columns and eggs.

The GhostConv flow used in the GhostC3, GhostC3_x, and GhostC3_1f modules is: input feature 1 passes through a 1×1 convolution to obtain feature 2; feature 2 is linearly transformed by a 5×5 convolution to obtain feature 3; finally, features 2 and 3 are concatenated and output as feature 4. Through grouped convolution and cheap linear operations, the GhostConv module generates more feature maps from fewer parameters, greatly reducing model complexity and the number of network parameters while increasing inference speed.
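The parameter saving of the Ghost design can be illustrated by counting weights. The sketch below assumes, for illustration only, that the primary 1×1 convolution produces half the output channels and a depthwise 5×5 "cheap" transform generates the other half, matching the 1×1/5×5 flow described above:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_conv_params(c_in, c_out, k=1, cheap_k=5):
    """Ghost-style layer: a primary k x k conv produces half the output
    channels, then a depthwise cheap_k x cheap_k linear transform
    generates the other half (illustrative split)."""
    primary = c_in * (c_out // 2) * k * k       # primary convolution
    cheap = (c_out // 2) * cheap_k * cheap_k    # depthwise cheap transform
    return primary + cheap

standard = conv_params(128, 256, 3)             # ordinary 3x3 Conv
ghost = ghost_conv_params(128, 256)             # GhostConv replacement
```

For a 128-to-256-channel layer the Ghost variant needs an order of magnitude fewer weights than the ordinary 3×3 convolution it replaces.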

The flow of the GAM global attention module is as follows: an input image feature 1 with C channels and spatial size W×H is permuted into a W×H×C feature map to obtain feature 2; a two-layer multi-layer perceptron converts it back into a C×W×H feature map to obtain feature 3; Sigmoid processing is applied and the output map is multiplied element-wise with the input feature map to obtain feature 4. Feature 4 is passed through a convolution with a 7×7 kernel to obtain feature 5 with C/r channels, where the parameter r sets the compression ratio of the channel and spatial branches; another 7×7 convolution restores a C×W×H feature 6; finally, the Sigmoid output map is multiplied element-wise with feature 4 to obtain feature 7. Through these steps, the GAM global attention module better captures the dependencies between channel and spatial dimensions in the cage-column and egg feature maps, significantly improving model performance.
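A minimal NumPy sketch of the channel branch of this flow (permute, two-layer MLP over channels, sigmoid gate, element-wise multiply) is shown below; the 7×7 spatial branch of the full module is omitted, and the random MLP weights are placeholders for learned parameters:

```python
import numpy as np

def gam_channel_attention(x, r=4, seed=0):
    """Simplified GAM channel branch on a (C, H, W) feature map:
    permute to (H, W, C), apply a two-layer MLP over the channel axis,
    apply a sigmoid gate, and multiply element-wise with the input.
    The 7x7 spatial branch of the full module is not reproduced here."""
    c = x.shape[0]
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c, c // r)) * 0.1   # C -> C/r compression
    w2 = rng.standard_normal((c // r, c)) * 0.1   # C/r -> C expansion
    perm = np.transpose(x, (1, 2, 0))             # (H, W, C), "feature 2"
    hidden = np.maximum(perm @ w1, 0.0)           # MLP layer 1 with ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gate, (H, W, C)
    return x * np.transpose(gate, (2, 0, 1))      # gate applied in (C, H, W)

feat = np.ones((8, 4, 4))
out = gam_channel_attention(feat)
```

Because the gate is a sigmoid, the module rescales each channel-spatial location by a learned weight in (0, 1) rather than replacing the feature values.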

Step 4. Train the monitoring algorithm with the dataset to obtain a monitoring model for monitoring the egg yield of the laying-hen positions.

The loss function used in this embodiment is expressed as follows:

L_WIoUv1 = R_WIoU · L_IoU    (3)

where x_g and y_g denote the width and height of the predicted box, x_gt and y_gt the width and height of the ground-truth box, and W_g and H_g the width and height of the smallest rectangle enclosing both the predicted and ground-truth boxes; the superscript * marks quantities excluded from gradient computation, which effectively eliminates factors that hinder convergence. R_WIoU is the WIoU penalty term that amplifies the loss of ordinary-quality anchor boxes; β is the outlier degree, used to assign a gradient weight gain to high-quality anchor boxes; r′ is the gradient gain; α and δ are hyperparameters; and L_IoU is the IOU loss.
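Equation (3) scales the IOU loss by a distance-based penalty. A sketch of the WIoU v1 computation, following the published WIoU formulation with R_WIoU = exp(((x − x_gt)² + (y − y_gt)²)/(W_g² + H_g²)) and boxes given as (center_x, center_y, w, h), might look like (the detachment marked by * is implied here by treating W_g and H_g as constants):

```python
import math

def iou_centered(b1, b2):
    """IOU of boxes given as (cx, cy, w, h)."""
    ax1, ay1 = b1[0] - b1[2] / 2, b1[1] - b1[3] / 2
    ax2, ay2 = b1[0] + b1[2] / 2, b1[1] + b1[3] / 2
    bx1, by1 = b2[0] - b2[2] / 2, b2[1] - b2[3] / 2
    bx2, by2 = b2[0] + b2[2] / 2, b2[1] + b2[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union if union > 0 else 0.0

def wiou_v1(pred, gt):
    """L_WIoUv1 = R_WIoU * L_IoU (equation (3))."""
    l_iou = 1.0 - iou_centered(pred, gt)
    # smallest rectangle enclosing both boxes
    wg = max(pred[0] + pred[2] / 2, gt[0] + gt[2] / 2) - min(pred[0] - pred[2] / 2, gt[0] - gt[2] / 2)
    hg = max(pred[1] + pred[3] / 2, gt[1] + gt[3] / 2) - min(pred[1] - pred[3] / 2, gt[1] - gt[3] / 2)
    # distance-based penalty term R_WIoU
    r_wiou = math.exp(((pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2) / (wg ** 2 + hg ** 2))
    return r_wiou * l_iou
```

A perfectly matching prediction yields zero loss, while a displaced box is penalized both by the reduced overlap and by the exponential center-distance term.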

Training hyperparameters of the target detection module: training device RTX 3080; input image size 640×640; batch size 8; Adam optimizer for model gradient optimization; early stopping patience of 150 epochs; initial learning rate 0.01; momentum factor 0.937; optimizer weight decay 0.001; 300 training epochs.

Hyperparameters of the monitoring model: minimum confidence threshold 0.2; maximum IOU matching threshold 0.7; number of frames a trajectory is held in its initialization phase 3; maximum number of consecutively lost frames 50.
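Collected as a configuration sketch, the two hyperparameter sets above might be written as follows (the key names follow common YOLO/DeepSort conventions and are illustrative, not taken from the patent):

```python
# Detector training settings (values from the embodiment above)
detector_hparams = {
    "device": "RTX3080",
    "img_size": 640,
    "batch_size": 8,
    "optimizer": "Adam",
    "early_stop_patience": 150,
    "lr0": 0.01,
    "momentum": 0.937,
    "weight_decay": 0.001,
    "epochs": 300,
}

# Tracker (monitoring model) settings
tracker_hparams = {
    "min_confidence": 0.2,   # minimum detection confidence
    "max_iou_distance": 0.7, # maximum IOU matching threshold
    "n_init": 3,             # frames held during trajectory initialization
    "max_age": 50,           # maximum consecutive lost frames
}
```

Keeping both sets in one configuration file makes the detector and tracker settings easy to reproduce across experiments.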

In addition, Table 1 compares the performance of the monitoring model in this embodiment with that of other models.

Method        P/%   R/%   mAP@0.5/%  F1_score  Params  FPS
SSD           95.9  86.3  96.3       90.08     13.64   77.5
Faster R-CNN  96.2  86.8  96.5       91.26     136.2   56.1
YOLOv5s       97.8  97.2  98.3       97.50     7.01    91.5
YOLOv8        98.8  97.6  98.6       98.20     11.7    79.3
DETR          97.6  97.3  97.6       97.45     27.98   64.1
Ours          99.1  98.0  99.2       98.55     5.96    113.4

Step 5. Obtain video data of the laying-hen positions to be counted and input it into the monitoring model to obtain the positioning numbers and egg yields of the corresponding positions.

In this embodiment, the positioning and counting process shown in Figure 4 is as follows:

Step 5.1. Obtain video frames from the chicken farm. The monitoring model tracks the cage columns and eggs in the upper and lower tiers and checks the condition for LROI generation: if more than two cage-column features are tracked in the upper and lower tiers, go to step 5.2; otherwise go directly to step 5.3.

Step 5.2. For the current frame, obtain the means of the top-left and top-right coordinates of the two cage-column tracking numbers (p_n1, p_n1+1) of the upper-tier laying-hen position and of the two cage-column tracking numbers (p_n2, p_n2+1) of the lower-tier position, and draw the upper-tier region of interest LROI1 and the lower-tier region of interest LROI2. The drawing formula is:

where LROI1(X_min, X_max), LROI1(Y_min, Y_max), LROI2(X_min, X_max), and LROI2(Y_min, Y_max) are the horizontal- and vertical-coordinate ranges used to generate LROI1 and LROI2; min and max are the minimum and maximum functions, and sum is the summation function.
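The patent's exact LROI formulas are rendered as images in the original and are not reproduced here; the sketch below is only a plausible illustration of the idea, building a region of interest from the averaged top-corner coordinates of two tracked cage columns and an assumed tier height:

```python
def lroi_bounds(col_a, col_b, height):
    """Illustrative LROI construction (not the patent's exact formula):
    each cage column is given by the means of its top-left and top-right
    corner coordinates ((x_l, y_l), (x_r, y_r)); the LROI spans
    horizontally between the two columns and extends downward from the
    averaged column tops by the tier height."""
    xs = [col_a[0][0], col_a[1][0], col_b[0][0], col_b[1][0]]
    ys = [col_a[0][1], col_a[1][1], col_b[0][1], col_b[1][1]]
    x_min, x_max = min(xs), max(xs)
    y_min = sum(ys) / len(ys)
    return x_min, x_max, y_min, y_min + height

# two tracked columns, with a hypothetical tier height of 120 pixels
bounds = lroi_bounds(((100, 50), (110, 50)), ((300, 52), (310, 52)), height=120)
```

In the actual method the height would come from the h_1/h_2 calculation described below, which maps the physical tier height into pixels.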

In addition, h_1 and h_2 are computed as:

where H is the actual height of the laying-hen position, f is the focal length, d is the actual distance from the lens to the position, Q_1 and Q_2 are the angles from the lens to the lower-tier position, and E is the actual size represented by one pixel.

Step 5.3. Judge the cage-column features of the generated LROI1 and LROI2. If no new cage-column number is detected, i.e., the tracking number p_t,n of the last cage column in the current frame t already exists in the cage-column tracking-number list P_t-1 of the previous frame, the numbering is not updated; otherwise the generated LROI1 and LROI2 are numbered incrementally. For the n-th LROI1/LROI2 number l_t,n of the upper and lower tiers in the current frame t, the calculation formula is:

where p_t,n is the tracking number of the n-th cage column in frame t, P_t = {p_t,1, p_t,2, ..., p_t,n} is the list of tracking numbers assigned to the cage columns in frame t, 1_L_t = {1_l_t,1, 1_l_t,2, ..., 1_l_t,n} is the cage-position number list of frame t for the first tier, and 2_L_t = {2_l_t,1, 2_l_t,2, ..., 2_l_t,n} is the cage-position number list of frame t for the second tier. The numbers of eggs in LROI1 and LROI2 are then computed separately; the number of eggs e_t,n in the LROI1/LROI2 of number n in frame t is given by:
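One illustrative reading of the numbering rule in step 5.3 is sketched below (the function name and data layout are assumptions, not from the patent): a new incremental number is issued only when a cage-column tracking id appears that was not in the previous frame's list.

```python
def update_lroi_numbers(prev_col_ids, cur_col_ids, numbers, next_number):
    """Assign incremental LROI numbers only when a new cage-column
    tracking id appears; ids already seen keep their numbers."""
    for col_id in cur_col_ids:
        if col_id not in prev_col_ids and col_id not in numbers:
            numbers[col_id] = next_number
            next_number += 1
    return numbers, next_number

# column 2 persists from the previous frame; column 3 is new
nums, nxt = update_lroi_numbers({1, 2}, [2, 3], {1: 1, 2: 2}, 3)
```

This keeps the position numbering stable while the inspection camera pans along the cage row.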

where the region term denotes the LROI area numbered n in frame t, x is the total number of eggs detected in the current frame image, (x_t,i, y_t,i) are the center-point coordinates of the i-th egg in frame t, and the indicator functions equal 1 when x_t,i and y_t,i lie within the LROI_n region.
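The indicator-function sum reduces to counting detected egg centers that fall inside the region; a minimal sketch (the rectangle representation is an assumption for illustration):

```python
def count_eggs_in_lroi(egg_centers, lroi):
    """e_{t,n}: number of detected egg centers inside the LROI,
    where lroi = (x_min, x_max, y_min, y_max)."""
    x_min, x_max, y_min, y_max = lroi
    return sum(1 for (x, y) in egg_centers
               if x_min <= x <= x_max and y_min <= y <= y_max)

# three detections; the third egg lies outside the region
centers = [(120, 80), (250, 90), (400, 85)]
eggs = count_eggs_in_lroi(centers, (100, 300, 60, 180))
```

The same count is produced independently for every numbered LROI1 and LROI2 in the frame.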

Step 5.4. To better estimate the number of eggs in each laying-hen position, so that frames with large fluctuations have less influence on the predicted value, a weighted average is computed after the video ends over the egg-count values of all frames in each LROI1 and LROI2 region, yielding the predicted egg yield of each numbered position.
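The patent does not specify the exact weighting scheme; one plausible sketch that downweights frames whose count deviates from the median (a hypothetical choice consistent with "frames with large changes have less impact") is:

```python
def weighted_egg_estimate(frame_counts):
    """Illustrative weighted average over per-frame egg counts for one LROI:
    frames whose count deviates from the median receive smaller weights.
    (The weighting scheme is an assumption, not the patent's formula.)"""
    counts = sorted(frame_counts)
    n = len(counts)
    median = counts[n // 2] if n % 2 else (counts[n // 2 - 1] + counts[n // 2]) / 2
    weights = [1.0 / (1.0 + abs(c - median)) for c in frame_counts]
    total = sum(w * c for w, c in zip(weights, frame_counts))
    return total / sum(weights)

# four stable frames and one outlier frame
estimate = weighted_egg_estimate([3, 3, 3, 7, 3])
```

With this weighting the single outlier frame pulls the estimate only slightly above the stable value of 3, rather than to the plain mean of 3.8.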

As shown in Figure 5, the stacked-cage laying-hen position egg-laying monitoring system provided in this embodiment is implemented by the monitoring method of the above example. It comprises a chicken-farm management unit, an egg-laying monitoring unit, and a system management unit. The chicken-farm management unit covers farm environment monitoring, feeding management, and personnel management; the egg-laying monitoring unit covers cage monitoring, egg-yield monitoring, and abnormality alarms; and the system management unit comprises a system settings part for configuring the chicken-farm management unit and the egg-laying monitoring unit, a system user management part for managing accounts, and an overall data management part for storing daily monitoring results.

The monitoring system in this embodiment adopts the Spring Boot architecture; the front end is implemented with the Vue framework, the database is MySQL, and JPA, a specification for persisting entity objects to a relational database, is used for data access. The system architecture consists of a View layer, a Controller layer, a Service layer, and a DAO layer: the View layer visualizes the egg-yield data of each laying-hen position and handles interaction with farm managers; the Controller layer receives managers' requests; the Service layer processes them; and the DAO layer interacts with the database. At the same time, the monitoring model is deployed to the edge computing device of the inspection equipment: an X001 monocular industrial camera captures video of the on-site stacked-cage laying-hen positions and transmits it over a CSI interface to a Raspberry Pi, which computes real-time egg-yield statistics for each position and sends the results to the MySQL database. Using the intelligent laying-hen position egg-laying monitoring system, farm managers monitor and analyze the egg-laying results of each position in the database in real time, formulate more scientific and effective farm management strategies, remove abnormal laying hens, and improve egg yield and egg quality.

To verify the robustness of the invention in real-world scenarios, field experiments were carried out in a commercial farm, testing caged laying-hen environments at different times and under different illumination levels. Table 2 presents the overall real-time monitoring results of the monitoring model for stacked-cage laying-hen position egg laying during production.

As shown in Figures 6 and 7, the bar charts of position egg-laying data and mis-detected cages for test number 1 use the horizontal axis for cage numbers and the vertical axis for the corresponding egg counts: 505 cages were detected against 505 actual cages, with 0 cages missed and egg counts mis-detected in 6 cages.

As shown in Figures 8 and 9, the bar charts of position egg-laying data and mis-detected cages for the second sampled video use the horizontal axis for cage numbers and the vertical axis for the corresponding egg counts: 544 cages were detected against 544 actual cages, with 0 cages missed and egg counts mis-detected in 4 cages.

As shown in Figures 10 and 11, the bar charts of position egg-laying data and mis-detected cages for the third sampled video use the horizontal axis for cage numbers and the vertical axis for the corresponding egg counts: 524 cages were detected against 525 actual cages, with 1 cage missed and egg counts mis-detected in 7 cages.

As shown in Figures 12 and 13, the bar charts of position egg-laying data and mis-detected cages for the fourth sampled video use the horizontal axis for cage numbers and the vertical axis for the corresponding egg counts: 577 cages were detected against 578 actual cages, with 1 cage missed and egg counts mis-detected in 4 cages.

Under the specified typical "124" production mode, the monitoring results of the invention show very high accuracy: the laying-hen position counting accuracy reaches 99% and the position localization accuracy reaches 99.9%, indicating that the proposed method maintains a high level of monitoring quality in practical applications.

Claims (10)

1. A computer-vision-based method for monitoring the position egg laying of stacked cage-raised laying hens, characterized by comprising the following steps:
step 1, obtaining video data of two tiers of laying-hen positions through inspection equipment, wherein the inspection equipment comprises two monocular cameras with the same overhead viewing angle and different heights;
step 2, capturing the acquired video data frame by frame to construct a video image set, labeling the cage columns and eggs in the images of the video image set, and forming a data set from the video image set and the labels;
step 3, constructing a monitoring algorithm based on an improved DeepSort framework, the monitoring algorithm comprising a target detection module, a trajectory matching module, a trajectory generation module and a positioning-and-counting module, wherein the target detection module is used for calibrating the numbers of the cage columns in the input video data and identifying the input video data to generate detection information for each frame of image, the detection information comprising detection frames obtained by recognition or prediction frames obtained by prediction, each detection frame comprising the trajectory matched with an egg together with the corresponding appearance features and trajectory motion parameters, and the prediction frames being obtained by Kalman-filter prediction of unmatched trajectories; the trajectory matching module performs two rounds of IOU matching between the detection frames of the current frame and the prediction frames obtained from the previous frame and performs cost-matrix analysis based on the matching results to output corresponding linear matching relationships; the trajectory generation module performs cascade matching according to the linear matching relationships between the detection frames and all prediction frames, combined with the appearance features, trajectory motion parameters and trajectories corresponding to the detection frames, to generate trajectory images of the eggs; and the positioning-and-counting module locates each laying-hen position according to the cage-column numbers and, combined with the trajectory images of the eggs, generates the egg count for the corresponding position;
step 4, training the monitoring algorithm with the data set to obtain a monitoring model for monitoring the egg yield of the laying-hen positions;
and step 5, acquiring video data of the laying hen positions to be counted, and inputting the video data into the monitoring model to obtain the positioning numbers and the egg yield of the corresponding laying hen positions.
2. The method for monitoring the position egg laying of stacked cage-raised laying hens based on computer vision according to claim 1, wherein calibration and de-distortion processing of the two monocular cameras is required before the inspection equipment is deployed.
3. The method for monitoring the position egg laying of stacked cage-raised laying hens based on computer vision according to claim 2, wherein the expression of the calibration and de-distortion processing is as follows:
wherein Z_c is a scale factor, u and v are the abscissa and ordinate of the laying-hen position in the pixel coordinate system, f_x and f_y are the normalized focal lengths of the laying-hen position on the x axis and the y axis, R_3×3 is the rotation matrix obtained from camera calibration, X_W, Y_W and Z_W are the world coordinates, the distortion-corrected coordinates of the laying-hen position are denoted by the corrected coordinate vector, k_1 and k_2 are radial distortion coefficients, p_1 and p_2 are tangential distortion coefficients, r is the radius of curvature, and the remaining coordinate vector denotes the coordinates of the laying-hen position after distortion.
4. The method for monitoring the position egg laying of stacked cage-raised laying hens based on computer vision according to claim 1, wherein the construction process of the video image set is as follows:
intercepting the acquired video data at a preset frame number by using a video capture function to obtain an initial image set;
and performing data cleaning and data expansion on the obtained initial image set to obtain a video image set containing the laying-hen positions.
5. The method for monitoring the position egg laying of stacked cage-raised laying hens based on computer vision according to claim 1, wherein the target detection module of the monitoring algorithm introduces a global attention module, and the global attention module performs multi-dimensional attention operations on each frame of the input video data and multiplies the features of different dimensions element-wise to output feature-enhanced image data.
6. The method for monitoring the position egg laying of stacked cage-raised laying hens based on computer vision according to claim 1, wherein the trajectory matching module solves the cost matrix with the Hungarian algorithm to obtain the corresponding linear matching relationships, which comprise trajectory mismatches, unmatched detection frames, and successful matches between detection frames and prediction frames;
and if the matching result is a trajectory mismatch or an unmatched detection frame, IOU matching is repeated until an end condition is reached.
7. The method for monitoring the position egg laying of stacked cage-raised laying hens based on computer vision according to claim 1, wherein the positioning-and-counting module generates an upper-tier region of interest LROI1 and a lower-tier region of interest LROI2 from the upper-tier cage-column coordinates, numbers LROI1 and LROI2 in real time, counts the eggs in the LROI1 and LROI2 regions, and, after the video ends, performs a weighted average over the egg-count values of all frames in each LROI1 and LROI2 region to obtain the final observed egg yield of each laying-hen position.
8. The method for monitoring the regional egg production of stacked caged layer based on computer vision according to claim 1, wherein the target detector of the monitoring algorithm is trained using a WIOU loss function to update parameters of the monitoring algorithm.
9. The computer-vision-based method for monitoring the position egg laying of stacked cage-raised laying hens according to claim 8, wherein the expression of the WIOU loss function is as follows:
L_WIoUv1 = R_WIoU · L_IoU    (3)
wherein x_g and y_g denote the width and height of the predicted box, x_gt and y_gt the width and height of the ground-truth box, and W_g and H_g the width and height of the smallest rectangle formed by combining the predicted and ground-truth boxes; the superscript * marks quantities excluded from gradient computation, effectively eliminating factors that hinder convergence; R_WIoU is the WIoU penalty term that amplifies the loss of ordinary-quality anchor boxes; β is the outlier degree, used to assign a gradient weight gain to high-quality anchor boxes; r′ is the gradient gain; α and δ are hyperparameters; and L_IoU is the IOU loss.
10. A stacked cage-raising laying-hen position egg-laying monitoring system, characterized by comprising a chicken farm management unit, an egg-laying monitoring unit and a system unit, and implemented by the stacked cage-raising laying-hen position egg-laying monitoring method according to any one of claims 1 to 9;
the chicken farm management unit comprises chicken farm environment monitoring, chicken farm raising management and chicken farm personnel management;
the egg laying monitoring unit comprises chicken coop monitoring, egg yield monitoring and abnormal alarm;
the system management unit comprises a system setting part for allocating the chicken farm management unit and the egg production monitoring unit, a system user management part for managing accounts and a total data management part for storing daily monitoring results.
CN202311435584.0A 2023-10-30 2023-10-30 Stacked cage-raising laying hen zone-position egg-laying monitoring method and system based on computer vision Pending CN117351430A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311435584.0A CN117351430A (en) 2023-10-30 2023-10-30 Stacked cage-raising laying hen zone-position egg-laying monitoring method and system based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311435584.0A CN117351430A (en) 2023-10-30 2023-10-30 Stacked cage-raising laying hen zone-position egg-laying monitoring method and system based on computer vision

Publications (1)

Publication Number Publication Date
CN117351430A true CN117351430A (en) 2024-01-05

Family

ID=89362983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311435584.0A Pending CN117351430A (en) 2023-10-30 2023-10-30 Stacked cage-raising laying hen zone-position egg-laying monitoring method and system based on computer vision

Country Status (1)

Country Link
CN (1) CN117351430A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118196699A (en) * 2024-02-05 2024-06-14 中国科学院自动化研究所 Automatic behavior monitoring method and device for cage-raising laying hens and electronic equipment
CN118570736A (en) * 2024-06-27 2024-08-30 玖兴农牧(涞源)有限公司 A whole-process management method and system for broiler slaughtering based on segmentation model
CN119228898A (en) * 2024-12-05 2024-12-31 浙江大学 A method and device for estimating residual material in a feed trough of a stacked cage laying hen house
CN119228898B (en) * 2024-12-05 2025-04-01 浙江大学 Method and device for estimating surplus materials of feed slots of layered cage-raising laying hens
CN119302243A (en) * 2024-12-16 2025-01-14 华南农业大学 Device and method for automatically measuring egg-laying performance of individual geese in small groups of families and cages

Similar Documents

Publication Publication Date Title
CN117351430A (en) Stacked cage-raising laying hen zone-position egg-laying monitoring method and system based on computer vision
CN109785337B (en) A method of counting mammals in pen based on instance segmentation algorithm
CN108416378B (en) A large-scene SAR target recognition method based on deep neural network
CN108830144B (en) Lactating sow posture identification method based on improved Faster-R-CNN
Bai et al. Automated construction site monitoring based on improved YOLOv8-seg instance segmentation algorithm
CN107818571A (en) Ship automatic tracking method and system based on deep learning network and average drifting
CN111339839B (en) Intensive target detection metering method
CN109684906B (en) Method for detecting red fat bark beetles based on deep learning
CN113657287B (en) A target detection method based on deep learning to improve YOLOv3
CN117409339A (en) Unmanned aerial vehicle crop state visual identification method for air-ground coordination
CN110766690B (en) Wheat ear detection and counting method based on deep learning point supervision thought
CN114624715B (en) A radar echo extrapolation method based on self-attention spatiotemporal neural network model
CN112541403B (en) Indoor personnel falling detection method by utilizing infrared camera
CN111079518A (en) Fall-down abnormal behavior identification method based on scene of law enforcement and case handling area
Sun et al. FBoT-Net: Focal bottleneck transformer network for small green apple detection
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN109934170A (en) A Computer Vision-Based Mine Resource Statistical Method
CN111046756A (en) Convolutional neural network detection method for high-resolution remote sensing image target scale features
CN112215873A (en) A method for tracking and locating multiple targets in a substation
CN119832602A (en) Hyperspectral detection method and hyperspectral detection system for low-yield laying hens
Hu et al. Automatic detection of pecan fruits based on Faster RCNN with FPN in orchard
Pu et al. Multi-Target spraying behavior detection based on an improved YOLOv8n and ST-GCN model with Interactive of video scenes
CN114255342A (en) Improved YOLOv 4-based onshore typical target detection method
CN119007292A (en) Human body action recognition method based on infrared image
Qin et al. A deep learning method based on YOLOv5 and SuperPoint-SuperGlue for digestive disease warning and cage location backtracking in stacked cage laying hen systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination