CN113989538B - Chicken flock uniformity estimation method, device, system and medium based on depth image - Google Patents
Chicken flock uniformity estimation method, device, system and medium based on depth image
- Publication number
- CN113989538B CN202111062601.1A
- Authority
- CN
- China
- Prior art keywords
- weight
- image
- network
- data set
- dataset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a depth-image-based chicken flock uniformity estimation method, device, system and medium. The method comprises the following steps: obtaining a broiler depth image library; constructing a target segmentation dataset and a weight classification dataset from the broiler depth image library; building a target segmentation network and a weight classification network; training the target segmentation network with the target segmentation dataset to generate a target mask image dataset and a rectangular cropped image dataset; training the weight classification network with the target mask image dataset, the rectangular cropped image dataset and the weight classification dataset; and estimating images to be measured with the trained weight classification network to obtain weight classes and corresponding weight intervals, then calculating flock uniformity from the flock data and the standard day-age body weight of the broilers. The invention uses a deep learning network to extract image features automatically and perform weight classification, increases the number of classes so that weight is estimated within a 50 g error, and finally calculates chicken flock uniformity from the estimated weights.
Description
Technical Field
The invention relates to a chicken flock uniformity estimation method, device, system and medium based on a depth image, belonging to the fields of computer vision, machine learning, deep learning and software development.
Background
The traditional method for measuring broiler weight mainly uses electronic weighing equipment. Although such instruments can measure a chicken's weight accurately, broilers struggle frequently during weighing, the weighing platform shakes noticeably, and the readings drift away from the true weight as dirt accumulates. Weighing also requires direct contact with the broilers, which increases their stress response, leaves them in poor condition in the subsequent rearing environment and, in severe cases, causes death; this is detrimental to broiler welfare farming [1-2]. In addition, the broilers must be pulled out of the coop or cage by hand for weighing, which consumes considerable manpower and material resources and drives up rearing costs, so flock uniformity is seldom calculated. As a result the farmer cannot obtain in time an important index for judging whether the flock is growing normally, the flock may fall into a sub-healthy or unhealthy state, diseases occur frequently, and the feed-to-meat ratio is adversely affected.
Chicken flock uniformity covers three aspects: uniformity of body conformation, uniformity of body weight and uniformity of sexual maturity. Body conformation uniformity is essentially fixed during the early brooding stage and determines body weight uniformity; body weight uniformity can be improved by grouping, group adjustment and feed regulation, and in turn determines the uniformity of sexual maturity. The three are closely related and together constitute the complete concept of uniformity. In chicken production it is very important to improve flock uniformity, and grouping management is an important measure for doing so: when strengthening daily rearing management, flocks with poor uniformity must be regrouped reasonably. Untimely grouping and group adjustment is one of the main causes of reduced uniformity. Discovering poor uniformity during grouping or daily inspection requires weighing; even when only a sample of the flock (10%-20%) is weighed, a large amount of manpower and material resources is still needed, which raises breeding costs.
In addition, traditional two-dimensional digital image processing acquires limited object information, is sensitive to the surrounding environment, and gives inaccurate monitoring precision. In the field of computer vision, three-dimensional acquisition techniques have therefore been favored by researchers at home and abroad: compared with a traditional two-dimensional color image, a depth image is not affected by environmental factors such as external illumination and simplifies later image processing, so it has been widely applied to behavior monitoring and weight prediction of large animals [3-5].
Since the 21st century, electronic information technology has developed rapidly, and its use in many industries has changed how those industries develop. At present, many livestock and poultry specialists are studying how to combine information technology with poultry farming to make the industry automated and intelligent and thereby greatly improve the quality and yield of poultry products. For poultry behavior analysis and weight estimation, the small body size, many joints, variable movement patterns and wide activity range of poultry make many fine movements of an individual difficult for a camera to capture and analyze, so current poultry analysis research focuses mainly on group behavior. Weighing is common in poultry farming; the traditional method is manual weighing, which is labor-intensive and costly, and its accuracy suffers from various external factors. With the development of computer vision, it is increasingly applied to animal behavior and weight monitoring; it is low-cost, requires no direct contact with the animals and allows continuous shooting, and is becoming an important research method.
Currently, computer vision research on poultry covers two main aspects. 1) Monitoring posture changes: posture change is a very important feature when analyzing poultry behavior and carries information such as position, posture, speed and contour. Abnormal behavior indicates an abnormal living environment or a health problem, so many scholars at home and abroad have begun to identify individual and group poultry behavior automatically with digital image processing. Lao Fengdan et al. [6] proposed analyzing and recognizing captured laying-hen images with machine vision; their method accurately recognizes nine behaviors of a single hen, including drinking, feeding, shaking, resting, wing beating and wing lifting. Lao Fengdan et al. [7] then used one 3D camera to synchronously collect two-dimensional and depth image data of laying hens in an experimental environment and developed intelligent hen-behavior recognition software whose average recognition accuracy for feeding, lying, standing and sitting exceeds 80%. 2) Poultry weight estimation: this is mainly applied during broiler growth, because body weight reflects growth condition, feed conversion rate and uniformity and is one of the important indices by which breeders evaluate broiler growth. Broiler weighing is usually done manually; its most serious problem is the direct contact required, which produces a stress response in the broilers and, when severe, causes death. With the rapid development of computer vision, many scholars capture two-dimensional and depth images of animals, identify the target, extract features, compute weight-related data such as volume, and build mathematical models to estimate poultry weight [8-9]. Mollah et al. [10] photographed broilers with a camera, developed a new image-processing technique for the resulting two-dimensional color images, extracted the broiler surface area from the image and established a linear equation between the surface area in the image and the actual weight; experiments showed an average relative error of 0.04%-16.47% when estimating broiler weight with this technique. De Wet et al. [11] analyzed the correlation between body weight and the two features of broiler surface area and perimeter, also using image processing; their results show an average relative error of body weight estimation of only 11%. These experiments fully demonstrate that poultry weight can be estimated with machine vision.
Conventionally, estimating flock uniformity requires catching each broiler, measuring it manually with a hanging scale or electronic balance, and then computing statistics to obtain the flock uniformity information. The invention aims to predict the average flock weight with new technologies such as computer vision and machine learning, obtain flock uniformity automatically, simplify the operation, make it easier for flock managers to manage the flock, improve overall broiler quality, reduce labor input and lower breeding costs.
The foreign scholars Anders Krogh Mortensen et al. [12] have estimated broiler weight from depth images. After extracting 12 features from the depth map, they carried out modeling experiments with multiple linear regression, a feed-forward neural network and a Bayesian artificial neural network. The results show an average relative error of 7.8% between the predicted weight and the reference weight; the model diagram of their data acquisition system is shown in Fig. 1, and the flow chart of broiler weight estimation from depth images is shown in Fig. 2.
Their image-processing pipeline uses a watershed algorithm for broiler image segmentation. The model uses day age as a one-dimensional feature; projection area, broiler width, perimeter, maximum inscribed circle radius and eccentricity as two-dimensional features; and volume, convex volume, surface area, convex surface area, back width and back height as three-dimensional features. Modeling estimates were made with multiple linear regression, a feed-forward neural network and a Bayesian artificial neural network, the best average relative error being 7.8%.
The domestic scholars Wang Lin et al. [13] also used depth images taken with a Kinect camera together with a BP neural network to build a model estimating broiler body mass and to fit a broiler growth model. The broiler depth images are processed by image selection and cropping, median filtering, Otsu threshold segmentation and binarization, extraction of the largest target by object recognition, and morphological opening-closing reconstruction. Nine features were then extracted from the depth map, but unlike the foreign work of Anders Krogh Mortensen, Wang Lin et al. used a BP neural network (a typical supervised neural-network classifier) for weight modeling, reporting accuracy up to 99.43%; their image-processing flow and target-extraction result are shown in Figs. 3 and 4, respectively.
Nine features (day age, length and width of the target's minimum bounding rectangle, projection area, contour perimeter, maximum inscribed circle radius, eccentricity, volume and back width) were combined with the BP neural network to estimate the body mass of group-housed broilers. Comparing the estimates with actual measurements, the root mean square error is 0.048, the average relative error is 3.3%, the absolute error lies within 0.0010-0.0682 kg, and the best degree of fit is 0.9943. In a broiler weight-grading experiment, a support vector machine (SVM) and an RBF neural network were used for three-level grading and verification of broiler weight. With the nine extracted features, the grading accuracies of the SVM and the RBF neural network on the test set reach 94.17% and 86.39%, respectively. The weight-grading effect is good and meets the requirement of automatic weight grading from broiler images.
In summary, the prior art has the following disadvantages: 1) the image acquisition equipment is fixed in a laboratory and cannot be deployed in a real production environment; 2) image processing does not make full use of depth information, and target recognition and segmentation are neither complete nor efficient; 3) effective feature extraction is insufficient, and model performance still has room for improvement; 4) better models could be used to fit and estimate weight; 5) no client system is provided for flock managers to track the information. The reasons are: 1) no complete set of equipment and feasible scheme has been designed that can really be applied in a production environment; 2) chickens move a great deal and their behavior under the camera is not controlled; behaviors such as shaking and wing spreading often occur and make the classification or estimation results inaccurate.
The references are as follows:
[1] Zhang Jie, Zhang Denghui. Comparison of the current state of animal welfare at home and abroad and reflections [J]. Journal of Animal Husbandry, 2013, 32(1): 36-38.
[2] Zhao Ziguang. Influence of feeding mode on the welfare status of broiler chickens [D]. Harbin: Northeast Agricultural University, 2011.
[3] Kongsro J. Estimation of pig weight using a Microsoft Kinect prototype imaging system [J]. Computers & Electronics in Agriculture, 2014, 109: 32-35.
[4] Guo Hao, Wang Peng, Ma Qin, et al. Depth image-based cow body type assessment index acquisition technique [J]. Journal of Agricultural Machinery, 2013, 44(S1): 273-276.
[5] Liu Bo, Zhu Weixing, Yang Jianjun, et al. Live pig step frequency feature extraction based on depth image and pig skeleton endpoint analysis [J]. Journal of Agricultural Engineering, 2014, 30(10): 131-137.
[6] Lao Fengdan, Teng Guanghui, Li Jun, et al. Machine vision identification of individual laying-hen behaviors [J]. Journal of Agricultural Engineering, 2012, 28(24): 157-163.
[7] Lao Fengdan, Du Xiaodong, Teng Guanghui. Depth image based laying-hen behavior recognition method [J]. Journal of Agricultural Machinery, 2017, 48(1): 156-162.
[8] Shen Mingxia, Liu Longshen, et al. Development of information monitoring technology for individual livestock and poultry raising [J]. Journal of Agricultural Machinery, 2014, 45(10): 245-251.
[9] Xiong Benhai, Luo Qingyao, Yang Liang. Research on key Internet-of-Things technologies for fine rearing of livestock [J]. China Agricultural Science and Technology Guide, 2011, 13(5): 19-25.
[10] Mollah M, Masan M, Salam M. Digital image analysis to estimate the live weight of broiler [J]. Computers & Electronics in Agriculture, 2012, 72(1): 48-52.
[11] De Wet L, Vranken E, Chedad A, et al. Computer-assisted image analysis to quantify daily growth rates of broiler chickens [J]. British Poultry Science, 2003, 44(4): 524-532.
[12] Mortensen A K, Lisouski P, Ahrendt P. Weight prediction of broiler chickens using 3D computer vision [J]. Computers & Electronics in Agriculture, 2016, 123(C): 319-326.
[13] Wang Lin, Sun Chuanheng, Li Wenyong, Ji Zengtao, Zhang Xiang, Wang Yizhong, Lei Peng, Yang Xinting. Broiler body mass estimation model based on depth image and BP neural network [J]. Journal of Agricultural Engineering, 2017, 33(13): 199-205.
Disclosure of Invention
In view of the above, the present invention provides a depth-image-based chicken flock uniformity estimation method, apparatus, system, computer device and storage medium, which use a deep learning network to extract image features automatically and perform weight classification, increase the number of classes so that weight is estimated within a 50 g error, and finally calculate chicken flock uniformity from the estimated weights.
The first objective of the present invention is to provide a chicken flock uniformity estimation method based on depth images.
The second objective of the present invention is to provide a chicken flock uniformity estimation device based on depth images.
The third objective of the present invention is to provide a chicken flock uniformity estimation system based on depth images.
A fourth object of the present invention is to provide a computer device.
A fifth object of the present invention is to provide a storage medium.
The first object of the present invention can be achieved by adopting the following technical scheme:
a depth image-based chicken flock uniformity estimation method, the method comprising:
obtaining a depth image library of broiler chickens;
constructing a target segmentation data set and a weight classification data set according to the broiler depth image library;
constructing a target segmentation network and a weight classification network;
training a target segmentation network by utilizing the target segmentation data set to generate a target mask image data set and a rectangular clipping image data set;
training the weight classification network by using the target mask image dataset, the rectangular clipping image dataset and the weight classification dataset;
Estimating the image to be measured by using the trained weight classification network to obtain the weight class and the corresponding weight interval, and calculating the flock uniformity according to the flock data and the standard day-age body weight of the broilers.
Further, the training the target segmentation network by using the target segmentation data set to generate a target mask image data set and a rectangular clipping image data set specifically includes:
Processing the image in the target segmentation dataset;
Inputting the processed image into a target segmentation network, training the target segmentation network by using a Dice loss function, and generating a weight file;
A target mask image dataset and a rectangular cropped image dataset are generated from the weight file.
Further, the Dice loss function is as follows:
d = 1 - 2|X ∩ Y| / (|X| + |Y|)
Wherein |X ∩ Y| is the number of elements in the intersection of X and Y; |X| and |Y| denote the numbers of elements of X and Y, respectively, X denotes the GT (ground-truth) segmentation image, and Y denotes the Pred (predicted) segmentation image.
Further, training the weight classification network with the target mask image dataset, the rectangular cropped image dataset and the weight classification dataset specifically includes:
inputting the target mask image dataset, the rectangular cropped image dataset and the weight classification dataset into the weight classification network, and training the weight classification network with the CrossEntropy loss function.
Further, the CrossEntropy loss function is specifically:
H(p, q) = -Σ_x p(x) log q(x)
where p(x) is the true probability distribution and q(x) is the predicted probability distribution.
Furthermore, the target segmentation network performs pre-segmentation with a Unet neural network and re-segmentation with a watershed algorithm;
The weight classification network is a modified basic image classification network in which the last classifier layer of the basic image classification network is replaced by a fully connected layer to output weight class scores; the basic image classification network weights are initialized by training on the ImageNet dataset, and the added fully connected layer weights are initialized randomly.
Further, the flock uniformity is calculated according to the flock data and the standard day-age body weight of the broilers by the following formulas:
B ∈ [0.1, 0.15]
H1 = W - W*B
H2 = W + W*B
A = P/X * 100%
Wherein B takes a value in the range 0.1 to 0.15, W is the standard body weight of the chicken breed at the given day age or the average weight of the sampled birds, H1 and H2 are the lower and upper weight limits, P is the number of chickens whose weight lies between the lower limit and the upper limit, X is the total number of chickens counted, and A is the weight uniformity.
The second object of the invention can be achieved by adopting the following technical scheme:
A depth image-based chicken flock uniformity estimation apparatus, the apparatus comprising:
the acquisition module is used for acquiring a broiler chicken depth image library;
the construction module is used for constructing a target segmentation data set and a weight classification data set according to the broiler depth image library;
the building module is used for building a target segmentation network and a weight classification network;
The first training module is used for training the target segmentation network by utilizing the target segmentation data set to generate a target mask image data set and a rectangular clipping image data set;
The second training module is used for training the weight classification network by utilizing the target mask image data set, the rectangular clipping image data set and the weight classification data set;
the estimating module is used for estimating the image to be measured by using the trained weight classification network to obtain the weight class and the corresponding weight interval, and calculating the flock uniformity according to the flock data and the standard day-age body weight of the broilers.
The third object of the present invention can be achieved by adopting the following technical scheme:
The chicken flock uniformity estimation system based on depth images comprises an image acquisition device, a client and a server, wherein the image acquisition device and the client are each connected to the server;
the image acquisition device is used for acquiring depth images of broiler chickens;
The client is used for collecting the image to be detected, uploading the image to be detected and the necessary information of the chicken farm to the server, and receiving the chicken flock uniformity returned by the server;
the server side is used for executing the chicken flock uniformity estimation method.
The fourth object of the present invention can be achieved by adopting the following technical scheme:
the computer equipment comprises a processor and a memory for storing a program executable by the processor, wherein the processor realizes the chicken flock uniformity estimation method when executing the program stored by the memory.
The fifth object of the present invention can be achieved by adopting the following technical scheme:
A storage medium storing a program which, when executed by a processor, implements the chicken uniformity estimation method described above.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention does not manually extract features from broiler depth images; instead, a deep neural network extracts the features automatically, providing a new scheme for animal weight prediction. Whereas the prior art predicts weight from hand-extracted feature data with regression, Bayesian networks or BP neural networks, the invention converts the weight prediction task into an image classification problem and predicts weight with an image classification network, from which flock uniformity is obtained. That is, features are extracted automatically by a neural network and weight is estimated by image classification, unlike previous approaches that extract image features by hand and estimate weight with a machine learning model. The method is friendly to operators and has minimal impact on the flock; the depth image reflects target characteristics better than an RGB image, and no 3D point-cloud modeling is used, which speeds up the model.
2. Whereas the broiler datasets constructed in the prior art contain only fixed-height top views, the invention constructs datasets of both top views and side views, so that paired samples (two-tuples) can be built from the dataset for the weight classification network.
3. For weight estimation from a single-chicken depth image, weight classification is performed on center-cropped original images taken at all angles, on the segmented mask images, and on original images cropped to a fixed-size rectangle; for depth-image weight estimation of multiple chickens, weight classification is performed on the segmented mask images.
4. The target segmentation network uses a Unet segmentation network for pre-segmentation; for abnormal segmentations, such as several chickens sticking together, a watershed algorithm is used for re-segmentation to obtain each chicken instance (a minimal re-segmentation sketch is given after this list).
5. The weight classification network is a modified basic image classification network in which the last classifier layer is replaced by a fully connected layer that outputs weight class scores.
6. The invention provides a computer-side management system and a WeChat applet for submitting data, obtaining acquisition results and tracking results, so that the flock manager can make flock management decisions.
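As a hedged illustration of point 4, the re-segmentation step could be sketched roughly as below, assuming the Unet pre-segmentation outputs a binary foreground mask and using scikit-image; the function name, the min_distance marker spacing and the library choice are assumptions for illustration, not details given in the patent.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def resegment_touching_chickens(mask: np.ndarray, min_distance: int = 20) -> np.ndarray:
    """Split touching chicken blobs in a binary mask into labeled instances."""
    mask = mask.astype(bool)
    # Distance to background: local maxima roughly mark individual chicken centers.
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=np.int32)
    for label, (r, c) in enumerate(coords, start=1):
        markers[r, c] = label
    # Flood the inverted distance map from the markers, constrained to the mask,
    # so that adhering chickens are separated into distinct labels.
    return watershed(-distance, markers, mask=mask)
```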
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a prior art data acquisition system model diagram.
Fig. 2 is a flow chart of estimating weight of broiler chickens by using depth images in the prior art.
Fig. 3 is a flow chart of prior art image processing.
Fig. 4 is a graph of the results of prior art object extraction.
Fig. 5 is a design diagram of a folded camera with support according to embodiment 1 of the present invention.
Fig. 6 is a design diagram of a handheld camera according to embodiment 1 of the present invention.
Fig. 7 is a schematic diagram of a depth image acquired according to embodiment 1 of the present invention.
Fig. 8 is a schematic view of another depth image acquired according to embodiment 1 of the present invention.
Fig. 9 is a flowchart of a chicken uniformity estimation method based on a depth image according to embodiment 1 of the present invention.
Fig. 10 is a schematic diagram of the construction of a depth image dataset of broiler chicken of embodiment 1 of the present invention.
Fig. 11 is a block diagram of a target division network according to embodiment 1 of the present invention.
Fig. 12 is a block diagram of a weight classification network according to embodiment 1 of the present invention.
Fig. 13 is a view showing a structure of an image classification network according to the embodiment 1 of the present invention.
Fig. 14 is an interaction diagram of a client and a server according to embodiment 1 of the present invention.
Fig. 15 is a block diagram illustrating a depth image-based chicken uniformity estimation apparatus according to embodiment 2 of the present invention.
Fig. 16 is a block diagram showing the structure of a computer device according to embodiment 3 of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present invention are within the scope of protection of the present invention.
Example 1:
Like other industries, the poultry farming industry needs to find new modes of transformation. This embodiment predicts chicken weight with computer vision and calculates flock uniformity, realizing zero-contact estimation of flock uniformity. This greatly reduces the manpower and material resources consumed in weighing the chickens and manually calculating flock uniformity and, most importantly, does not stress the flock.
This embodiment provides a chicken flock uniformity estimation system based on depth images, which comprises an image acquisition device, a client and a server, where the image acquisition device and the client are each connected to the server.
The image acquisition device is used for acquiring depth images of broiler chickens. It may be an automatic bracket-mounted folding camera, whose design is shown in Fig. 5; the specific operation is as follows: the bracket-mounted folding camera is set up in the chicken house in advance (opened horizontally to 90 degrees), a data transmission line is run to a remote computer, and a computer program controls the camera to take a picture every minute; one sampling point is shot for 15 to 20 minutes (to avoid the problem of uneven sampling from continuous shooting at one spot), after which the camera is moved to the next sampling point. The image acquisition device may also be a handheld camera, whose design is shown in Fig. 6: a depth camera at the top is fixed with a 1/4-20 UNC threaded hole to a holding bar of 60-100 cm, and the data line is wound around the bar with its end connected to a notebook computer. The specific operation is as follows: the operator walks randomly through the chicken house holding the camera bar (taking photos or recording video), with the data transmission line connected to a notebook or tablet computer carried along; video recording or image shooting is then started, either with a program that controls the camera to shoot and save an image every second or under manual control from the computer; a chicken house of about 1000 chickens is shot for 10 to 15 minutes. Badly shot or invalid images are rejected manually. When video is recorded, a program automatically tracks the chickens and rejects image data that are poorly filtered or invalid, and only the chickens in valid images are tracked and counted.
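The patent does not name a camera SDK; purely as an illustrative assumption, the timed capture described above could be scripted roughly as follows with an Intel RealSense-style depth camera and the pyrealsense2 bindings (the interval and duration defaults mirror the one-picture-per-minute, 15-20 minute sampling described in the text; the file naming is hypothetical):

```python
import time
import numpy as np
import pyrealsense2 as rs  # assumed SDK; the patent does not specify one

def capture_depth_frames(out_dir: str, interval_s: float = 60.0, duration_s: float = 900.0) -> None:
    """Save one raw depth matrix every interval_s seconds for duration_s seconds."""
    pipeline = rs.pipeline()
    config = rs.config()
    # Low frame rate and 1280 x 720 resolution, as suggested later in the description.
    config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 6)
    pipeline.start(config)
    try:
        start, index = time.time(), 0
        while time.time() - start < duration_s:
            frames = pipeline.wait_for_frames()
            depth = np.asanyarray(frames.get_depth_frame().get_data())
            np.save(f"{out_dir}/depth_{index:05d}.npy", depth)
            index += 1
            time.sleep(interval_s)
    finally:
        pipeline.stop()
```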
The acquired depth image information can be stored automatically on a computer disk and then processed locally, or uploaded over the network to a server (cloud server) for further processing. The acquired broiler depth images are divided into several viewing angles:
1) Top view: the distance between the camera plane and the broiler is about 50 cm (the same applies below).
2) Side view (including left and right sides).
3) Front view (including front and back sides).
If video is recorded with the handheld depth camera, a low frame rate such as 6 fps is used, the resolution is 1280 x 720, the shooting height is about 1 m (waist height), the shooting time is 10 minutes, and the depth video is stored in bag format. Each frame of the video is then parsed; the depth matrix is extracted at one frame per second or per two seconds, converted to a grayscale image, then converted to a pseudo-color image and saved for building the dataset.
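A minimal sketch of the frame conversion just described (depth matrix to grayscale to pseudo-color); the choice of OpenCV and of the JET color map is an assumption, since the patent only states that the depth matrix is converted to a grayscale image and then to a pseudo-color image:

```python
import cv2
import numpy as np

def depth_to_pseudocolor(depth: np.ndarray) -> np.ndarray:
    """Convert a raw depth matrix (e.g. uint16 distances) to a pseudo-color image."""
    # Normalize the depth range to 0-255 to obtain a grayscale image.
    gray = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Render the grayscale image with a color map to obtain the pseudo-color image.
    return cv2.applyColorMap(gray, cv2.COLORMAP_JET)

# Usage: cv2.imwrite("frame_00001.png", depth_to_pseudocolor(np.load("depth_00001.npy")))
```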
A depth camera is a camera that captures the real world the way humans perceive it, recording both color and distance: it provides an RGB image and a depth image, i.e. a range image. With color rendering of the depth range added, the depth image is displayed as a pseudo-color map, as shown in Figs. 7 and 8.
The client may be a computer-side management system through which a chicken farm manager uploads image data and the necessary farm information and, after background processing, obtains the flock uniformity; alternatively, an applet client based on the WeChat platform or a mobile phone APP may be provided so that historical acquisition and measurement results can be queried and tracked.
As shown in Fig. 9, the present embodiment provides a chicken flock uniformity estimation method based on depth images, which includes the following steps:
and S901, acquiring a broiler chicken depth image library.
And acquiring a broiler chicken depth image by an image acquisition device, and preprocessing (removing abnormal images such as images without targets in the field of view) to acquire a broiler chicken depth image library.
S902, constructing a target segmentation data set and a weight classification data set according to the broiler depth image library.
According to the depth images of the broiler chickens in the depth image library, a training set and a testing set for target segmentation are constructed to serve as target segmentation data sets, and a training set and a testing set for weight classification are constructed to serve as weight classification data sets, wherein the specific principle is shown in fig. 10.
S903, building a target segmentation network (OSN) and a weight classification network (WSN).
The structure of the target segmentation network is shown in Fig. 11: pre-segmentation uses a Unet neural network, and re-segmentation uses a watershed algorithm. The structure of the weight classification network is shown in Fig. 12 and the structure of the basic image classification network in Fig. 13; the last classifier layer of the basic image classification network is replaced by a fully connected layer that outputs weight class scores. The basic image classification network weights are initialized by training on the ImageNet dataset, and the added fully connected layer weights are initialized randomly.
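As an illustrative sketch only (the patent does not fix a particular backbone), the classifier replacement and weight initialization described above could look like this in PyTorch with a ResNet-18 as the basic image classification network:

```python
import torch.nn as nn
from torchvision import models

def build_weight_classification_net(num_weight_classes: int) -> nn.Module:
    # Basic image classification network whose weights come from ImageNet pretraining.
    backbone = models.resnet18(pretrained=True)
    # Replace the last classifier layer with a fully connected layer that outputs
    # one score per weight class; nn.Linear weights are randomly initialized by default.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_weight_classes)
    return backbone
```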
S904, training the target segmentation network by utilizing the target segmentation data set to generate a target mask image data set and a rectangular clipping image data set.
The step S904 specifically includes:
S9041, processing the image in the target segmentation data set.
The processing performed in this embodiment specifically includes: converting to a grayscale image; scaling down to 256 x 256; and normalizing the images with a mean of 0.5 and a standard deviation of 0.5.
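A minimal torchvision sketch of this preprocessing (grayscale, resize to 256 x 256, normalize with mean 0.5 and standard deviation 0.5), assuming PyTorch is used for training; the transform pipeline itself is an assumption about tooling, not part of the patented method:

```python
from torchvision import transforms

segmentation_preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # convert to a grayscale image
    transforms.Resize((256, 256)),                # scale down to 256 x 256
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),  # mean 0.5, standard deviation 0.5
])
```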
S9042, inputting the processed image into a target segmentation network, training the target segmentation network by using a Dice loss function, and generating a weight file.
Specifically, the processed training set image and the test set image are respectively input into a target segmentation network, and the target segmentation network is trained by using a Dice loss function.
The Dice loss function is as follows:
d = 1 - 2|X ∩ Y| / (|X| + |Y|)
wherein |X ∩ Y| is the number of elements in the intersection of X and Y, and |X| and |Y| denote the numbers of elements of X and Y, respectively; the factor of 2 in the numerator compensates for the denominator counting the common elements of X and Y twice; X denotes the GT (ground-truth) segmentation image and Y denotes the Pred (predicted) segmentation image.
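A minimal PyTorch sketch of the Dice loss above, assuming the segmentation network outputs a per-pixel foreground probability map (Pred, Y) and the label is a binary GT mask (X); the small epsilon is an added numerical-stability assumption:

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """d = 1 - 2|X ∩ Y| / (|X| + |Y|), computed on flattened soft masks."""
    pred = pred.reshape(-1)
    target = target.reshape(-1)
    intersection = (pred * target).sum()  # |X ∩ Y|
    return 1.0 - 2.0 * intersection / (pred.sum() + target.sum() + eps)
```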
S9043, generating a target mask image dataset and a rectangular clipping image dataset according to the weight file.
S905, training the weight classification network by using the target mask image dataset, the rectangular cropped image dataset and the weight classification dataset.
Specifically, the training sets and test sets of the target mask image dataset, the rectangular cropped image dataset and the weight classification dataset are input into the weight classification network, and the weight classification network is trained using the CrossEntropy loss function.
The CrossEntropy loss function is specifically:
H(p, q) = -Σ_x p(x) log q(x)
where p(x) is the true probability distribution and q(x) is the predicted probability distribution.
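A minimal sketch of one training step of the weight classification network with this loss, assuming PyTorch; the optimizer choice and learning rate in the usage line are illustrative assumptions:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()  # H(p, q) = -sum_x p(x) log q(x) over the weight classes

def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step: labels are weight-class indices, logits are class scores."""
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: optimizer = torch.optim.Adam(model.parameters(), lr=1e-4); loss = train_step(model, optimizer, images, labels)
```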
S906, estimating the image to be measured by using the trained weight classification network to obtain the weight class and the corresponding weight interval, and calculating the flock uniformity according to the flock data and the standard day-age body weight of the broilers.
The weights of the trained weight classification network are saved. In actual use, an image to be measured uploaded from the client is input into the trained weight classification network to obtain the weight class and the corresponding weight interval; the flock uniformity is then calculated from the flock data and the standard day-age body weight of the broilers and returned to the client. The interaction between the client and the server is shown in Fig. 14.
The flock uniformity is calculated according to the flock data and the standard day-age body weight of the broilers by the following formulas:
B ∈ [0.1, 0.15]
H1 = W - W*B
H2 = W + W*B
A = P/X * 100%
Wherein B takes a value in the range 0.1 to 0.15, W is the standard body weight of the chicken breed at the given day age or the average weight of the sampled birds, H1 and H2 are the lower and upper weight limits, P is the number of chickens whose weight lies between the lower limit and the upper limit, X is the total number of chickens counted, and A is the weight uniformity.
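A plain-Python sketch of this uniformity calculation, assuming the per-bird weights are taken as the midpoints of the weight intervals returned by the classification network and that X is the total number of birds counted:

```python
def flock_uniformity(weights, standard_weight, b=0.10):
    """Percentage of birds whose weight lies within W - W*B and W + W*B of the standard weight W."""
    h1 = standard_weight * (1 - b)                # lower limit H1 = W - W*B
    h2 = standard_weight * (1 + b)                # upper limit H2 = W + W*B
    p = sum(1 for w in weights if h1 <= w <= h2)  # P: birds within [H1, H2]
    return p / len(weights) * 100.0               # A = P / X * 100%

# Example: flock_uniformity([1.42, 1.55, 1.61, 1.30, 1.48], standard_weight=1.50, b=0.10) -> 80.0
```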
Compared with the prior art, in an experiment on 200 short-yellow chickens before marketing, the flock uniformity estimated with the method of this embodiment had a small relative error (1.53%), and the weight estimation accuracy was relatively high, with an average relative error of 4.27%, which is better than the previous prediction methods.
It should be noted that although the method operations of the above embodiments are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order or that all illustrated operations be performed in order to achieve desirable results. Rather, the depicted steps may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
Example 2:
As shown in Fig. 15, the present embodiment provides a chicken flock uniformity estimation device based on depth images, which includes an acquisition module 1501, a construction module 1502, a building module 1503, a first training module 1504, a second training module 1505 and an estimation module 1506, wherein the specific functions of each module are as follows:
an obtaining module 1501 is configured to obtain a depth image library of broiler chickens.
The constructing module 1502 is configured to construct a target segmentation dataset and a weight classification dataset according to the broiler depth image library.
A building module 1503 is configured to build a target segmentation network and a weight classification network.
A first training module 1504 is configured to train the target segmentation network with the target segmentation dataset to generate a target mask image dataset and a rectangular cropped image dataset.
The second training module 1505 is configured to train the weight classification network using the target mask image dataset, the rectangular cropped image dataset, and the weight classification dataset.
The estimating module 1506 is configured to estimate the image to be measured by using the trained weight classification network to obtain the weight class and the corresponding weight interval, and to calculate the flock uniformity according to the flock data and the standard day-age body weight of the broilers.
Specific implementation of each module in this embodiment may be referred to embodiment 1 above, and will not be described in detail herein; it should be noted that, in the system provided in this embodiment, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure is divided into different functional modules to perform all or part of the functions described above.
It will be understood that the terms first, second, etc. used in the above device may be used to describe various modules, but these modules are not limited by these terms. These terms are only used to distinguish one module from another. For example, a first training module may be referred to as a second training module, and similarly, a second training module may be referred to as a first training module; both are training modules, but they are not the same training module, without departing from the scope of the present invention.
Example 3:
the present embodiment provides a computer device, which may be a computer, a server, or the like, as shown in fig. 16, and includes a processor 1602, a memory, an input device 1603, a display device 1604, and a network interface 1605 connected via a system bus 1601, where the processor is configured to provide computing and control capabilities, the memory includes a nonvolatile storage medium 1606 and an internal memory 1607, where the nonvolatile storage medium 1606 stores an operating system, a computer program, and a database, and the internal memory 1607 provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium, and when the processor 1602 executes the computer program stored in the memory, the chicken uniformity estimation method of the above embodiment 1 is implemented as follows:
obtaining a depth image library of broiler chickens;
constructing a target segmentation data set and a weight classification data set according to the broiler depth image library;
constructing a target segmentation network and a weight classification network;
training a target segmentation network by utilizing the target segmentation data set to generate a target mask image data set and a rectangular clipping image data set;
training the weight classification network by using the target mask image dataset, the rectangular clipping image dataset and the weight classification dataset;
Estimating the image to be measured by using the trained weight classification network to obtain the weight class and the corresponding weight interval, and calculating the flock uniformity according to the flock data and the standard day-age body weight of the broilers.
Example 4:
the present embodiment provides a storage medium, which is a computer readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the chicken uniformity estimation method of the above embodiment 1 is implemented, as follows:
obtaining a depth image library of broiler chickens;
constructing a target segmentation data set and a weight classification data set according to the broiler depth image library;
constructing a target segmentation network and a weight classification network;
training a target segmentation network by utilizing the target segmentation data set to generate a target mask image data set and a rectangular clipping image data set;
training the weight classification network by using the target mask image dataset, the rectangular clipping image dataset and the weight classification dataset;
Estimating the image to be measured by using the trained weight classification network to obtain the weight class and the corresponding weight interval, and calculating the flock uniformity according to the flock data and the standard day-age body weight of the broilers.
The computer readable storage medium of the present embodiment may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In summary, the invention uses a deep learning network to extract image features automatically and perform weight classification, increases the number of classes so that weight is estimated within a 50 g error, and finally calculates chicken flock uniformity from the estimated weights; it also provides clients for chicken house managers to submit image data, obtain results and management suggestions, and track results for management decisions.
The above-mentioned embodiments are only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can make equivalent substitutions or modifications according to the technical solution and the inventive concept of the present invention within the scope disclosed by this patent, and all such substitutions and modifications fall within the protection scope of the present invention.
Claims (6)
1. A chicken flock uniformity estimation method based on depth images, characterized by comprising the following steps:
obtaining a depth image library of broiler chickens;
constructing a target segmentation data set and a weight classification data set according to the broiler depth image library;
Constructing a target segmentation network and a weight classification network, wherein the target segmentation network performs pre-segmentation with a Unet neural network and re-segmentation with a watershed algorithm; the weight classification network is a modified basic image classification network in which the last classifier layer of the basic image classification network is replaced by a fully connected layer to output weight class scores; the basic image classification network weights are initialized by training on an ImageNet dataset, and the added fully connected layer weights are initialized randomly;
training a target segmentation network by utilizing the target segmentation data set to generate a target mask image data set and a rectangular clipping image data set;
training the weight classification network by using the target mask image dataset, the rectangular clipping image dataset and the weight classification dataset;
Estimating the image to be measured by using the trained weight classification network to obtain the weight class and the corresponding weight interval, and calculating the flock uniformity according to the flock data and the standard day-age body weight of the broilers;
The training of the weight classification network by using the target mask image dataset, the rectangular clipping image dataset and the weight classification dataset specifically comprises the following steps:
Inputting the target mask image dataset, the rectangular clipping image dataset and the weight classification dataset into a weight classification network, training the weight classification network using CrossEntropy loss functions;
the CrossEntropy loss function is specifically:
H(p, q) = -Σ_x p(x) log q(x)
wherein p(x) is the true probability distribution and q(x) is the predicted probability distribution;
and calculating uniformity of the chicken flocks according to the chicken flock data and the standard daily age body weight of the broiler chickens, wherein the uniformity of the chicken flocks is calculated according to the following formula:
B ∈ [0.1, 0.15]
H1 = W - W*B
H2 = W + W*B
A = P/Q * 100%
Wherein B takes a value in the range 0.1 to 0.15, W is the standard body weight of the chicken breed at the given day age or the average weight of the sampled birds, H1 and H2 are the lower and upper weight limits, P is the number of chickens whose weight lies between the lower limit and the upper limit, Q is the total number of chickens counted, and A is the weight uniformity.
2. The method of claim 1, wherein training the target segmentation network with the target segmentation dataset to generate the target mask image dataset and the rectangular cropped image dataset comprises:
Processing the image in the target segmentation dataset;
Inputting the processed image into a target segmentation network, training the target segmentation network by using a Dice loss function, and generating a weight file;
A target mask image dataset and a rectangular cropped image dataset are generated from the weight file.
3. The method of claim 2, wherein the Dice loss function is as follows:
d = 1 - 2|X ∩ Y| / (|X| + |Y|)
Wherein |X ∩ Y| is the number of elements in the intersection of X and Y; |X| and |Y| denote the numbers of elements of X and Y, respectively, X denotes the GT (ground-truth) segmentation image, and Y denotes the Pred (predicted) segmentation image.
4. A depth image-based chicken flock uniformity estimation device, the device comprising:
the acquisition module is used for acquiring a broiler chicken depth image library;
the construction module is used for constructing a target segmentation data set and a weight classification data set according to the broiler depth image library;
The building module is used for building a target segmentation network and a weight classification network, wherein the target segmentation network performs pre-segmentation with a Unet neural network and re-segmentation with a watershed algorithm; the weight classification network is a modified basic image classification network in which the last classifier layer of the basic image classification network is replaced by a fully connected layer to output weight class scores; the basic image classification network weights are initialized by training on an ImageNet dataset, and the added fully connected layer weights are initialized randomly;
The first training module is used for training the target segmentation network by utilizing the target segmentation data set to generate a target mask image data set and a rectangular clipping image data set;
The second training module is used for training the weight classification network by utilizing the target mask image data set, the rectangular clipping image data set and the weight classification data set;
the estimating module is used for estimating the image to be measured by using the trained weight classification network to obtain the weight class and the corresponding weight interval, and calculating the flock uniformity according to the flock data and the standard day-age body weight of the broilers;
The training of the weight classification network by using the target mask image dataset, the rectangular clipping image dataset and the weight classification dataset specifically comprises the following steps:
Inputting the target mask image dataset, the rectangular clipping image dataset and the weight classification dataset into a weight classification network, training the weight classification network using CrossEntropy loss functions;
the CrossEntropy loss function is specifically:
H(p, q) = -Σ_x p(x) log q(x)
wherein p(x) is the true probability distribution and q(x) is the predicted probability distribution;
and calculating uniformity of the chicken flocks according to the chicken flock data and the standard daily age body weight of the broiler chickens, wherein the uniformity of the chicken flocks is calculated according to the following formula:
B ∈ [0.1, 0.15]
H1 = W - W*B
H2 = W + W*B
A = P/Q * 100%
Wherein B takes a value in the range 0.1 to 0.15, W is the standard body weight of the chicken breed at the given day age or the average weight of the sampled birds, H1 and H2 are the lower and upper weight limits, P is the number of chickens whose weight lies between the lower limit and the upper limit, Q is the total number of chickens counted, and A is the weight uniformity.
5. A chicken flock uniformity estimation system based on depth images, characterized by comprising an image acquisition device, a client and a server, wherein the image acquisition device and the client are each connected to the server;
the image acquisition device is used for acquiring depth images of broiler chickens;
The client is used for collecting the image to be detected, uploading the image to be detected and the necessary information of the chicken farm to the server, and receiving the chicken flock uniformity returned by the server;
a server-side for executing the chicken flock uniformity estimation method of any one of claims 1-3.
6. A storage medium storing a program which, when executed by a processor, implements the chicken flock uniformity estimation method of any one of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111062601.1A CN113989538B (en) | 2021-09-10 | 2021-09-10 | Chicken flock uniformity estimation method, device, system and medium based on depth image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111062601.1A CN113989538B (en) | 2021-09-10 | 2021-09-10 | Chicken flock uniformity estimation method, device, system and medium based on depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113989538A CN113989538A (en) | 2022-01-28 |
CN113989538B true CN113989538B (en) | 2024-11-19 |
Family
ID=79735624
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111062601.1A Active CN113989538B (en) | 2021-09-10 | 2021-09-10 | Chicken flock uniformity estimation method, device, system and medium based on depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113989538B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115410711B (en) * | 2022-08-29 | 2023-04-07 | 黑龙江大学 | White feather broiler health monitoring method based on sound signal characteristics and random forest |
CN118211766B (en) * | 2024-05-17 | 2024-07-12 | 四川王家渡食品股份有限公司 | Low-temperature luncheon meat production monitoring method and system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110728259A (en) * | 2019-10-23 | 2020-01-24 | 南京农业大学 | Chicken group weight monitoring system based on depth image |
CN112861666A (en) * | 2021-01-26 | 2021-05-28 | 华南农业大学 | Chicken flock counting method based on deep learning and application |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111881705B (en) * | 2019-09-29 | 2023-12-12 | 深圳数字生命研究院 | Data processing, training and identifying method, device and storage medium |
-
2021
- 2021-09-10 CN CN202111062601.1A patent/CN113989538B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110728259A (en) * | 2019-10-23 | 2020-01-24 | 南京农业大学 | Chicken group weight monitoring system based on depth image |
CN112861666A (en) * | 2021-01-26 | 2021-05-28 | 华南农业大学 | Chicken flock counting method based on deep learning and application |
Also Published As
Publication number | Publication date |
---|---|
CN113989538A (en) | 2022-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7470203B2 (en) | Analysis and selection in aquaculture | |
Ubina et al. | Digital twin-based intelligent fish farming with Artificial Intelligence Internet of Things (AIoT) | |
US20210368748A1 (en) | Analysis and sorting in aquaculture | |
Noe et al. | Automatic detection and tracking of mounting behavior in cattle using a deep learning-based instance segmentation model | |
CN113989538B (en) | Chicken flock uniformity estimation method, device, system and medium based on depth image | |
CN110728259A (en) | Chicken group weight monitoring system based on depth image | |
US20220067930A1 (en) | Systems and methods for predicting growth of a population of organisms | |
CN108491807B (en) | Real-time monitoring method and system for oestrus of dairy cows | |
CN118658209B (en) | AI-based live pig abnormal behavior monitoring and early warning method and system | |
CN108829762A (en) | The Small object recognition methods of view-based access control model and device | |
Xi et al. | Smart headset, computer vision and machine learning for efficient prawn farm management | |
CN107092891A (en) | A kind of paddy rice yield estimation system and method based on machine vision technique | |
CN108460370A (en) | A kind of fixed poultry life-information warning device | |
Yu et al. | An enhancement algorithm for head characteristics of caged chickens detection based on cyclic consistent migration neural network | |
CN119066432A (en) | Automated control method for intelligent farming of terrestrial sea cucumbers based on AI-assisted decision-making | |
CN114022831A (en) | Binocular vision-based livestock body condition monitoring method and system | |
CN114972477B (en) | Low-cost fish growth monitoring method used in farm | |
Meyer et al. | For5g: Systematic approach for creating digital twins of cherry orchards | |
Nontarit et al. | Shrimp-growth estimation based on resnext for an automatic feeding-tray lifting system used in shrimp farming | |
Mazhar et al. | Precision Pig Farming Image Analysis Using Random Forest and Boruta Predictive Big Data Analysis Using Neural Network and K-Nearest Neighbor | |
Nääs et al. | Machine learning applications in precision livestock farming | |
Thakre et al. | UAV Based System For Detection in Integrated Insect Management for Agriculture Using Deep Learning | |
Yuan et al. | Stress-free detection technologies for pig growth based on welfare farming: A review | |
Zhang et al. | Rapid detection and identification of major vegetable pests based on machine learning | |
Gladju et al. | Potential applications of data mining in aquaculture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |