
CN113505630B - Pig farm monitoring model training method, device, computer equipment and storage medium

Info

Publication number
CN113505630B
CN113505630B
Authority
CN
China
Prior art keywords
image
pig
pig farm
fusion
image data
Prior art date
Legal status
Active
Application number
CN202110392230.7A
Other languages
Chinese (zh)
Other versions
CN113505630A (en)
Inventor
刘旭
万方
陈刚
Current Assignee
Shandong New Hope Liuhe Group Co Ltd
New Hope Liuhe Co Ltd
Original Assignee
Shandong New Hope Liuhe Group Co Ltd
New Hope Liuhe Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong New Hope Liuhe Group Co Ltd, New Hope Liuhe Co Ltd filed Critical Shandong New Hope Liuhe Group Co Ltd
Priority to CN202110392230.7A priority Critical patent/CN113505630B/en
Publication of CN113505630A publication Critical patent/CN113505630A/en
Application granted granted Critical
Publication of CN113505630B publication Critical patent/CN113505630B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a pig farm monitoring model training method and device. The method comprises the following steps: acquiring biological image data, pig outline data and pig farm image data; fusing each pig farm image data with one of the biological image data and the pig outline data to form corresponding fused image data, wherein the fused image data comprises a fused image of the biological image or the pig outline with the pig farm image, and corresponding category information and position information; processing the fused image in a plurality of editing modes to obtain amplified images and corresponding category information and position information; and training the pig farm monitoring model with the amplified images as training samples and the category information and position information as training labels. With this method, a large amount of training data can be obtained for the pig farm monitoring model, improving its generalization capability so that the categories of targets entering and leaving the pig farm can be identified more accurately.

Description

Pig farm monitoring model training method, device, computer equipment and storage medium
Technical Field
The application relates to the technical field of pig farm monitoring, and in particular to a pig farm monitoring model training method, apparatus, computer device and storage medium.
Background
As living standards improve, demand for pork in China continues to grow, and the scale of live pig farms is expanding accordingly. As the cultivation scale grows by orders of magnitude, the various entrances and exits of a pig farm (the main gate, the pig house doors and the pig outlet platform) need to be monitored to prevent pigs from being accidentally lost or stolen.
In the prior art, scene images photographed in real time at each entrance and exit of a pig farm are acquired, and deep-learning target detection is used to detect whether the images contain pigs, so that whether pigs enter or exit through each doorway can be judged in real time.
However, abnormal entry and exit of pigs is a low-frequency event, so scene images containing pigs are scarce, the training samples for the deep learning neural network model are insufficient, and the accuracy of the model's judgments is correspondingly low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a pig farm monitoring model training method, apparatus, computer device and storage medium capable of obtaining a large amount of training data.
A pig farm monitoring model training method, the method comprising: acquiring biological image data, pig outline data and pig farm image data, wherein the biological image data comprises a biological image of living beings in a pig farm and corresponding category information and position information, the biological image at least comprises a pig image, the pig outline data comprises a pig outline and corresponding angle information, and the pig farm image data at least comprises a pig farm image; fusing each pig farm image data with one of the biological image data and the pig outline data to form corresponding fused image data, wherein the fused image data comprises a fused image of the biological image or the pig outline with the pig farm image, and corresponding category information and position information; processing the fused image in a plurality of editing modes to obtain amplified images and corresponding category information and position information; and training the pig farm monitoring model with the amplified images as training samples and the corresponding category information and position information as training labels.
In one embodiment, acquiring the biological image data, pig outline data and pig farm image data comprises: acquiring images of cats, dogs, people and vehicles shot outside the pig farm, together with correspondingly marked category information and position information, to form biological image data; acquiring pig images shot inside the pig farm, together with correspondingly marked category information and position information, to form biological image data; acquiring pig outlines cut from the pig images, together with correspondingly marked angle information, to form pig outline data; and acquiring scene images shot at the pig farm entrances and exits to form pig farm image data.
In one embodiment, the pig farm image comprises a biological image, and the pig farm image data further comprises category information and position information corresponding to the biological image.
In one embodiment, fusing each pig farm image data with one of the biological image data and the pig outline data to form corresponding fused image data comprises processing each pig farm image data as follows: selecting one of the biological image data and the pig outline data for fusion; if the biological image data is selected, fusing the biological image in the biological image data with the pig farm image in the pig farm image data according to the formula xij=λ*aij+(1-λ)*bij, wherein xij is the pixel value at coordinate (i, j) in the fused image, λ is a set parameter, aij is the pixel value at coordinate (i, j) in the biological image, and bij is the pixel value at coordinate (i, j) in the pig farm image; determining the category information corresponding to the fused image according to the formula Xij=λ*Aij+(1-λ)*Bij, wherein Xij is the category probability matrix of the fused image, λ is the set parameter, Aij is the category probability matrix of the biological image, and Bij is the category probability matrix of the pig farm image; and taking the position information corresponding to the biological image as the position information corresponding to the fused image. If the pig outline data is selected, determining the fusion number of pig outlines and the fusion position and fusion angle of each pig outline; fusing the pig outlines, in the determined fusion number, into the corresponding fusion positions in the pig farm image at the corresponding fusion angles to form a fused image; and taking pig as the category information corresponding to the fused image and the fusion position of each pig outline as the position information corresponding to the fused image.
In one embodiment, selecting one of the biological image data and the pig outline data, determining the fusion number of pig outlines, and determining the fusion position of each pig outline are all performed randomly.
In one embodiment, the editing modes comprise one or more of flip transformation, random cropping, translation transformation, scale transformation, noise disturbance and rotation transformation.
In one embodiment, the method further comprises: acquiring monitoring images shot at the pig farm entrances and exits in real time; inputting the monitoring images into the pig farm monitoring model to obtain a judgment result output by the pig farm monitoring model; if the judgment result is that the monitoring image contains a pig, sending the monitoring image to an administrator; and if the judgment result is that there is no pig in the monitoring image, recording the monitoring image and its shooting time into a database.
A pig farm monitoring model training device, comprising:
an image acquisition module, used for acquiring biological image data, pig outline data and pig farm image data, wherein the biological image data comprises a biological image of living beings in a pig farm and corresponding category information and position information, the biological image at least comprises a pig image, the pig outline data comprises a pig outline and corresponding angle information, and the pig farm image data at least comprises a pig farm image;
an image fusion module, used for fusing each pig farm image data with one of the biological image data and the pig outline data to form corresponding fused image data, wherein the fused image data comprises a fused image of the biological image or the pig outline with the pig farm image, and corresponding category information and position information;
an image enhancement module, used for processing the fused image in a plurality of editing modes to obtain an amplified image and corresponding category information and position information;
and an image training module, used for training the pig farm monitoring model with the amplified image as a training sample and the category information and position information as training labels.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring biological image data, pig outline data and pig farm image data, wherein the biological image data comprises a biological image of living beings in a pig farm and corresponding category information and position information, the biological image at least comprises a pig image, the pig outline data comprises a pig outline and corresponding angle information, and the pig farm image data at least comprises a pig farm image;
fusing each pig farm image data with one of the biological image data and the pig outline data to form corresponding fused image data, wherein the fused image data comprises a fused image of the biological image or the pig outline with the pig farm image, and corresponding category information and position information;
processing the fused image in a plurality of editing modes to obtain an amplified image and corresponding category information and position information;
and training the pig farm monitoring model with the amplified image as a training sample and the corresponding category information and position information as training labels.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring biological image data, pig outline data and pig farm image data, wherein the biological image data comprises a biological image of living beings in a pig farm and corresponding category information and position information, the biological image at least comprises a pig image, the pig outline data comprises a pig outline and corresponding angle information, and the pig farm image data at least comprises a pig farm image;
fusing each pig farm image data with one of the biological image data and the pig outline data to form corresponding fused image data, wherein the fused image data comprises a fused image of the biological image or the pig outline with the pig farm image, and corresponding category information and position information;
processing the fused image in a plurality of editing modes to obtain an amplified image and corresponding category information and position information;
and training the pig farm monitoring model with the amplified image as a training sample and the corresponding category information and position information as training labels.
According to the above pig farm monitoring model training method and apparatus, biological image data, pig outline data and pig farm image data are acquired, and each pig farm image data is fused with one of the biological image data and the pig outline data to obtain fused image data, so that training samples for the deep learning model are created artificially. The fused image data is then processed with several image amplification methods, applying transformations to the fused images to obtain more of them; this enriches the diversity of the fused images, increases the number of training samples for the deep learning model, and strengthens the generalization capability of the model. Finally, the amplified fused image data is used as training samples to train the pig farm monitoring model, which provides the model with a large amount of training data, improves its generalization capability, and enables it to accurately identify the categories of living beings entering and leaving the pig farm.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application or of the conventional techniques, the drawings required for describing the embodiments or the conventional techniques are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings may be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a method of training a pig farm monitoring model in one embodiment;
FIG. 2 is a flow chart of an image fusion step in one embodiment;
FIG. 3 is a schematic representation of pig profile data in one embodiment;
FIG. 4 is a flow chart of a model determination method in one embodiment;
FIG. 5 is a block diagram of a training device for a pig farm monitoring model in one embodiment;
FIG. 6 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
As noted in the background, in the conventional technique a deep learning neural network model is used to identify living beings coming and going at pig farm entrances and exits, and it suffers from inaccurate identification and frequent false alarms. The inventors found that this is because abnormal entry and exit of pigs is a low-frequency event, so the training sample size for the deep learning neural network model in the conventional technique is insufficient, the model is poorly trained, its generalization capability is weak, and it is difficult to accurately identify the categories of living beings appearing at pig farm entrances.
For these reasons, the present invention provides a pig farm monitoring model training method and apparatus. Biological image data, pig outline data and pig farm image data are acquired, and each pig farm image data is fused with one of the biological image data and the pig outline data to obtain fused image data, so that training samples for the deep learning model are created artificially. The fused image data is then processed with several image amplification methods, applying transformations to the fused images to obtain more of them; this enriches the diversity of the fused images, increases the number of training samples for the deep learning model, and strengthens the generalization capability of the model. The amplified fused image data is then used as training samples to train the pig farm monitoring model, so that the model obtains a large amount of training data, its generalization capability is improved, and the categories of targets entering and leaving the pig farm can be accurately identified.
In one embodiment, as shown in fig. 1, a method for training a pig farm monitoring model is provided, the method comprising:
step S100, biological image data, pig outline data and pig farm image data are acquired.
The biological image data comprises a biological image of living things in a pig farm and corresponding category information and position information, the biological image at least comprises a pig image, the pig outline data comprises a pig outline and corresponding angle information, and the pig farm image data at least comprises a pig farm image.
Illustratively, the biological images include images of cats, dogs, people and vehicles together with correspondingly marked category information and position information.
Illustratively, the biological images further include pig images captured in the pig farm together with correspondingly marked category information and position information.
Illustratively, the pig image is acquired by mounting an infrared night vision camera above the pig house.
Illustratively, pig outline data is formed from pig outlines cut from the pig images together with correspondingly marked angle information.
Illustratively, monitoring images taken by infrared night vision cameras installed at the pig farm entrances and exits are used as pig farm image data.
Illustratively, the entrances and exits include the main gate, the pig house doors and the pig outlet platform.
Illustratively, the pig farm image comprises a biological image, and the pig farm image data further comprises category information and location information corresponding to the biological image.
The position information, category information and pig outline data are obtained by annotating the images.
An annotated image (which may also be called ground truth) is an image in which the features of interest in a training image have been marked; the features differ according to the purpose of the deep learning network. For example, in a deep learning network for identifying animals, the features may be pigs, cattle, etc. As another example, in a deep learning network for identifying automobiles, the features may be off-road vehicles, buses, trucks, etc. Features in a training image may be marked by drawing bounding boxes, scribing lines or similar processing. Taking a deep learning network for identifying automobiles as an example, the automobiles in the training image are enclosed in bounding boxes, and the training image after this processing is used as the annotated image.
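As a concrete illustration, one annotated training example might be stored as a record like the following (a minimal sketch in Python; the file name, field names and coordinate values are hypothetical, not taken from the application):

```python
# Hypothetical annotation record: category information plus position
# information expressed as a bounding box (x_min, y_min, x_max, y_max) in pixels.
annotation = {
    "image": "doorway_0001.jpg",  # monitoring image from a pig farm doorway
    "category": "pig",            # category information of the marked target
    "bbox": (132, 88, 415, 290),  # position information of the marked target
}
```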
Illustratively, the pig farm image data includes a monitoring image taken at a pig farm doorway.
Step S110, fusing each pig farm image data with one of the biological image data and the pig outline data to form corresponding fused image data.
Step S120, processing the fused image in a plurality of editing modes to obtain an amplified image and corresponding category information and position information.
Step S130, training the pig farm monitoring model with the amplified image as a training sample and the category information and position information as training labels.
According to the above pig farm monitoring model training method, biological image data, pig outline data and pig farm image data are acquired, and each pig farm image data is fused with one of the biological image data and the pig outline data to obtain fused image data, so that training samples for the deep learning model are created artificially. The fused image data is then processed with several image amplification methods, applying transformations to the fused images to obtain more of them; this enriches the diversity of the fused images, increases the number of training samples for the deep learning model, and strengthens the generalization capability of the model. The amplified fused image data is then used as training samples to train the pig farm monitoring model, so that the model obtains a large amount of training data, its generalization capability is improved, and the categories of targets entering and leaving the pig farm can be accurately identified.
In one embodiment, as shown in fig. 2, step S110 includes:
Step S200, for each pig farm image data, selecting one of the biological image data and the pig outline data for fusion. If the biological image data is selected, steps S202 to S208 are performed; if the pig outline data is selected, steps S210 to S216 are performed.
Step S202, selecting one pig farm image data and one biological image data.
Illustratively, the images in the pig farm image data are numbered sequentially, the images in the biological image data are numbered sequentially, and the images with the same number in the pig farm image data and the biological image data are selected.
Step S204, fusing the biological image in the biological image data with the pig farm image in the pig farm image data according to the formula xij=λ*aij+(1-λ)*bij to obtain a fused image.
Where xij is the pixel value at coordinate (i, j) in the fused image, λ is the set parameter, aij is the pixel value at coordinate (i, j) in the biological image, bij is the pixel value at coordinate (i, j) in the pig farm image.
Illustratively, the number of pixels in each biological image is equal to the number of pixels in each pig farm image.
Illustratively, if the number of pixels in a biological image and a pig farm image are not equal, the pig farm image is cropped along the graphical outline of the biological image, and only the portion of the pig farm image within the outline envelope is retained.
Step S206, fusing the category information in the biological image data with the category information in the pig farm image data according to the formula Xij=λ*Aij+(1-λ)*Bij to obtain the category information corresponding to the fused image.
Where Xij is the category probability matrix of the fused image, λ is the set parameter, Aij is the category probability matrix of the biological image, and Bij is the category probability matrix of the pig farm image.
Illustratively, λ is a percentage of the sum of the data amount of the biological image data and the data amount of the pig farm image data.
For example, the category probability matrix of an image may be a vector in which each position represents the probability that a category of living being appears in the image: for instance, with [1,0,0] representing a pig, [0,1,0] representing a person and [0,0,1] representing a dog, the first position in the vector represents the probability that the image contains a pig, the second position the probability that it contains a person, and the third position the probability that it contains a dog. A non-zero value at a position indicates that the corresponding category of living being is present, and the value itself represents the probability that the living being appears in the image.
Step S208, the position information corresponding to the biological image is used as the position information corresponding to the fusion image.
In this embodiment, fused images showing living beings of different categories in pig farm scenes are obtained by substituting the corresponding pixel values of the biological image data and the pig farm image data into the formula. Training samples for the deep learning model are thus created by means of image fusion.
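The pixel-level fusion of steps S202 to S208 can be sketched in a few lines of Python with NumPy. This is a minimal illustration assuming equally sized images and per-image one-hot category vectors; the function and variable names are illustrative, not from the application:

```python
import numpy as np

def fuse_biological_and_farm(bio_img, farm_img, bio_cls, farm_cls, lam=0.5):
    """Fuse per the embodiment: xij = lam*aij + (1-lam)*bij for pixels,
    Xij = lam*Aij + (1-lam)*Bij for the category probability vectors."""
    assert bio_img.shape == farm_img.shape, "images must have equal pixel counts"
    fused = lam * bio_img.astype(np.float32) + (1 - lam) * farm_img.astype(np.float32)
    fused_cls = lam * np.asarray(bio_cls, np.float32) \
              + (1 - lam) * np.asarray(farm_cls, np.float32)
    return fused.astype(np.uint8), fused_cls

# Example: a dog image ([0, 0, 1] in a pig/person/dog encoding) fused into an
# empty doorway scene ([0, 0, 0]); the position labels of the biological image
# are carried over to the fused image unchanged.
```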
Step S210, selecting one pig farm image data and one pig outline data.
Step S212, determining the fusion number, fusion position and fusion angle of the pig outline.
Illustratively, the fusion number of pig outlines is a random number in the range of one to ten.
Illustratively, the selection of one of the biological image data and the pig outline data for fusion and the determination of the fusion position of each pig outline are both performed randomly.
Illustratively, the fusion angle is chosen so that the posture of the pig outline in the image appears normal, where a normal angle means that the angle between the pig outline and the horizontal plane lies between -10 degrees and 10 degrees. For example, as shown in fig. 3, if the angle between the pig outline and the horizontal plane is 60 degrees, the outline needs to be rotated until this angle lies between -10 degrees and 10 degrees, so that the fused image looks normal.
Step S214, fusing the pig outlines, in the determined fusion number, into the fusion positions in the pig farm image at the corresponding fusion angles to form a fused image.
Step S216, using pig as the category information corresponding to the fused image and the fusion positions as the position information corresponding to the fused image.
In this embodiment, pig outline images are cut out and then randomly pasted onto pig farm images, and the angle of each pig outline is adjusted so that the fused image looks reasonable. In this way, fused images in which only pigs appear in the pig farm scene are obtained, thereby creating training samples for the deep learning model.
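This cut-and-paste fusion can be sketched as follows, assuming each pig outline is available as an RGBA cut-out with a transparent background; the random ranges follow the embodiment (one to ten outlines, posture angle within ±10 degrees), and the helper names are illustrative:

```python
import random
from PIL import Image

def paste_pig_outlines(farm_img: Image.Image, pig_cutout: Image.Image):
    """Paste a random number (1 to 10) of pig outlines into a pig farm image
    at random fusion positions, each rotated to a 'normal' posture angle."""
    fused = farm_img.copy()
    positions = []                                   # fusion positions, used as position labels
    for _ in range(random.randint(1, 10)):           # fusion number
        angle = random.uniform(-10, 10)              # fusion angle (normal posture)
        pig = pig_cutout.rotate(angle, expand=True)
        x = random.randint(0, max(0, fused.width - pig.width))
        y = random.randint(0, max(0, fused.height - pig.height))
        fused.paste(pig, (x, y), mask=pig)           # alpha channel masks the outline
        positions.append((x, y, x + pig.width, y + pig.height))
    return fused, positions                          # category label for every box: pig
```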
In one embodiment, step S120 includes:
The fused image is processed with one or more of the following image amplification modes: flip transformation, random cropping, translation transformation, scale transformation, noise disturbance and rotation transformation, to obtain amplified images and corresponding category information and position information.
Illustratively, the flip transformation rotates the image 90 degrees clockwise or counterclockwise, or flips it left-right or up-down.
Illustratively, random cropping cuts out a random portion of the image; the cropped image has a size of 60% to 100% of the original image.
Illustratively, the translation transformation does not change the image size but only the position of its content, moving each pixel to a corresponding new position.
Illustratively, the scale transformation enlarges or reduces the image about its center point.
Illustratively, noise disturbance adds noise to the image and is used to test the noise robustness of the model.
Illustratively, the rotation transformation rotates the image by a certain angle, clockwise or counterclockwise, about a certain point.
In this embodiment, the fused images are processed with multiple image amplification modes: given the existing amount of fused image data, one or more of the above transformations are applied to the fused images to create more fused image data, increasing the data amount and enriching its diversity. A large amount of training data for the deep learning model is thereby obtained, which improves the generalization capability of the pig farm monitoring model and allows the categories of targets entering and leaving the pig farm to be identified more accurately.
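A non-authoritative sketch of these editing modes with Pillow and NumPy is given below. The 60%-100% crop ratio follows the embodiment; the translation range, noise level and rotation range are assumed for illustration, and transforming the position labels consistently with each edit is omitted for brevity:

```python
import random
import numpy as np
from PIL import Image, ImageOps

def random_crop(im):
    """Random cropping: keep a window of 60% to 100% of the original size."""
    s = random.uniform(0.6, 1.0)
    w, h = int(im.width * s), int(im.height * s)
    x, y = random.randint(0, im.width - w), random.randint(0, im.height - h)
    return im.crop((x, y, x + w, y + h))

def add_noise(im, sigma=10.0):
    """Noise disturbance: add Gaussian pixel noise to probe noise robustness."""
    arr = np.asarray(im).astype(np.float32)
    arr += np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

def augment(im):
    """Apply one randomly chosen editing mode to a fused image."""
    ops = [
        ImageOps.mirror,                                  # flip: left-right
        ImageOps.flip,                                    # flip: up-down
        lambda x: x.rotate(random.choice([90, -90])),     # flip: 90-degree turn
        random_crop,                                      # random cropping
        lambda x: x.transform(x.size, Image.AFFINE,       # translation transform
                              (1, 0, random.randint(-20, 20),
                               0, 1, random.randint(-20, 20))),
        lambda x: ImageOps.scale(x, random.uniform(0.8, 1.2)),  # scale transform
        add_noise,                                        # noise disturbance
        lambda x: x.rotate(random.uniform(-30, 30)),      # rotation transform
    ]
    return random.choice(ops)(im)
```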
In one embodiment, as shown in fig. 4, a pig farm monitoring model trained with training samples obtained by the method of the present invention judges images captured at the pig farm entrances and exits, as follows:
Step S400, acquiring images captured at the pig farm entrances and exits in real time.
Step S410, inputting the captured image into the pig farm monitoring model to obtain the judgment result output by the pig farm monitoring model.
Step S420, if the judgment result is that the captured image contains a pig, sending the captured image to an administrator.
Step S430, if the judgment result is that there is no pig in the captured image, recording the captured image and its shooting time into a database.
In this embodiment, images of the pig farm entrances and exits are acquired in real time by the doorway cameras, and the pig farm monitoring model trained with training samples obtained by the method of the present invention judges the categories of living beings in the captured images. If a pig is judged to be present in an image, the image is sent to an administrator for confirmation, so that the administrator can check whether the entry or exit is normal or abnormal. If no pig is judged to be present, the image and its shooting time are recorded into a database for storage, which facilitates later information tracing of people, vehicles or other living beings entering and leaving the pig farm.
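Put together, the deployed monitoring flow of steps S400 to S430 reduces to a loop like the following sketch; the camera, model, database and notification interfaces are assumed placeholders rather than any real API:

```python
import datetime

def monitor_doorway(camera, model, database, notify_admin):
    """Judge each doorway image in real time with the trained monitoring model
    (all four arguments are assumed, illustrative interfaces)."""
    while True:
        image = camera.capture()              # real-time image of the doorway
        contains_pig = model.judge(image)     # pig farm monitoring model output
        if contains_pig:
            notify_admin(image)               # administrator confirms normal/abnormal access
        else:
            database.record(image=image,      # enables later tracing of people,
                            shot_time=datetime.datetime.now())  # vehicles, other beings
```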
Illustratively, single Gaussian modeling is applied to the first image captured at a pig farm doorway to obtain a modeled background image of the doorway. The distance and angle information of the feature points in each captured image is matched in real time against the distance and angle information of the same points in the modeled background image; if they are inconsistent, it is judged that the camera has been moved by someone.
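The camera-movement check could look like the sketch below. The application matches the distance and angle of feature points against a background image obtained by single Gaussian modeling; this version substitutes OpenCV's ORB keypoint matching as one concrete way to compare feature points, so it illustrates the idea rather than the exact method:

```python
import cv2
import numpy as np

def camera_moved(background: np.ndarray, frame: np.ndarray,
                 max_shift: float = 5.0) -> bool:
    """Match keypoints between the modeled background and the live frame;
    a large median displacement suggests the camera was moved."""
    to_gray = lambda im: cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) if im.ndim == 3 else im
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(to_gray(background), None)
    kp2, des2 = orb.detectAndCompute(to_gray(frame), None)
    if des1 is None or des2 is None:
        return False                          # too few features to decide
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if not matches:
        return False
    shifts = [np.hypot(kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0],
                       kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1])
              for m in matches]
    return float(np.median(shifts)) > max_shift   # threshold in pixels (assumed)
```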
It should be understood that, although the steps in the flowcharts of fig. 1, 2, and 4 are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 1, 2, and 4 may include a plurality of steps or stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily sequential, but may be performed in turn or alternately with at least some of the other steps or stages.
In one embodiment, as shown in fig. 5, there is provided a training apparatus for a pig farm monitoring model, comprising: an image acquisition module 901, an image fusion module 902, an image enhancement module 903, and an image training module 904, wherein:
The image acquisition module 901 is configured to acquire biological image data, pig outline data and pig farm image data, where the biological image data includes a biological image of a living being in a pig farm and corresponding category information and position information, the biological image includes at least a pig image, the pig outline data includes a pig outline and corresponding angle information, and the pig farm image data includes at least a pig farm image;
The image fusion module 902 is configured to fuse each pig farm image data with one of the biological image data and the pig outline data to form corresponding fused image data, where the fused image data includes a fusion image of the biological image or the pig outline and the pig farm image, and corresponding category information and location information;
the image enhancement module 903 is configured to process the fused image in multiple editing modes to obtain an amplified image and corresponding category information and position information;
The image training module 904 is configured to train the pig farm monitoring model by using the amplified image as a training sample and the category information and the position information as training labels.
For specific limitations of the pig farm monitoring model training apparatus, reference may be made to the limitations of the pig farm monitoring model training method above, which are not repeated here. The modules in the training apparatus may be implemented in whole or in part by software, hardware or a combination thereof. The modules may be embedded in or independent of the processor in the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module. It should be noted that the division into modules in the embodiments of the present application is schematic and is merely a logical function division; other divisions may be adopted in actual implementation.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a method of training a pig farm monitoring model.
It will be appreciated by those skilled in the art that the structure shown in FIG. 6 is merely a block diagram of a part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that all or part of the methods in the above embodiments may be implemented by a computer program stored on a non-volatile computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The foregoing examples represent only a few embodiments of the application and are described in detail, but they are not therefore to be construed as limiting the scope of the application. It should be noted that several variations and modifications may be made by those skilled in the art without departing from the spirit of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the present application shall be determined by the appended claims.

Claims (10)

1. A method of training a pig farm monitoring model, the method comprising:
acquiring biological image data, pig outline data and pig farm image data, wherein the biological image data comprises a biological image of living beings in a pig farm and corresponding category information and position information, the biological image at least comprises a pig image, the pig outline data comprises a pig outline and corresponding angle information, and the pig farm image data at least comprises a pig farm image;
processing each pig farm image data as follows:
selecting one of the biological image data and the pig outline data for fusion;
if the biological image data is selected, fusing the biological image in the biological image data with the pig farm image in the pig farm image data according to the following formula to obtain a fused image:
xij=λ*aij+(1-λ)*bij;
wherein xij is the pixel value at coordinate (i, j) in the fused image, λ is a set parameter, aij is the pixel value at coordinate (i, j) in the biological image, and bij is the pixel value at coordinate (i, j) in the pig farm image;
and determining the category information corresponding to the fused image according to the following formula:
Xij=λ*Aij+(1-λ)*Bij;
wherein Xij is a probability matrix of each category of the fused image, λ is the set parameter, Aij is a probability matrix of each category of the biological image, and Bij is a probability matrix of each category of the pig farm image;
taking the position information corresponding to the biological image as the position information corresponding to the fused image;
if the pig outline data is selected, determining the fusion number of pig outlines and the fusion position and fusion angle of each pig outline;
fusing the pig outlines, in the determined fusion number, into the corresponding fusion positions in the pig farm image at the corresponding fusion angles to form a fused image;
and taking pig as the category information corresponding to the fused image and the fusion position of each pig outline as the position information corresponding to the fused image;
processing the fused image in a plurality of editing modes to obtain an amplified image and corresponding category information and position information;
and training the pig farm monitoring model with the amplified image as a training sample and the corresponding category information and position information as training labels.
2. The method of claim 1, wherein the acquiring biological image data, pig outline data, and pig farm image data comprises:
acquiring images of cats, dogs, people and vehicles shot outside the pig farm, together with correspondingly marked category information and position information, to form biological image data;
acquiring pig images shot in the pig farm, together with correspondingly marked category information and position information, to form biological image data;
acquiring pig outlines cut from the pig images, together with correspondingly marked angle information, to form pig outline data;
and acquiring scene images shot at the pig farm entrances and exits to form pig farm image data.
3. The method of claim 2, wherein the pig farm image comprises a biological image, and wherein the pig farm image data further comprises category information and location information corresponding to the biological image.
4. The method of claim 2, wherein λ is a percentage of the sum of the amount of biological image data and the amount of pig farm image data.
5. A method according to any one of claims 1 to 3, wherein the selection of one of the biological image data and the pig outline data, the determination of the fusion number of pig outlines, and the determination of the fusion position of each pig outline are all performed randomly.
6. A method according to any one of claims 1 to 3, wherein the editing modes comprise one or more of flip transformation, random cropping, translation transformation, scale transformation, noise disturbance and rotation transformation.
7. A method according to any one of claims 1 to 3, further comprising:
acquiring monitoring images shot at the pig farm entrances and exits in real time;
inputting the monitoring image into the pig farm monitoring model to obtain a judgment result output by the pig farm monitoring model;
if the judgment result is that the monitoring image contains a pig, sending the monitoring image to an administrator;
and if the judgment result is that there is no pig in the monitoring image, recording the monitoring image and its shooting time into a database.
8. A device for training a pig farm monitoring model, the device comprising:
an image acquisition module, used for acquiring biological image data, pig outline data and pig farm image data, wherein the biological image data comprises a biological image of living beings in a pig farm and corresponding category information and position information, the biological image at least comprises a pig image, the pig outline data comprises a pig outline and corresponding angle information, and the pig farm image data at least comprises a pig farm image;
an image fusion module, used for processing each pig farm image data as follows: selecting one of the biological image data and the pig outline data for fusion; if the biological image data is selected, fusing the biological image in the biological image data with the pig farm image in the pig farm image data according to the following formula to obtain a fused image:
xij=λ*aij+(1-λ)*bij;
wherein xij is the pixel value at coordinate (i, j) in the fused image, λ is a set parameter, aij is the pixel value at coordinate (i, j) in the biological image, and bij is the pixel value at coordinate (i, j) in the pig farm image;
and determining the category information corresponding to the fused image according to the following formula:
Xij=λ*Aij+(1-λ)*Bij;
wherein Xij is a probability matrix of each category of the fused image, λ is the set parameter, Aij is a probability matrix of each category of the biological image, and Bij is a probability matrix of each category of the pig farm image;
taking the position information corresponding to the biological image as the position information corresponding to the fused image;
if the pig outline data is selected, determining the fusion number of pig outlines and the fusion position and fusion angle of each pig outline;
fusing the pig outlines, in the determined fusion number, into the corresponding fusion positions in the pig farm image at the corresponding fusion angles to form a fused image;
and taking pig as the category information corresponding to the fused image and the fusion position of each pig outline as the position information corresponding to the fused image;
an image enhancement module, used for processing the fused image in a plurality of editing modes to obtain an amplified image and corresponding category information and position information;
and an image training module, used for training the pig farm monitoring model with the amplified image as a training sample and the category information and position information as training labels.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202110392230.7A 2021-04-13 2021-04-13 Pig farm monitoring model training method, device, computer equipment and storage medium Active CN113505630B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110392230.7A CN113505630B (en) 2021-04-13 2021-04-13 Pig farm monitoring model training method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110392230.7A CN113505630B (en) 2021-04-13 2021-04-13 Pig farm monitoring model training method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113505630A (en) 2021-10-15
CN113505630B (en) 2024-07-09

Family

ID=78008358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110392230.7A Active CN113505630B (en) 2021-04-13 2021-04-13 Pig farm monitoring model training method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113505630B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399717A (en) * 2022-01-21 2022-04-26 湖北中新开维现代牧业有限公司 Method and system for monitoring the movement of pigs in large stalls in nursery and fattening

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945456A (en) * 2017-12-18 2018-04-20 翔创科技(北京)有限公司 Livestock monitoring system
CN110889824A (en) * 2019-10-12 2020-03-17 北京海益同展信息科技有限公司 Sample generation method and device, electronic equipment and computer readable storage medium
CN111382758A (en) * 2018-12-28 2020-07-07 杭州海康威视数字技术股份有限公司 Training image classification model, image classification method, device, equipment and medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4109748A1 (en) * 1991-03-25 1992-10-01 Bockisch Franz Josef Dipl Ing Stall system for cattle - includes individual or combined facilities to which access is gained by identification device on beast and stall
GB0822580D0 (en) * 2008-12-11 2009-01-14 Faire Ni Ltd An animal monitoring system and method
CN105809711B (en) * 2016-03-02 2019-03-12 华南农业大学 A method and system for extracting big data of pig movement based on video tracking
CN108090841A (en) * 2017-12-18 2018-05-29 翔创科技(北京)有限公司 Livestock asset monitor method, computer program, storage medium and electronic equipment
CN109632059B (en) * 2018-12-13 2021-05-14 北京小龙潜行科技有限公司 Intelligent pig raising method and system, electronic equipment and storage medium
CN109658414A (en) * 2018-12-13 2019-04-19 北京小龙潜行科技有限公司 A kind of intelligent checking method and device of pig
CN110022379A (en) * 2019-04-23 2019-07-16 翔创科技(北京)有限公司 A kind of livestock monitoring system and method
CN210442661U (en) * 2019-10-10 2020-05-01 河北农业大学 A Lora-based remote monitoring system for cowshed
CN110595547B (en) * 2019-10-24 2022-02-22 重庆小富农康农业科技服务有限公司 Pig farm abnormal operation monitoring devices
CN111161214B (en) * 2019-12-09 2023-05-05 江苏大学 A system and method for pig weight measurement and drinking behavior recognition based on binocular vision
CN111460729A (en) * 2020-03-20 2020-07-28 淮阴工学院 An intelligent detection system for bridge deformation
CN112084917B (en) * 2020-08-31 2024-06-04 腾讯科技(深圳)有限公司 Living body detection method and device
CN112232349B (en) * 2020-09-23 2023-11-03 成都佳华物链云科技有限公司 Model training method, image segmentation method and device
CN112348765A (en) * 2020-10-23 2021-02-09 深圳市优必选科技股份有限公司 Data enhancement method and device, computer readable storage medium and terminal equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945456A (en) * 2017-12-18 2018-04-20 翔创科技(北京)有限公司 Livestock monitoring system
CN111382758A (en) * 2018-12-28 2020-07-07 杭州海康威视数字技术股份有限公司 Training image classification model, image classification method, device, equipment and medium
CN110889824A (en) * 2019-10-12 2020-03-17 北京海益同展信息科技有限公司 Sample generation method and device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN113505630A (en) 2021-10-15

Similar Documents

Publication Publication Date Title
US8744125B2 (en) Clustering-based object classification
CN111325769B (en) Target object detection method and device
CN111753609A (en) Target identification method and device and camera
Bedruz et al. Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach
Li et al. PETS 2015: datasets and challenge
Gupta et al. Computer vision based animal collision avoidance framework for autonomous vehicles
CN115240168A (en) Perception result obtaining method and device, computer equipment and storage medium
CN113592902A (en) Target tracking method and device, computer equipment and storage medium
CN113505630B (en) Pig farm monitoring model training method, device, computer equipment and storage medium
CN112084892B (en) Road abnormal event detection management device and method thereof
EP2447912B1 (en) Method and device for the detection of change in illumination for vision systems
Sirisha et al. Nam-yolov7: An improved yolov7 based on attention model for animal death detection
CN115830399A (en) Classification model training method, apparatus, device, storage medium, and program product
Nam Loitering detection using an associating pedestrian tracker in crowded scenes
CN117218109B (en) Vehicle lateral mosaic image integrity detection method, system, equipment and medium
US20240185605A1 (en) System and Method for Detecting and Explaining Anomalies in Video of a Scene
CN117351364A (en) Automatic identification method and system for artificial interference behavior AI of wetland protection zone
CN112297011B (en) Obstacle avoidance method and device for agriculture and forestry robot, computer equipment and storage medium
CN113793250B (en) Pose evaluation method, pose determination method, corresponding device and electronic equipment
CN113392678A (en) Pedestrian detection method, device and storage medium
CN113963502B (en) All-weather illegal behavior automatic inspection method and system
Chahal In Situ Detection of Road Lanes Using Raspberry Pi
CN118658143B (en) Vehicle identification method, system and computer readable storage medium
US20190096045A1 (en) System and Method for Realizing Increased Granularity in Images of a Dataset
CN112927455B (en) Intelligent monitoring method for parking lot and application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant