
CN114005011B - Method, system, equipment and medium for determining in-loop state of live stock - Google Patents


Info

Publication number
CN114005011B
CN114005011B (granted publication of application CN202111289090.7A)
Authority
CN
China
Prior art keywords
initial
target
live stock
subsequent
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111289090.7A
Other languages
Chinese (zh)
Other versions
CN114005011A (en)
Inventor
李强
庞殊杨
田君仪
吴凯尧
刘凯然
郭婧
封真雨
赵莹
李颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Saidi Yinong Data Technology Co ltd
CISDI Chongqing Information Technology Co Ltd
Original Assignee
Chongqing Saidi Yinong Data Technology Co ltd
CISDI Chongqing Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Saidi Yinong Data Technology Co ltd, CISDI Chongqing Information Technology Co Ltd filed Critical Chongqing Saidi Yinong Data Technology Co ltd
Priority to CN202111289090.7A priority Critical patent/CN114005011B/en
Publication of CN114005011A publication Critical patent/CN114005011A/en
Application granted granted Critical
Publication of CN114005011B publication Critical patent/CN114005011B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Forestry; Mining

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Animal Husbandry (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Mining & Mineral Resources (AREA)
  • General Business, Economics & Management (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Economics (AREA)
  • Agronomy & Crop Science (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

In the method, system, device and medium for determining the in-loop state of live stock provided by the invention, a plurality of sample images containing sample live stock are acquired, the positions of the sample live stock in the sample images are labeled, and a data set is constructed. A preset basic model is trained on the data set to obtain a trained live stock position detection model, with which the in-loop state of live stock in a given image can be detected. Automatic in-loop detection of live stock is thus realized by machine: even when the live stock are numerous and fast-moving, their position information and in-loop state can be detected accurately, cost is reduced, the criterion for determining the in-loop state is unified, and efficient operation of animal husbandry is facilitated.

Description

Method, system, equipment and medium for determining in-loop state of live stock
Technical Field
The invention relates to the technical field of livestock raising, and in particular to a method, system, device and medium for determining the in-loop state of live stock.
Background
In the livestock breeding industry, free-range breeding is a common mode: live animals are let out of their loops (pens) to forage outside, and when a certain time is reached or the staff judge conditions suitable, the animals must be brought back into the loop. Driving live animals in and out through the gate is also one of the key links in the operation of the livestock market.
At present, the entry and exit of target live stock is usually handled by manual management, with staff counting the animals as they pass in or out. When the animals are numerous and fast-moving, however, manually determining whether each animal is in the loop is difficult and costly, and because different staff apply different personal standards, the judgment of the in-loop state is inconsistent, which prevents efficient operation of animal husbandry.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention provides a method, system, device and medium for determining the in-loop state of live stock, so as to solve the above technical problems.
The invention provides a method for determining the in-loop state of live stock, comprising:
acquiring a plurality of sample images containing sample live stock, and labeling the positions of the sample live stock in the sample images to construct a data set;
inputting the data set into a preset basic model, and training to obtain a trained live stock position detection model;
acquiring an initial to-be-identified image of a target live stock at a first moment, and inputting the initial to-be-identified image into the trained live stock position detection model to obtain an initial target position of the target live stock;
and determining a first in-loop state of the target live stock at the first moment according to a preset gate position and the initial target position.
Optionally, the determining the first in-loop state of the target live stock at the first moment according to the preset gate position and the initial target position includes:
acquiring a preset gate identification interval;
if the target live stock is located within the preset gate identification interval and the initial target position exceeds the preset gate position, the first in-loop state is in the loop;
and if the target live stock is located outside the preset gate identification interval, or the initial target position does not exceed the preset gate position, the first in-loop state is out of the loop.
Optionally, the method further comprises: acquiring a subsequent to-be-identified image of the target live stock at a second moment, and inputting the subsequent to-be-identified image into the trained live stock position detection model to obtain a subsequent target position of the target live stock, wherein the second moment is later than the first moment;
determining a second in-loop state of the target live stock at the second moment according to the subsequent target position and the preset gate position;
and determining the movement direction of the target live stock according to the initial target position and the subsequent target position.
Optionally, the determining the movement direction of the target live stock according to the initial target position and the subsequent target position includes any one of the following:
if both the initial target position and the subsequent target position exceed the preset gate position, acquiring an initial distance and a subsequent distance respectively, where the initial distance is the distance between the initial target position and the preset gate position and the subsequent distance is the distance between the subsequent target position and the preset gate position: if the initial distance is larger than the subsequent distance, the movement direction is the loop-exit direction, and if the initial distance is smaller than the subsequent distance, the movement direction is the loop-entry direction;
if the initial target position exceeds the preset gate position and the subsequent target position does not, the movement direction is the loop-exit direction;
if the initial target position does not exceed the preset gate position and the subsequent target position does, the movement direction is the loop-entry direction;
or acquiring an initial coordinate and a subsequent coordinate respectively, where the initial coordinate is the coordinate of the initial target position, the subsequent coordinate is the coordinate of the subsequent target position, and coordinate values increase along the loop-entry direction: if the initial coordinate is larger than the subsequent coordinate, the movement direction is the loop-exit direction, and if the initial coordinate is smaller than or equal to the subsequent coordinate, the movement direction is the loop-entry direction.
Optionally, the method further comprises:
and determining the entry/exit state of the target live stock according to the first in-loop state and the second in-loop state.
Optionally, if at least one of the initial to-be-identified image and the subsequent to-be-identified image contains at least two target live stock, the same target live stock in the two images is determined as follows:
assigning identity information to each target live stock in the initial to-be-identified image;
acquiring an initial motion trajectory of each target live stock in the initial to-be-identified image and a subsequent motion trajectory of each target live stock in the subsequent to-be-identified image;
determining, from the initial and subsequent motion trajectories, the matching degree between each target live stock in the subsequent to-be-identified image and each target live stock in the initial to-be-identified image;
assigning the identity information of the target live stock in the initial to-be-identified image whose matching degree exceeds a preset matching degree to the corresponding target live stock in the subsequent to-be-identified image;
two target live stock carrying the same identity information in the initial and subsequent to-be-identified images are the same target live stock.
Optionally, the method further comprises:
acquiring the historical number of live stock in the loop;
and determining the current number of live stock in the loop according to the entry/exit state and the historical number of live stock in the loop.
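The count update described in the two steps above can be sketched as follows. This is a hypothetical helper, not taken from the patent: the function name and the event labels "enter"/"exit" are illustrative assumptions, and the non-negative clamp is an added safety choice.

```python
# Hypothetical sketch: the current in-loop count is the historical count
# adjusted by each detected entry/exit event. Names are illustrative.
def update_count(historical_count: int, events: list[str]) -> int:
    """Apply a sequence of 'enter'/'exit' events to the historical count."""
    count = historical_count
    for event in events:
        if event == "enter":
            count += 1
        elif event == "exit":
            count = max(0, count - 1)  # the count cannot go negative
    return count
```

For example, a historical count of 10 followed by two entries and one exit yields 11.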
The invention also provides a live stock in-loop state determination system, comprising:
a sample image acquisition module, configured to acquire a plurality of sample images containing sample live stock, label the positions of the sample live stock in the sample images, and construct a data set;
a training module, configured to input the data set into a preset basic model and train it to obtain a trained live stock position detection model;
an initial target position determination module, configured to acquire an initial to-be-identified image of a target live stock at a first moment and input it into the trained live stock position detection model to obtain an initial target position of the target live stock;
and an in-loop state determination module, configured to determine a first in-loop state of the target live stock at the first moment according to a preset gate position and the initial target position.
The invention also provides an electronic device, which comprises a processor, a memory and a communication bus;
The communication bus is used for connecting the processor and the memory;
the processor is configured to execute a computer program stored in the memory to implement the method according to any one of the embodiments described above.
The present invention also provides a computer-readable storage medium, having stored thereon a computer program,
The computer program is configured to cause the computer to perform the method according to any one of the embodiments described above.
The invention has the beneficial effects: in the method, system, device and medium for determining the in-loop state of live stock provided by the invention, a plurality of sample images containing sample live stock are acquired, the positions of the sample live stock in the sample images are labeled, and a data set is constructed. A preset basic model is trained on the data set to obtain a trained live stock position detection model, with which the in-loop state of live stock in a given image can be detected. Automatic in-loop detection of live stock is thus realized by machine: even when the live stock are numerous and fast-moving, their position information and in-loop state can be detected accurately, cost is reduced, the criterion for determining the in-loop state is unified, and efficient operation of animal husbandry is facilitated.
Drawings
Fig. 1 is a schematic flow chart of a method for determining the in-loop state of live stock according to a first embodiment of the present invention;
Fig. 2 is a schematic view of a loop according to the first embodiment of the present invention;
Fig. 3 is a schematic diagram of a preset gate identification interval and a preset gate position according to the first embodiment of the present invention;
Fig. 4 is another schematic flow chart of the method for determining the in-loop state of live stock according to the first embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a live stock in-loop state determination system according to a second embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention through specific examples; other advantages and effects of the invention will be readily apparent to those skilled in the art from this disclosure. The invention may also be practiced or applied in other, different embodiments, and the details in this description may be modified or varied in various respects without departing from the spirit and scope of the invention. It should be noted that, in the absence of conflict, the following embodiments and the features in them may be combined with each other.
It should also be noted that the illustrations provided in the following embodiments merely depict the basic concept of the invention schematically: the drawings show only components related to the invention rather than the number, shape and size of components in an actual implementation, where the form, quantity and proportion of each component may be changed arbitrarily and the component layout may be more complicated.
In the following description, numerous details are set forth to provide a thorough explanation of the embodiments; it will be apparent to one skilled in the art, however, that the embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block-diagram form rather than in detail, to avoid obscuring the embodiments.
Example 1
As shown in fig. 1, the present embodiment provides a method for determining the in-loop state of live stock, comprising:
Step S101: and acquiring a plurality of sample images comprising the sample live stock, and carrying out position labeling on the sample live stock in the sample images to construct a data set.
Optionally, the position labeling of a sample live stock may use a rectangular target identification frame, e.g. labeling the upper-left and lower-right corners of the sample live stock. Alternatively, the center point of the live stock may be labeled (the center point may be the midpoint of the longest transverse segment and of the longest longitudinal segment, or any position chosen by those skilled in the art). As another example, the lower-left and upper-right corners of a sample live stock may be labeled.
The sample live stock may be of the same species or of different species.
The position labeling may be done manually and/or by machine target detection, for example by using instance segmentation and similar techniques.
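The corner-labeling schemes above can be sketched as follows. This is an illustrative sketch, not the patent's annotation tool: it normalizes any pair of opposite corners into the [y1, x1, y2, x2] layout that equation (1) later uses, and derives a center-point label from the box.

```python
# Illustrative sketch of the corner-labeling schemes. Each annotation is
# stored as [y_min, x_min, y_max, x_max]; function names are assumptions.
def box_from_corners(corner_a, corner_b):
    """Build a normalized [y_min, x_min, y_max, x_max] box from any two
    opposite corners (upper-left/lower-right or lower-left/upper-right).
    Corners are (x, y) pairs."""
    (xa, ya), (xb, yb) = corner_a, corner_b
    return [min(ya, yb), min(xa, xb), max(ya, yb), max(xa, xb)]

def center_point(box):
    """Center-point label: midpoint of the box in both axes, as (x, y)."""
    y1, x1, y2, x2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)
```

Normalizing the two labeling conventions into one box layout means downstream logic (ROI tests, threshold tests) never has to know which corners the annotator clicked.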
In one embodiment, an image acquisition device may be installed in advance directly above the loop gate and an image acquisition area set; the same device may also be used to acquire the initial and subsequent to-be-identified images. Of course, the device that acquires the to-be-identified images may differ from the device that acquires the sample images. The image acquisition area may be set by those skilled in the art.
Step S102: inputting the data set into a preset basic model, and training to obtain a trained live stock position detection model.
The preset basic model may be any model preset by those skilled in the art, such as a preset neural network model.
Training the preset basic model on the data set may use an existing training method, which is not limited here.
Step S103: acquiring an initial to-be-identified image of a target live stock at a first moment, and inputting the initial to-be-identified image into the trained live stock position detection model to obtain an initial target position of the target live stock.
Here the initial target position includes, but is not limited to, position coordinate information in the initial to-be-identified image, which may be the corner-point information of the target live stock's detection frame.
When the initial to-be-identified image contains a plurality of target live stock, the trained live stock position detection model can output the initial target position of each of them.
Optionally, a preset gate identification interval may be set in advance: before the initial to-be-identified image is input into the trained live stock position detection model, the image is cropped according to the preset gate identification interval, only the part inside the interval is kept, and the cropped image is input into the model.
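The pre-inference cropping step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the image is modeled as a list of pixel rows, and the ROI uses the [y_min, x_min, y_max, x_max] layout of equation (2).

```python
# Minimal sketch: keep only the part of the frame inside the preset gate
# identification interval (ROI) before running detection. The image is a
# list of pixel rows; names are illustrative assumptions.
def crop_to_roi(image, roi):
    """roi = [y_min, x_min, y_max, x_max]; returns the cropped sub-image."""
    y_min, x_min, y_max, x_max = roi
    return [row[x_min:x_max] for row in image[y_min:y_max]]
```

Cropping first shrinks the detector's input and excludes animals far from the gate, which is exactly the motivation given later for the identification interval.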
Step S104: determining a first in-loop state of the target live stock at the first moment according to a preset gate position and the initial target position.
The preset gate position may be determined from the position of the current gate in the initial to-be-identified image.
If the initial and subsequent to-be-identified images are acquired by an image acquisition device at the same position and angle, a single preset gate position can be set directly, without re-identifying it each time.
The initial and subsequent to-be-identified images may also be acquired by image acquisition devices at different positions and/or angles, in which case a corresponding preset gate position is set for each device.
In one embodiment, determining the first in-loop state of the target live stock at the first moment according to the preset gate position and the initial target position comprises:
acquiring a preset gate identification interval;
if the target live stock is located within the preset gate identification interval and the initial target position exceeds the preset gate position, the first in-loop state is in the loop;
if the target live stock is located outside the preset gate identification interval, or the initial target position does not exceed the preset gate position, the first in-loop state is out of the loop.
The preset gate identification interval may be located within the image acquisition area of the device that acquires the initial to-be-identified image, and may simply be that image acquisition area.
Sometimes, to control the speed at which live stock enter and exit the loop, the gate is made narrow, so many animals mill around near the gate. To reduce their influence on the determination, the preset gate identification interval can be positioned so that it coincides with the gate as closely as possible.
It should be noted that the preset gate identification interval must at least contain the position of the gate.
Referring to figs. 2 and 3, fig. 2 shows an example of a loop, and fig. 3 shows an example of a preset gate identification interval (the identification interval in the figure) and a preset gate position (the preset discrimination position in the figure). The preset gate identification interval and preset gate position may be set elsewhere by those skilled in the art; the figures are only examples.
Fig. 2 also gives an example of the position of an image acquisition device (the camera in the figure). Of course, the device may be located at any other position deemed suitable by those skilled in the art, provided its shooting region contains the gate.
Installing the camera directly above the entry/exit gate lets the image acquisition area completely cover the gate region.
Optionally, whether the initial target position exceeds the preset gate position may be judged from the vertical distances of the initial target position and the preset gate position to the rear of the loop, the rear being the side of the loop opposite the gate: if the vertical distance of the initial target position is larger than that of the preset gate position, the initial target position has not exceeded the preset gate position; otherwise it has.
Optionally, the judgment may instead use the initial coordinate and the preset gate coordinate: with coordinate values set to increase from outside the loop toward the inside, whether the initial target position exceeds the preset gate position follows from comparing the coordinates.
In one embodiment, one representation of the initial target position is:
livestock(n) = [y_n1, x_n1, y_n2, x_n2]    (1)
where livestock(n) is the position of the n-th identified live stock, (x_n1, y_n1) is the lower-left corner of its position identification frame and (x_n2, y_n2) is its upper-right corner.
In this embodiment, the preset gate identification interval is obtained by setting an identification area according to the actual position of the gate:
ROI = [y_min, x_min, y_max, x_max]    (2)
where ROI is the preset gate identification interval, (x_min, y_min) is the lower-left corner of its identification frame and (x_max, y_max) is its upper-right corner. By comparing the initial target position of a target live stock with the ROI, a judgment enter_roi(n) of whether the target live stock is within the preset gate identification interval is obtained.
In this embodiment, the first in-loop state of the target live stock at the first moment is determined from the preset gate position and the initial target position by a logic discriminant with a set discrimination threshold y_threshold: the judgment enter_region (the in-loop state) is true only when the coordinate values of the target live stock's position frame completely exceed y_threshold, and false otherwise.
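The two discriminants just described can be sketched as code. This is a hedged reconstruction: the original display formulas for enter_roi and enter_region are not reproduced in this text, so the exact comparisons below (box fully inside the ROI; both y coordinates strictly above the threshold, with coordinates assumed to increase in the loop-entry direction) are inferred from the surrounding prose.

```python
# Hedged reconstruction of the discriminants. A box is
# [y1, x1, y2, x2] as in equation (1); the ROI is
# [y_min, x_min, y_max, x_max] as in equation (2).
def enter_roi(box, roi):
    """True if the detection box lies entirely inside the preset gate
    identification interval."""
    y1, x1, y2, x2 = box
    ry1, rx1, ry2, rx2 = roi
    return ry1 <= y1 and rx1 <= x1 and y2 <= ry2 and x2 <= rx2

def enter_region(box, roi, y_threshold):
    """In the loop only when the target is inside the ROI and *both* y
    coordinates of its position frame fully exceed y_threshold."""
    y1, x1, y2, x2 = box
    return enter_roi(box, roi) and y1 > y_threshold and y2 > y_threshold
```

Requiring both y coordinates to pass the threshold matches the text's condition that the position frame "completely" exceed y_threshold, so an animal straddling the gate is not yet counted as in the loop.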
In one embodiment, the method further comprises:
acquiring a subsequent to-be-identified image of the target live stock at a second moment, and inputting it into the trained live stock position detection model to obtain a subsequent target position of the target live stock, the second moment being later than the first moment;
determining a second in-loop state of the target live stock at the second moment according to the subsequent target position and the preset gate position;
and determining the movement direction of the target live stock according to the initial target position and the subsequent target position.
The time difference between the first and second moments may be set by those skilled in the art with reference to the moving speed, average body length, etc. of the live stock; in other words, it may be determined from the average body length, moving speed and similar quantities. Accurately tuning this time difference reduces the amount of image processing while preserving the identification accuracy for the live stock.
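One plausible way to derive the time difference from body length and speed is sketched below. The formula and the safety factor are assumptions for illustration, not given in the patent: the idea is simply that sampling faster than the time an animal needs to travel a fraction of its own body length guarantees a gate crossing cannot be missed between frames.

```python
# Illustrative sketch (assumed formula, not from the patent): sample at
# roughly the time an animal needs to cover half its body length.
def frame_interval(avg_body_length_m: float, avg_speed_m_s: float,
                   safety_factor: float = 0.5) -> float:
    """Seconds between the first and second moments."""
    return safety_factor * avg_body_length_m / avg_speed_m_s
```

For a 1 m animal moving at 2 m/s this gives a 0.25 s interval, i.e. about 4 frames per second instead of processing every camera frame.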
The determination of the second in-loop state may be similar to that of the first in-loop state in the above embodiment and is not limited here.
In one embodiment, determining the movement direction of the target live stock from the initial target position and the subsequent target position comprises any one of the following:
if both the initial target position and the subsequent target position exceed the preset gate position, acquiring an initial distance (between the initial target position and the preset gate position) and a subsequent distance (between the subsequent target position and the preset gate position): if the initial distance is larger than the subsequent distance, the movement direction is the loop-exit direction, and if the initial distance is smaller than the subsequent distance, the movement direction is the loop-entry direction;
if the initial target position exceeds the preset gate position and the subsequent target position does not, the movement direction is the loop-exit direction;
if the initial target position does not exceed the preset gate position and the subsequent target position does, the movement direction is the loop-entry direction;
or acquiring an initial coordinate (the coordinate of the initial target position) and a subsequent coordinate (the coordinate of the subsequent target position), with coordinate values increasing along the loop-entry direction: if the initial coordinate is larger than the subsequent coordinate, the movement direction is the loop-exit direction, and if the initial coordinate is smaller than or equal to the subsequent coordinate, the movement direction is the loop-entry direction.
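The direction rules above can be collected into one function. This is an illustrative sketch under the stated assumption that positions are scalar coordinates along the entry axis, increasing in the loop-entry direction; `gate` is the preset gate position coordinate, and the names are not from the patent.

```python
# Sketch of the direction rules: "exit" = loop-exit direction,
# "enter" = loop-entry direction. Coordinates increase toward the
# inside of the loop; a position "exceeds" the gate when it is past it.
def move_direction(initial: float, later: float, gate: float) -> str:
    beyond_init, beyond_later = initial > gate, later > gate
    if beyond_init and beyond_later:
        # both past the gate: compare distances to the gate
        return "exit" if initial - gate > later - gate else "enter"
    if beyond_init and not beyond_later:
        return "exit"   # crossed outward between the two moments
    if not beyond_init and beyond_later:
        return "enter"  # crossed inward between the two moments
    # coordinate-comparison fallback (the last rule in the list)
    return "exit" if initial > later else "enter"
```

Note that the equal-coordinate case falls into the loop-entry direction, matching the "smaller than or equal to" wording of the coordinate-based rule.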
In one embodiment, the manner of determining the direction of motion includes:
(u, v, t) = [v_n1,t, u_n1,t]    (5)
(u, v, t+1) = [v_n1,t+1, u_n1,t+1]    (6)
where (u, v, t) is the position coordinate of the n-th target live stock at moment t, and (u, v, t+1) is its position coordinate at moment t+1.
In one embodiment, the method further comprises:
determining the entry/exit state of the target live stock according to the first in-loop state and the second in-loop state.
The entry/exit state is determined as follows:
if the first in-loop state is in the loop and the second in-loop state is out of the loop, the entry/exit state is loop exit;
if the first in-loop state is out of the loop and the second in-loop state is in the loop, the entry/exit state is loop entry;
if both the first and second in-loop states are in the loop, the entry/exit state is staying in the loop;
if both the first and second in-loop states are out of the loop, the entry/exit state is wandering outside the loop.
Optionally, an entry/exit state of loop entry or loop exit can be further verified against the movement direction: if the direction is loop entry and the state is loop entry, the result is normal; if the direction is loop exit but the state is loop entry, it is abnormal; if the direction is loop exit and the state is loop exit, it is normal; and if the direction is loop entry but the state is loop exit, it is abnormal.
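The state table and the direction-based verification above can be sketched as follows. The event labels and function names are illustrative assumptions; "in" means the in-loop state is in the loop.

```python
# Sketch of the entry/exit state table and its verification against the
# movement direction ("enter"/"exit" as in the rules above).
def gate_event(first_in: bool, second_in: bool) -> str:
    if first_in and not second_in:
        return "exit"
    if not first_in and second_in:
        return "enter"
    return "stay-in" if first_in else "wander-out"

def verify(event: str, direction: str) -> bool:
    """A loop-entry event must pair with the loop-entry direction and a
    loop-exit event with the loop-exit direction; other events need no
    verification."""
    if event in ("enter", "exit"):
        return event == direction
    return True
```

An inconsistent pair (e.g. an entry event with an exit direction) flags a likely detection or matching error rather than a real crossing.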
In one embodiment, if at least one of the initial image to be identified and the subsequent image to be identified includes at least two target live stock, the determining method of the same target live stock in the initial image to be identified and the subsequent image to be identified includes:
performing identity recognition on each target livestock in the initial image to be identified;
acquiring an initial motion track of each target livestock in the initial image to be identified and a subsequent motion track of each target livestock in the subsequent image to be identified;
determining, from the initial motion tracks and the subsequent motion tracks, the matching degree between each target livestock in the subsequent image to be identified and each target livestock in the initial image to be identified;
assigning the identity of the target livestock in the initial image whose matching degree exceeds a preset matching degree to the corresponding target livestock in the subsequent image;
two target livestock carrying the same identity in the initial image and the subsequent image are thereby identified as the same animal.
In one embodiment, the motion profile includes at least one of the following features:
(u, v, γ, h, u', v', γ', h') formula (8);
wherein (u, v) is the center coordinate of the livestock's current recognition frame, γ is its aspect ratio and h is its height; u', v', γ' and h' are the corresponding rates of change (velocities) of these four quantities in the image coordinate system.
Sometimes the initial image to be identified and the subsequent image to be identified each contain several target livestock. Because the motion state and the entry/exit state are judged per animal, it then becomes essential to determine which detections in the two images correspond to the same animal. One option is to train a same-target recognition model in advance (for example, by annotating a set of target livestock images with motion tracks) and use it to decide, from the positions of the animals in the two images, whether two detections belong to the same animal. Another option is to compute the moving distance (for example, the Euclidean distance) between a target livestock M in the subsequent image and every target livestock in the initial image, and to take the initial-image animal with the smallest distance as the same animal as M.
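The second option, nearest-neighbor matching by Euclidean distance, can be sketched as follows; the function name and the (u, v) center-coordinate representation are assumptions for illustration:

```python
import math

def match_same_livestock(target_pos, initial_positions):
    """Return the index of the initial-frame animal whose (u, v) center
    is closest, by Euclidean distance, to the given subsequent-frame animal."""
    dists = [math.hypot(target_pos[0] - u, target_pos[1] - v)
             for (u, v) in initial_positions]
    return min(range(len(dists)), key=dists.__getitem__)
```

In practice a maximum-distance cutoff would also be applied, so that an animal newly appearing in the frame is not forcibly matched to a distant one.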
In one embodiment, the initial image to be identified is an initial camera image captured by a camera, the subsequent image to be identified is the next camera image captured by the camera, and one exemplary way to individually track the target live stock is:
acquiring an initial camera image, and taking the recognition frames of livestock individuals located in the preset notch identification interval as initial recognition frames;
labeling the initial recognition frame of each target livestock;
acquiring the next camera image, taking every recognition frame in the next camera image whose distance to an initial recognition frame is within a preset range as a candidate recognition frame, and extracting features from the candidate recognition frames to determine the same target livestock, thereby matching the same animal across the two images.
One way to determine the same target livestock from the candidate recognition frames is to extract the initial features of the target livestock in the initial camera image and the features of the target livestock in each candidate recognition frame of the next camera image, compare them, and take the candidate with the highest similarity as the same animal as the target livestock in the initial camera image.
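Feature comparison of this kind is commonly done with cosine similarity over appearance feature vectors; the sketch below assumes the features have already been extracted (by whatever embedding network is in use), and all names are illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_candidate(initial_feature, candidate_features):
    """Index of the candidate recognition frame most similar to the
    initial frame's feature vector."""
    return max(range(len(candidate_features)),
               key=lambda i: cosine_similarity(initial_feature,
                                               candidate_features[i]))
```

A similarity floor would normally be enforced as well, so that a poor best match is rejected rather than accepted.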
In one embodiment, the method further comprises:
acquiring the historical number of livestock in the pen;
and determining the current number of livestock in the pen according to the entry/exit state and the historical number of livestock in the pen.
The historical number of livestock in the pen may be a previously measured count; it may be 0 when the pen is initially empty, or the customary headcount of the pen.
The historical count is updated in real time: whenever an entry or exit event occurs, the resulting current count becomes the new historical count for the next event.
One way to determine the current number of livestock in the pen from the entry/exit state and the historical count is to add one to the historical count whenever a target livestock is determined to have entered, and to subtract one whenever a target livestock is determined to have exited.
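The increment/decrement rule can be sketched as a pure update function; the event labels are assumptions, and the clamp at zero is a defensive choice not stated in the original:

```python
def update_pen_count(current_count: int, event: str) -> int:
    """Update the in-pen headcount from one entry/exit event."""
    if event == "entered":
        return current_count + 1
    if event == "exited":
        return max(current_count - 1, 0)  # never allow a negative count
    return current_count                  # staying / loitering: no change
```

Applying this after every detected crossing keeps the historical count current without ever re-counting the whole pen.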
In one embodiment, whether the target livestock enters the pen can be determined from the in-loop state and the movement direction, and each time a target livestock is determined to have entered, the count of animals that have entered is incremented.
In one embodiment, the livestock often carry identity marks; for example, the ears of cattle and sheep bear tags or other characters identifying the animal. By combining image recognition, the identification information carried by the target livestock can be recognized in the initial and subsequent images to be identified, so that while the entering and exiting animals are being counted, the system also records exactly which animal entered or exited. This makes statistical management and asset inventory of livestock considerably more convenient.
This embodiment provides a method for determining the in-pen state of livestock. Several sample images of sample livestock are acquired and the positions of the animals in them are labeled to construct a data set; a preset basic model is trained on the data set to obtain a trained livestock position detection model; and the in-pen state of the livestock in a given image is then detected with the trained model. Whether an animal is in the pen is thus detected automatically by machine, and position information and in-pen state can be detected accurately even when the animals are numerous and fast-moving. This reduces cost, unifies the criterion for determining the in-pen state, and helps animal husbandry operate efficiently.
Optionally, the number of animals entering and leaving can be checked by determining the motion state and entry/exit state of the target livestock. An image acquisition device such as a camera is arranged at the pen gate to define an image acquisition area (the preset notch identification interval); an image data set is collected and the livestock position detection model is trained on it; the model is invoked to identify the positions of the target livestock; a discrimination line for entering the pen (the preset notch position) is set; the animals in the discrimination area are tracked individually; their movement directions are judged; and the number of animals entering through the gate is accumulated. The number of animals actually passing the gate can thus be checked reliably even under complex conditions, improving working efficiency and reducing cost.
The method for determining the state of a living animal in the ring provided above is described in detail by way of example, referring to fig. 4, and includes:
S401: a data acquisition device is arranged directly above the gate opening to define the image acquisition area.
It should be noted that the data acquisition device may also be arranged at other positions relative to the gate opening.
S402: a plurality of images of livestock entering and exiting the pen are collected to make a data set, and a preset basic model is trained on it to obtain the livestock position detection model.
S403: setting a notch identification area.
The notch identification area is a preset notch identification interval and is used for identifying whether the position of the live stock is located in the identification area.
S404: setting a preset notch position.
The preset notch position is used for judging the position of the live stock.
S405: and (5) tracking the target live stock individually to obtain the motion information of the target live stock.
S406: the movement direction of the target livestock and whether it has passed the preset notch position are judged, and the current number of livestock in the pen is updated.
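The overall flow S401 to S406 can be sketched as a minimal counting loop. Everything here is a stand-in: `detect` abstracts the trained detection-plus-tracking stage (yielding track IDs with positions), and `in_roi` / `past_threshold` abstract the identification-area and notch-position tests defined later; none of these names come from the original.

```python
def count_entries(frames, detect, in_roi, past_threshold, count=0):
    """frames: iterable of images; detect(frame) -> list of (track_id, (u, v));
    in_roi(pos) / past_threshold(pos): positional predicates."""
    last_pos = {}
    for frame in frames:
        for track_id, pos in detect(frame):          # S402: detection per frame
            if not in_roi(pos):                      # S403: ignore outside the ROI
                last_pos.pop(track_id, None)
                continue
            prev = last_pos.get(track_id)            # S405: per-individual tracking
            if prev is not None and not past_threshold(prev) and past_threshold(pos):
                count += 1                           # S406: crossed into the pen
            last_pos[track_id] = pos
    return count
```

An animal is counted only on the frame where its position first passes the threshold, so lingering at the gate does not inflate the count.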
In step S401 of the present embodiment, a camera is arranged directly above the entrance/exit so that the image acquisition area completely covers the gate area, in order to collect the image data set of livestock entering and exiting.
Alternatively, the camera mounting orientation is shown in FIG. 2.
In step S402 of the present embodiment, the livestock position detection model is trained from the annotated image data set. The model is invoked to acquire, in real time, the recognition frame of each target livestock at the gate; the content and format of the recognition-frame coordinates are expressed as:
livestock(n) = [y_n1, x_n1, y_n2, x_n2] formula (9);
wherein livestock(n) is the position of the n-th identified livestock; (x_n1, y_n1) is the lower-left corner of its position recognition frame and (x_n2, y_n2) is its upper-right corner.
In step S403 of the present embodiment, after the notch identification area is set according to the actual position of the gate, the target livestock position is compared with the notch identification area to judge whether the animal lies inside it. The logical discriminants are:
ROI = [y_min, x_min, y_max, x_max] formula (10);
enter_ROI(n) = (x_min ≤ x_n1) ∧ (x_n2 ≤ x_max) ∧ (y_min ≤ y_n1) ∧ (y_n2 ≤ y_max) formula (11);
wherein ROI is the configured notch identification area, with (x_min, y_min) its lower-left corner and (x_max, y_max) its upper-right corner, and enter_ROI(n) is true when the n-th target livestock is located inside the identification area.
In step S404 of the present embodiment, the preset notch position for entering the pen is set inside the identification area, and a target livestock crossing it is counted as having entered. The logical discriminant for judging that the animal has passed the preset notch position is:
enter_region(n) = (y_n1 > y_threshold) ∧ (y_n2 > y_threshold) formula (12);
wherein y_threshold is the preset notch position and enter_region(n) is the positional judgement (in-pen state) of whether the n-th target livestock has entered the pen. The animal is judged truly in the pen only when its recognition frame has completely passed y_threshold; otherwise it is judged outside.
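The two positional tests described here, the ROI containment check (enter_ROI) and the notch-position check (enter_region), can be sketched directly from the box format of formula (9); function names follow the text, but the implementation details are a sketch:

```python
def enter_roi(box, roi):
    """ROI containment test: both box and roi are (y1, x1, y2, x2),
    with (x1, y1) the lower-left corner and (x2, y2) the upper-right."""
    y1, x1, y2, x2 = box
    ry1, rx1, ry2, rx2 = roi
    return rx1 <= x1 and x2 <= rx2 and ry1 <= y1 and y2 <= ry2

def enter_region(box, y_threshold):
    """In-pen test: the box counts as inside the pen only when it has
    *completely* passed the preset notch position y_threshold."""
    y1, _, y2, _ = box
    return min(y1, y2) > y_threshold
```

Both predicates take the raw recognition-frame coordinates, so they can be applied to every detection as it is produced.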
In this embodiment, the target live stock is tracked individually, and the main tracking method for each target live stock is as follows:
Acquiring an initial camera image, and acquiring the position of a target live stock in a notch identification area as an initial identification frame;
labeling the initial identification frames of the corresponding target live stock;
collecting the next camera image, taking all recognition frames whose distance to the initial recognition frame is within a preset range as candidate recognition frames, and extracting features from them to select the target, thereby matching the same target livestock across the two images.
In step S405 of this embodiment, features are extracted from all candidate recognition frames and matched against the initial recognition-frame template; the candidate with the highest matching degree is taken as the predicted target and inherits the existing label.
The motion track of the individual carrying the current label is then acquired. The motion feature parameters are expressed as:
(u, v, γ, h, u', v', γ', h') formula (13);
wherein (u, v) is the center coordinate of the target livestock's current recognition frame, γ is its aspect ratio and h is its height; u', v', γ' and h' are the corresponding rates of change (velocities) of these quantities in the image coordinate system.
In step S406 of the present embodiment, whether the movement direction is toward the pen (for example, the in-pen direction is upward in fig. 3; other layouts can be judged analogously) is determined from the motion information. The basic quantities are the successive positions:
(u, v)_t = [u_n,t, v_n,t] formula (14);
(u, v)_t+1 = [u_n,t+1, v_n,t+1] formula (15);
wherein (u, v)_t is the position coordinate of the n-th livestock at time t and (u, v)_t+1 is its position at time t+1; the direction result enter_barrier is true when the position at time t+1 has advanced beyond the position at time t along the in-pen direction.
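Under the convention used elsewhere in this document that the image coordinate increases along the in-pen direction, the direction test reduces to a comparison of the relevant coordinate at the two times; this one-liner is a sketch under that assumption:

```python
def moving_in_pen_direction(v_t: float, v_t1: float) -> bool:
    """True when the animal's coordinate along the in-pen axis grew
    between time t and time t+1, i.e. it is moving toward the pen."""
    return v_t1 > v_t
```

If the camera were mounted so that the in-pen direction corresponded to a decreasing coordinate, the comparison would simply be reversed.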
In step S406 of this embodiment, whether a livestock enters through the gate is judged from the position result enter_region and the direction result enter_barrier (movement direction); when both judgements hold, the count of animals entering through the gate is incremented.
The preset identification area, the preset notch position and the motion direction of the target live stock are judged as shown in fig. 3.
According to this embodiment, the number of livestock entering and leaving the pen can be counted on the basis of target detection and individual tracking: a camera is arranged at the pen gate to define the image acquisition area; an image data set is collected; the livestock position detection model is trained on it and invoked to identify the target livestock positions; the discrimination line for entering the pen (the preset notch position) is set; the target livestock inside the preset identification area are tracked individually; their movement directions are judged; and the number of animals entering through the gate is accumulated. The number of animals actually passing the gate can thus be checked reliably under complex conditions, improving working efficiency and reducing cost.
Example two
Referring to fig. 5, the present embodiment provides a live stock in-loop status determination system 500, including:
the sample image obtaining module 501 is configured to obtain a plurality of sample images including a sample live stock, and perform position labeling on the sample live stock in the sample images, so as to construct a data set;
the training module 502 is configured to input the data set into a preset basic model, and perform training to obtain a trained live stock position detection model;
An initial target position determining module 503, configured to obtain an initial image to be identified including a first moment of a target live stock, and input the initial image to a trained live stock position detection model to obtain an initial target position of the target live stock;
the loop status determining module 504 is configured to determine a first loop status of the target live stock at a first moment according to the preset notch position and the initial target position.
In this embodiment, the system for determining the status of the living animal in the ring is essentially provided with a plurality of modules for executing the method in the above embodiment, and specific functions and technical effects are only required to refer to the above embodiment, and are not repeated herein.
Referring to fig. 6, an embodiment of the present invention also provides an electronic device 600 comprising a processor 601, a memory 602, and a communication bus 603;
a communication bus 603 for connecting the processor 601 and the memory 602;
The processor 601 is configured to execute a computer program stored in the memory 602 to implement the method as described in one or more of the above embodiments.
The embodiment of the present invention also provides a computer-readable storage medium, on which a computer program is stored,
The computer program is for causing a computer to execute the method according to any one of the above embodiments.
The embodiment of the application also provides a non-volatile readable storage medium storing one or more modules (programs); when the one or more modules are applied to a device, the instructions they contain may cause the device to execute the steps included in embodiment one of the present application.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above embodiments are merely illustrative of the principles of the present invention and its effectiveness, and are not intended to limit the invention. Modifications and variations may be made to the above-described embodiments by those skilled in the art without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and variations made by those of ordinary skill in the art without departing from the spirit and technical concept of the invention are intended to be covered by the claims of the present disclosure.

Claims (7)

1. A method for determining the condition of a live animal in the field, the method comprising:
Acquiring a plurality of sample images comprising sample live stock, and carrying out position marking on the sample live stock in the sample images to construct a data set;
Inputting the data set into a preset basic model, and training to obtain a trained live animal position detection model;
Acquiring an initial image to be recognized including a first moment of a target live stock, and inputting the initial image to be recognized into the trained live stock position detection model to obtain an initial target position of the target live stock;
determining a first in-loop state of the target live stock at a first moment according to a preset gear position and the initial target position, wherein determining the first in-loop state of the target live stock at the first moment according to the preset gear position and the initial target position comprises:
acquiring a preset gear identification interval;
If the target live stock is located in the preset gear identification interval and the initial target position exceeds the preset gear position, the first in-loop state comprises in-loop;
If the target live stock is located outside the preset gear identification interval or the initial target position does not exceed the preset gear position, the first in-loop state comprises out-of-loop;
Acquiring a subsequent image to be identified including a second moment of the target live stock, and inputting the subsequent image to be identified into the trained live stock position detection model to obtain a subsequent target position of the target live stock, wherein the second moment is later than the first moment;
determining a second in-loop state of the target live stock at a second moment according to the subsequent target position and a preset notch position;
Determining the moving direction of the target live stock according to the initial target position and the subsequent target position;
Wherein the determining the moving direction of the target live stock according to the initial target position and the subsequent target position comprises:
If the initial target position and the subsequent target position exceed the preset notch position, respectively acquiring an initial distance and a subsequent distance, if the initial distance is larger than the subsequent distance, the movement direction comprises a circle-out direction, and if the initial distance is smaller than the subsequent distance, the movement direction comprises a circle-in direction, the initial distance comprises a distance between the initial target position and the preset notch position, and the subsequent distance comprises a distance between the subsequent target position and the preset notch position;
if the initial target position exceeds the preset notch position and the subsequent target position does not exceed the preset notch position, the movement direction comprises a loop-out direction;
if the initial target position does not exceed the preset notch position and the subsequent target position exceeds the preset notch position, the movement direction comprises a looping direction;
and if neither the initial target position nor the subsequent target position exceeds the preset notch position, respectively acquiring an initial coordinate and a subsequent coordinate, wherein if the initial coordinate is larger than the subsequent coordinate, the movement direction comprises a circle-out direction, and if the initial coordinate is smaller than or equal to the subsequent coordinate, the movement direction comprises a circle-in direction, the initial coordinate is the coordinate of the initial target position, the subsequent coordinate is the coordinate of the subsequent target position, and the coordinate value increases along the circle-in direction.
2. The live animal in-loop condition determination method of claim 1, further comprising:
and determining the in-out state of the target live stock according to the first in-loop state and the second in-loop state.
3. The method for determining the in-loop status of live stock according to claim 1, wherein if at least one of the initial image to be recognized and the subsequent image to be recognized includes at least two target live stock, the determining method for determining the same target live stock in the initial image to be recognized and the subsequent image to be recognized includes:
Carrying out identity identification information identification on each target live stock in the initial to-be-identified image;
Respectively acquiring an initial motion track of each target live stock in the initial image to be identified and a subsequent motion track of each target live stock in the subsequent image to be identified;
Determining the matching degree between the target live stock in the subsequent image to be identified and each target live stock in the initial image to be identified according to the initial motion track and the subsequent motion track;
the identity identification information identification of the target live stock in the initial image to be identified with the matching degree higher than a preset matching degree is endowed to the target live stock in the subsequent image to be identified;
And the identity identification information in the initial image to be identified and the subsequent image to be identified is the same to identify that the two target live animals are the same target live animal.
4. The live animal in-loop condition determination method of claim 2, wherein the method further comprises:
Acquiring the number of live animals in the history;
and determining the current number of live animals in the circle according to the in-and-out circle state and the historical number of live animals in the circle.
5. A live stock in-loop condition determination system, the system comprising:
The sample image acquisition module is used for acquiring a plurality of sample images comprising the sample live stock, and carrying out position labeling on the sample live stock in the sample images to construct a data set;
the training module is used for inputting the data set into a preset basic model and training to obtain a trained live animal position detection model;
The initial target position determining module is used for acquiring an initial image to be recognized at a first moment of a target live stock, inputting the initial image to be recognized into the trained live stock position detection model and obtaining an initial target position of the target live stock;
The in-loop state determining module is configured to determine a first in-loop state of the target live stock at a first moment according to a preset notch position and the initial target position, where determining the first in-loop state of the target live stock at the first moment according to the preset notch position and the initial target position includes: acquiring a preset gear identification interval; if the target live stock is located in the preset gear identification interval and the initial target position exceeds the preset gear position, the first in-loop state comprises in-loop; if the target live stock is located outside the preset gear identification interval or the initial target position does not exceed the preset gear position, the first in-loop state comprises out-of-loop; acquiring a subsequent image to be identified including a second moment of the target live stock, and inputting the subsequent image to be identified into the trained live stock position detection model to obtain a subsequent target position of the target live stock, wherein the second moment is later than the first moment; determining a second in-loop state of the target live stock at a second moment according to the subsequent target position and a preset notch position; determining the moving direction of the target live stock according to the initial target position and the subsequent target position; the determining the moving direction of the target live stock according to the initial target position and the subsequent target position comprises the following steps: if the initial target position and the subsequent target position exceed the preset notch position, respectively acquiring an initial distance and a subsequent distance, if the initial distance is larger than the subsequent distance, the movement direction comprises a circle-out direction, and if the initial distance is smaller than the subsequent distance, the movement direction comprises a circle-in direction, the initial distance comprises a distance between the initial target position and the preset notch position, and the subsequent distance comprises a distance between the subsequent target position and the preset notch position; if the initial target position exceeds the preset notch position and the subsequent target position does not exceed the preset notch position, the movement direction comprises a loop-out direction; if the initial target position does not exceed the preset notch position and the subsequent target position exceeds the preset notch position, the movement direction comprises a looping direction; and if neither the initial target position nor the subsequent target position exceeds the preset notch position, respectively acquiring an initial coordinate and a subsequent coordinate, wherein if the initial coordinate is larger than the subsequent coordinate, the movement direction comprises a circle-out direction, and if the initial coordinate is smaller than or equal to the subsequent coordinate, the movement direction comprises a circle-in direction, the initial coordinate is the coordinate of the initial target position, the subsequent coordinate is the coordinate of the subsequent target position, and the coordinate value increases along the circle-in direction.
6. An electronic device comprising a processor, a memory, and a communication bus;
The communication bus is used for connecting the processor and the memory;
The processor is configured to execute a computer program stored in the memory to implement the method of any one of claims 1-4.
7. A computer-readable storage medium having a computer program stored thereon,
the computer program, when executed, causing a computer to perform the method of any one of claims 1-4.
CN202111289090.7A 2021-11-02 2021-11-02 Method, system, equipment and medium for determining in-loop state of live stock Active CN114005011B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111289090.7A CN114005011B (en) 2021-11-02 2021-11-02 Method, system, equipment and medium for determining in-loop state of live stock


Publications (2)

Publication Number Publication Date
CN114005011A CN114005011A (en) 2022-02-01
CN114005011B (en) 2024-06-18

Family

ID=79926470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111289090.7A Active CN114005011B (en) 2021-11-02 2021-11-02 Method, system, equipment and medium for determining in-loop state of live stock

Country Status (1)

Country Link
CN (1) CN114005011B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118365847B (en) * 2023-09-13 2024-11-26 张宇琦 Virtual electronic ear tag adding model training method, ear tag adding method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69619965T2 (en) * 1996-04-30 2002-08-08 Plusmic Corp., Tokyo A moving image judging device
CN108021848B (en) * 2016-11-03 2021-06-01 浙江宇视科技有限公司 Passenger flow volume statistical method and device
CN112655019B (en) * 2018-06-25 2024-12-31 农场监测公司 Monitoring livestock in agricultural pens
CN109376584A (en) * 2018-09-04 2019-02-22 湖南大学 A system and method for livestock number statistics for animal husbandry

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Localizing fish in highly turbid underwater images; International Workshop on Advanced Imaging Technology (IWAIT) 2021; 2021; 117661H-1 to 117661H-6 *
Research on grading method of livestock meat products based on image processing; Cao Pengxiang; China Master's Theses Full-text Database, Agricultural Science and Technology; 2017-03-15; D015-12 *


Similar Documents

Publication Publication Date Title
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN110705405B (en) Target labeling method and device
US10769466B2 (en) Precision aware drone-based object mapping based on spatial pattern recognition
CN109801260B (en) Livestock number identification method and device, control device and readable storage medium
CN109376584A (en) A system and method for livestock number statistics for animal husbandry
CN109685075A (en) A kind of power equipment recognition methods based on image, apparatus and system
CN108447091A (en) Object localization method, device, electronic equipment and storage medium
CN113781526A (en) A livestock counting and identification system
CN111680681B (en) Image post-processing method and system for eliminating abnormal recognition target and counting method
CN115830078A (en) Live pig multi-target tracking and behavior recognition method, computer equipment and storage medium
CN112836683A (en) License plate recognition method, device, equipment and medium for portable camera equipment
CN110222664A (en) A kind of feeding monitoring system of intelligent pigsty based on the analysis of video activity
CN110991222A (en) Object state monitoring and sow oestrus monitoring method, device and system
CN114005011B (en) Method, system, equipment and medium for determining in-loop state of live stock
WO2021051268A1 (en) Machine vision-based tree type identification method and apparatus
CN112631333A (en) Target tracking method and device of unmanned aerial vehicle and image processing chip
CN111191557B (en) Mark identification positioning method, mark identification positioning device and intelligent equipment
Bastiaansen et al. Continuous real-time cow identification by reading ear tags from live-stream video
CN111079617A (en) Poultry identification method and device, readable storage medium and electronic equipment
CN115410153A (en) Door opening and closing state judging method and device, electronic terminal and storage medium
CN116311008A (en) Identification method, device, system and robot for abnormal chicken cage
WO2022169831A1 (en) Method and system for compiling performance metrics for racing competitors
CN114519400A (en) Segmentation result evaluation method and device, electronic equipment and readable storage medium
CN114140762A (en) Method for automatically identifying vehicle driving direction
JP7384506B1 (en) Insect individual identification device using learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 401329 No. 5-6, building 2, No. 66, Nongke Avenue, Baishiyi Town, Jiulongpo District, Chongqing

Applicant after: MCC CCID information technology (Chongqing) Co.,Ltd.

Applicant after: Chongqing saidI Yinong Data Technology Co.,Ltd.

Address before: 401329 No. 5-6, building 2, No. 66, Nongke Avenue, Baishiyi Town, Jiulongpo District, Chongqing

Applicant before: CISDI CHONGQING INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: Chongqing saidI Yinong Data Technology Co.,Ltd.

GR01 Patent grant