Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
The application scenario of the present application is described first below.
The method and device can be used to count passenger flow in a target area. For example, when the target area is a store, the present application may be used to count the passenger flow volume in the store.
In one implementation, a capture device may be installed at the door of each store and connected to a computing device. The capture device shoots an area in the store and sends the captured video to the computing device, which then counts the passenger flow volume in the store according to the video.
A camera may be configured in the capture device; for example, the capture device may be a bullet (box-type) camera. The capture device and the computing device may be connected through a wired network or a wireless network, which is not limited in this embodiment.
Referring to fig. 1, a flowchart of a passenger flow statistics method provided by an embodiment of the present application is shown; the method can be applied to a computing device and can include the following steps:
step 101, acquiring a video obtained by shooting a target area.
The acquisition device can shoot the target area and send the shot video to the computing device in real time, and the computing device can receive the video.
Step 102, if a target object is detected in the video, when the target object is located at the passenger flow statistical position in the target area, acquiring an imaging head frame area of the target object, wherein the imaging head frame area is an area of a bounding box of the head of the target object in the video.
The computing device can detect whether a target object exists in the video, and if the target object exists in the video, the computing device tracks the target object to obtain a motion track of the target object. Wherein the target object may be an adult or a child.
It should be noted that a passenger flow statistical position is preset in the target area, and when the target object does not cross the passenger flow statistical position, it is determined that the target object does not enter the target area, and the target object is not counted as the passenger flow of the target area; when the target object crosses the passenger flow statistical position, the target object is determined to enter the target area, and the target object needs to be counted as the passenger flow of the target area. The passenger flow statistical position may be a quadrilateral with a direction, or may be a line, and this embodiment is not limited.
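The crossing check described above can be sketched for the line-shaped statistical position. Below is a minimal sketch using a hypothetical side-of-line sign test over the tracked positions (the directed-quadrilateral case is analogous); the coordinates are placeholders, not values from the embodiment:

```python
def side_of_line(p, a, b):
    """Sign of the cross product: which side of the line a->b the point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed_line(track, a, b):
    """Return True if consecutive track points move from one side of the
    statistical line (through a and b) to the other."""
    for prev, curr in zip(track, track[1:]):
        s1, s2 = side_of_line(prev, a, b), side_of_line(curr, a, b)
        if s1 != 0 and s2 != 0 and (s1 > 0) != (s2 > 0):
            return True
    return False

# Hypothetical statistical line y = 5 between (0, 5) and (10, 5).
line_a, line_b = (0, 5), (10, 5)
print(crossed_line([(4, 2), (4, 4), (4, 7)], line_a, line_b))  # True
print(crossed_line([(4, 2), (4, 3), (4, 4)], line_a, line_b))  # False
```

A track that stays on one side of the line is not counted; only a sign change across consecutive positions registers as crossing the statistical position.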
If the target object is located at the passenger flow statistical position according to the movement track, the target object may cross the passenger flow statistical position and enter the target area, at this time, the computing device may obtain the area of the imaging head frame of the target object, so that after the target object crosses the passenger flow statistical position, the computing device may determine whether the target object is an adult or a child according to the area of the imaging head frame.
The area of the imaging head frame may be the area of a bounding box drawn for the head of the target object after the head is recognized by the computing device, and the bounding box may also be referred to as the imaging head frame.
Step 103, if the target object crosses the passenger flow statistical position, detect whether the target object is a child or an adult according to the area of the imaging head frame.
The computing device may continue to track the trajectory of the target object in the video, and after it is determined that the target object crosses the passenger flow statistical position, the computing device may detect whether the target object is a child or an adult according to the area of the imaging head frame, where a specific detection method is described in detail below and is not described herein again.
In this embodiment, the computing device can identify whether the target object is a child or an adult according to the area of the imaging head frame, and compared with the method of identifying the age of the target object by face recognition in the related art and then determining whether the target object is a child or an adult, this embodiment has the following advantages:
1. the computing device only needs to identify the head of the target object in the video, without identifying facial details, so the requirement on image quality, and hence on camera definition, is low; the method can be implemented with a camera of ordinary shooting precision rather than a high-precision camera, which reduces the cost of passenger flow statistics;
2. the computing device only needs to identify the head of the target object in the video, without identifying facial details, so the requirements on shooting conditions such as lighting and angle are low, which reduces the difficulty of video capture;
3. the algorithm for acquiring the area of the imaging head frame is simple and adds essentially no load on local computing resources; only a small amount of additional data needs to be collected, so resources can be saved.
Step 104, count the passenger flow volume of the target area according to the detection result.
In this embodiment, the computing device may obtain two detection results, where the first detection result indicates that the target object is an adult, and the second detection result indicates that the target object is a child. After the detection result is obtained, the computing device may count the passenger flow volume of the target area according to the detection result, and the specific statistical method is described in detail below and is not described herein again.
To sum up, according to the passenger flow statistics method provided in the embodiment of the present application, if a target object in a video is located at the passenger flow statistical position in a target area, the area of the imaging head frame of the target object may be obtained. Since the area of the imaging head frame is the area of the bounding box of the head of the target object in the video, once the target object crosses the passenger flow statistical position, whether the target object is a child or an adult can be detected according to that area, and the passenger flow volume of the target area can then be counted according to the detection result. Because detection relies only on the area of the imaging head frame, the requirement on image quality, and hence on camera definition, is low: passenger flow statistics can be realized with a camera of ordinary shooting precision, reducing cost. The requirements on shooting conditions such as lighting and angle are likewise low, reducing the difficulty of video capture. In addition, the algorithm for acquiring the area of the imaging head frame is simple and occupies few resources, so resources can be saved.
Referring to fig. 2, a flowchart of a passenger flow statistics method provided by another embodiment of the present application is shown; the method can be applied to a computing device and can include the following steps:
step 201, acquiring a video obtained by shooting a target area.
The acquisition device can shoot the target area and send the shot video to the computing device in real time, and the computing device can receive the video.
Step 202, if a target object is detected in the video, when the target object is located at the passenger flow statistical position in the target area, acquiring an imaging head frame area of the target object, wherein the imaging head frame area is an area of a bounding box of the head of the target object in the video.
In the embodiment, a passenger flow statistical position is preset in the target area, when the target object does not cross the passenger flow statistical position, it is determined that the target object does not enter the target area, and the target object is not counted as the passenger flow of the target area; when the target object crosses the passenger flow statistical position, the target object is determined to enter the target area, and the target object needs to be counted as the passenger flow of the target area. Wherein the target object may be an adult or a child. The passenger flow statistical position may be a quadrilateral with a direction, or may be a line, and this embodiment is not limited.
If the target object is located at the passenger flow statistical position according to the movement track, the target object may cross the passenger flow statistical position and enter the target area, at this time, the computing device may obtain the area of the imaging head frame of the target object, so that after the target object crosses the passenger flow statistical position, the computing device may determine whether the target object is an adult or a child according to the area of the imaging head frame. The area of the imaging head frame may be the area of a bounding box drawn for the head of the target object after the head is recognized by the computing device, and the bounding box may also be referred to as the imaging head frame.
In one possible implementation, when the target object is located at the passenger flow statistical position in the target area, acquiring the area of the imaging head frame of the target object may include the following sub-steps:
step 2021, detecting an imaging head frame of the target object in the video, and generating a moving track of the target object according to the imaging head frame.
The computing device can detect whether a target object exists in the video, and if the target object exists in the video, the computing device tracks the target object to obtain a motion track of the target object.
In one embodiment, the computing device may divide a video into a plurality of video frames, detect each video frame, when an imaging header frame of a target object is detected in a certain video frame, assign a tracking identifier (Trackid) to the imaging header frame, and store the position of the target object corresponding to the tracking identifier; and continuously detecting the imaging head frame of the target object in subsequent video frames, and storing the position of the target object in each video frame corresponding to the tracking identifier, so that the motion track of the imaging head frame of the target object can be obtained, and the motion track of the target object can also be obtained.
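The Trackid bookkeeping described above can be sketched as follows. This is a toy nearest-center matcher with a hypothetical distance threshold; production trackers typically use IoU matching or Kalman filtering:

```python
class HeadTracker:
    """Toy tracker: matches each detected head box to the nearest existing
    track center, otherwise starts a new track (Trackid -> position list)."""

    def __init__(self, max_dist=50.0):
        self.max_dist = max_dist    # hypothetical matching threshold, pixels
        self.tracks = {}            # Trackid -> list of (cx, cy) centers
        self.next_id = 0

    @staticmethod
    def center(box):                # box = (x, y, w, h)
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    def update(self, boxes):
        """Process the head boxes detected in one video frame."""
        for box in boxes:
            cx, cy = self.center(box)
            best_id, best_d = None, self.max_dist
            for tid, pts in self.tracks.items():
                px, py = pts[-1]
                d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:     # no close track: assign a new Trackid
                best_id = self.next_id
                self.next_id += 1
                self.tracks[best_id] = []
            self.tracks[best_id].append((cx, cy))

tracker = HeadTracker()
tracker.update([(100, 100, 40, 40)])   # frame 1: one head detected
tracker.update([(105, 110, 40, 40)])   # frame 2: same head, moved slightly
print(tracker.tracks)                  # one Trackid with a two-point trajectory
```

The stored list of centers per Trackid is exactly the motion track used later for the crossing check.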
Step 2022, when it is determined according to the moving track that the target object is located at the passenger flow statistical position in the target area, calculate the area of the imaging head frame.
When it is determined that the target object is located at the passenger flow statistical position, the computing device may calculate the area of the imaging head frame according to the proportion of the imaging head frame in the video frame and the area of the video frame, or the computing device may calculate the area of the imaging head frame according to the number of pixels included in the imaging head frame and the area of each pixel, or the computing device may also calculate the area of the imaging head frame through other algorithms, which is not limited in this embodiment.
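Both area calculations mentioned above reduce to simple arithmetic; a minimal sketch with hypothetical frame dimensions and pixel counts:

```python
def area_from_ratio(frame_w, frame_h, box_ratio):
    """Head-frame area from its proportion of the video frame's area."""
    return frame_w * frame_h * box_ratio

def area_from_pixels(pixel_count, pixel_area=1.0):
    """Head-frame area from the number of pixels it covers."""
    return pixel_count * pixel_area

# Hypothetical 1920x1080 frame where the head box covers 0.5% of the image.
print(round(area_from_ratio(1920, 1080, 0.005)))   # 10368
print(area_from_pixels(10368))                     # 10368.0
```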
Step 203, if the target object crosses the passenger flow statistical position, calculate target characteristic parameters of the target object according to the area of the imaging head frame, where the target characteristic parameters include a target head circumference parameter and a target height parameter of the target object.
Analysis shows that the area of the imaging head frame is influenced by at least two factors: head circumference and height. The relationship between each factor and the area of the imaging head frame is explained below.
1. Head circumference
When two objects with different head circumference parameters stand at the same position, that is, at the same distance from the capture device, the object with the larger head circumference images larger in the video and the object with the smaller head circumference images smaller. The imaging size is proportional to the side length of the imaging head frame, and the area of the imaging head frame is proportional to the square of the side length. From this relationship, if the head circumference parameter is denoted He, the area of the imaging head frame S, and the first coefficient α, then S = α × He².
2. Height
When the capture device shoots from a top-down view and two objects with different height parameters are imaged at the same position, the object with the larger height parameter is closer to the capture device and the object with the smaller height parameter is farther from it, as shown in fig. 3. The side length of the imaging head frame depends on the distance between the capture device and the object (the closer the object, the larger the image), and the area of the imaging head frame is proportional to the square of the side length. From this relationship, if the height parameter is denoted H, the area of the imaging head frame S, and the second coefficient β, then S = β × H².
When the area S of the imaging head frame of the target object is known, solving for the target head circumference parameter and the target height parameter requires the first coefficient α and the second coefficient β, both of which are fixed values.
Calculating the target characteristic parameters of the target object according to the area of the imaging head frame may include the following substeps:
step 2031, a first coefficient and a second coefficient are obtained.
In a possible implementation, a specific target object may be placed at the passenger flow statistical position in the target area and the area of its specific imaging head frame obtained; the head circumference parameter and height parameter of the specific target object are then measured, from which the first coefficient and the second coefficient can be calculated.
At this time, acquiring the first coefficient and the second coefficient may include: measuring a specific head circumference parameter and a specific height parameter of a specific object; when the specific object is located at the passenger flow statistical position in the target area, acquiring the area of the specific imaging head frame of the specific object; dividing the area of the specific imaging head frame by the square of the specific head circumference parameter to obtain the first coefficient; and dividing the area of the specific imaging head frame by the square of the specific height parameter to obtain the second coefficient.
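The calibration described in step 2031 amounts to two divisions; a minimal sketch with hypothetical reference measurements:

```python
def calibrate(specific_area, head_circumference, height):
    """Derive the first coefficient (alpha) and second coefficient (beta)
    from one reference subject measured at the statistical position:
    alpha = S / He^2, beta = S / H^2."""
    alpha = specific_area / head_circumference ** 2
    beta = specific_area / height ** 2
    return alpha, beta

# Hypothetical reference adult: head box area 3025 px^2, head circumference
# 55 cm, height 170 cm.
alpha, beta = calibrate(3025.0, 55.0, 170.0)
print(alpha)   # 1.0
print(beta)    # roughly 0.105
```

Since α and β are fixed values for a given camera installation, this calibration only needs to be performed once per capture device.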
Step 2032, take the square root of the quotient of the area of the imaging head frame divided by the first coefficient to obtain the target head circumference parameter.
The area of the imaging head frame is in direct proportion to the square of the target head circumference parameter, that is, S = α × He², which can be rearranged to He = √(S / α). Since the area S of the imaging head frame and the first coefficient α are known, the target head circumference parameter He can be calculated.
Step 2033, take the square root of the quotient of the area of the imaging head frame divided by the second coefficient to obtain the target height parameter.
The area of the imaging head frame is in direct proportion to the square of the target height parameter, that is, S = β × H², which can be rearranged to H = √(S / β). Since the area S of the imaging head frame and the second coefficient β are known, the target height parameter H can be calculated.
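Inverting the two relations S = α × He² and S = β × H² gives the target parameters; a minimal sketch with hypothetical coefficients and a hypothetical measured box area:

```python
import math

def target_parameters(area, alpha, beta):
    """Invert S = alpha * He^2 and S = beta * H^2:
    He = sqrt(S / alpha), H = sqrt(S / beta)."""
    he = math.sqrt(area / alpha)
    h = math.sqrt(area / beta)
    return he, h

# Hypothetical coefficients from calibration and a measured box area.
he, h = target_parameters(area=2500.0, alpha=1.0, beta=0.1)
print(round(he, 1))   # 50.0  (head circumference, in the calibration units)
print(round(h, 1))    # 158.1 (height, in the calibration units)
```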
Step 204, calculate the probability that the target object is a child according to the target characteristic parameters.
In this embodiment, the computing device may preset a child age threshold N and then calculate the sum of the probabilities that the target object is of each age less than or equal to N; this sum is the probability that the target object is a child. For example, if N is 14, the computing device may calculate the probability that the target object is aged 1, the probability that it is aged 2, and so on up to the probability that it is aged 14, and then add these 14 probabilities to obtain the probability that the target object is a child.
Wherein, calculating the probability that the target object is a child according to the target characteristic parameters may include the following substeps:
step 2041, acquiring a first characteristic parameter of an adult and N groups of second characteristic parameters of children from a standard table, wherein the first characteristic parameter comprises a head circumference average value and a height average value of the adult, the ith group of second characteristic parameters comprises a head circumference average value and a height average value of the ith child, and i is greater than or equal to 1 and less than or equal to a child age threshold N.
The standard table contains the national-standard average head circumference of minors in each age group and of adults, so the average head circumference of an adult and of a child at each age can be obtained by table lookup. The standard table likewise contains the national-standard average height for each age group, so the average height of an adult and of a child at each age can also be obtained by table lookup.
The average head circumference and average height of adults in the table form the first characteristic parameter, and the average head circumference and average height of children at each age form one group of second characteristic parameters, yielding N groups of second characteristic parameters.
Step 2042, calculate the total probability that the target object is aged N or younger according to the target characteristic parameters, the first characteristic parameter, the N groups of second characteristic parameters, and the Bayesian posterior probability formula.
If A denotes a specified age and B denotes the binary group of characteristic parameters (height parameter H, head circumference parameter He), then the total probability that the target object is aged N or younger is P(age ≤ N | B) = Σ (A = 1 to N) P(A | B), where each posterior term is given by the Bayesian formula P(A | B) = P(B | A) × P(A) / P(B).
Step 2043, determine the total probability as the probability that the target object is a child.
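The Bayesian computation of steps 2041 to 2043 can be sketched numerically. In the sketch below, P(B | A) is modelled as a Gaussian around each age group's average head circumference and height; all means, priors, and standard deviations are hypothetical placeholders, not the national-standard table values:

```python
import math

def gaussian(x, mean, std):
    """Gaussian probability density, used here as the likelihood P(B | A)."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def child_probability(target, groups, priors, std_he=3.0, std_h=8.0):
    """P(age <= N | B): Bayesian posterior summed over the child age groups.
    `groups` lists (mean head circumference, mean height) for ages 1..N
    followed by the adult entry; `priors` are P(A) for the same entries."""
    he, h = target
    likelihoods = [gaussian(he, m_he, std_he) * gaussian(h, m_h, std_h)
                   for m_he, m_h in groups]
    evidence = sum(l * p for l, p in zip(likelihoods, priors))   # P(B)
    n_child = len(groups) - 1                                    # adult entry is last
    child = sum(l * p for l, p in zip(likelihoods[:n_child], priors[:n_child]))
    return child / evidence

# Hypothetical table: two child groups plus one adult group, uniform priors.
groups = [(48.0, 100.0), (52.0, 130.0), (56.0, 170.0)]
priors = [1 / 3, 1 / 3, 1 / 3]
p = child_probability((49.0, 105.0), groups, priors)
print(p > 0.9)   # a small head circumference and height strongly suggest a child
```

The returned value is the total probability of step 2042, which step 2043 takes directly as the probability that the target object is a child.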
In step 205, if the probability is greater than or equal to the preset threshold, a detection result indicating that the target object is a child is generated, and step 207 is executed.
The computing device may set a preset threshold in advance and determine whether the target object is an adult or a child based on it. The preset threshold may be an empirical value or a value calculated by a formula, which is not limited in this embodiment.
In step 206, if the probability is smaller than the preset threshold, a detection result indicating that the target object is an adult is generated, and step 207 is executed.
Step 207, count the passenger flow volume of the target area according to the detection result.
In this embodiment, the computing device may obtain two detection results, where the first detection result indicates that the target object is an adult, and the second detection result indicates that the target object is a child.
In this embodiment, the computing device may preset a statistical policy, and perform passenger flow statistics according to the statistical policy and the detection result. Two possible statistical strategies are explained below.
When the statistical strategy is to count the passenger flow of adults and children and mark the passenger flow type, the counting the passenger flow of the target area according to the detection result may include: when the detection result indicates that the target object is an adult, updating the passenger flow of the target area, and recording the current passenger flow type as an adult; and when the detection result indicates that the target object is a child, updating the passenger flow of the target area, and recording the current passenger flow type as the child.
For example, when the target object entering the target area is an adult, the passenger flow volume is increased by 1, and the passenger flow type is marked as an adult; when the target object entering the target area is a child, the passenger flow is increased by 1, and the passenger flow type is marked as a child. Therefore, the computing device can count the passenger flow of adults and the passenger flow of children, and can enrich the statistical mode of the passenger flow.
When the statistical strategy is to count the passenger flow of adults, the counting the passenger flow of the target area according to the detection result may include: when the detection result indicates that the target object is an adult, updating the passenger flow volume of the target area; when the detection result indicates that the target object is a child, the passenger flow volume of the target area is not updated.
For example, when the target object entering the target area is an adult, the passenger flow volume is increased by 1; when the target object entering the target area is a child, the passenger flow volume is not updated. Therefore, the computing device can count the passenger flow of the adults and improve the accuracy of the passenger flow.
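Both statistical strategies reduce to a small counter update; a minimal sketch (the dict layout and flag name are hypothetical choices):

```python
def update_counts(counts, is_child, adults_only=False):
    """Apply one detection result to the passenger-flow counters.
    `counts` is a dict with 'total', 'adult', and 'child' keys."""
    if is_child:
        if not adults_only:        # strategy 1: count and label children
            counts["total"] += 1
            counts["child"] += 1
        # strategy 2 (adults_only=True): children are not counted at all
    else:
        counts["total"] += 1
        counts["adult"] += 1
    return counts

counts = {"total": 0, "adult": 0, "child": 0}
update_counts(counts, is_child=False)                    # an adult enters
update_counts(counts, is_child=True)                     # a child enters, counted
update_counts(counts, is_child=True, adults_only=True)   # child ignored
print(counts)   # {'total': 2, 'adult': 1, 'child': 1}
```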
In tests of the passenger flow statistics method provided by this embodiment, the testers found that it recognizes younger target objects well. When non-purchasing children are defined as those aged 14 and below (i.e., N is 14), the recognition accuracy is above 99%; the measured accuracy is shown in Table 1 below.
Table 1

Age            N≤9     N≤13    N≤16
Accuracy rate  100%    99%     92%
To sum up, according to the passenger flow statistics method provided in the embodiment of the present application, if a target object in a video is located at the passenger flow statistical position in a target area, the area of the imaging head frame of the target object may be obtained. Since the area of the imaging head frame is the area of the bounding box of the head of the target object in the video, once the target object crosses the passenger flow statistical position, whether the target object is a child or an adult can be detected according to that area, and the passenger flow volume of the target area can then be counted according to the detection result. Because detection relies only on the area of the imaging head frame, the requirement on image quality, and hence on camera definition, is low: passenger flow statistics can be realized with a camera of ordinary shooting precision, reducing cost. The requirements on shooting conditions such as lighting and angle are likewise low, reducing the difficulty of video capture. In addition, the algorithm for acquiring the area of the imaging head frame is simple and occupies few resources, so resources can be saved.
Referring to fig. 4, a block diagram of a passenger flow statistics apparatus provided by an embodiment of the present application is shown; the apparatus can be applied to a computing device and can include:
an obtaining module 410, configured to obtain a video obtained by shooting a target area;
the obtaining module 410 is further configured to, if a target object is detected in the video, obtain an area of an imaging header frame of the target object when the target object is located at a passenger flow statistical position in the target area, where the area of the imaging header frame is an area of a bounding box of a head of the target object in the video;
the detection module 420 is used for detecting whether the target object is a child or an adult according to the area of the imaging head frame if the target object crosses the passenger flow statistical position;
and the counting module 430 is used for counting the passenger flow of the target area according to the detection result.
In an alternative embodiment, the detecting module 420 is further configured to:
calculating target characteristic parameters of the target object according to the area of the imaging head frame, wherein the target characteristic parameters comprise target head circumference parameters and target height parameters of the target object;
calculating the probability that the target object is a child according to the target characteristic parameters;
if the probability is greater than or equal to a preset threshold value, generating a detection result for indicating that the target object is a child;
and if the probability is smaller than a preset threshold value, generating a detection result for indicating that the target object is an adult.
In an alternative embodiment, the detecting module 420 is further configured to:
acquiring a first characteristic parameter of adults and N groups of second characteristic parameters of children from a standard table, where the first characteristic parameter includes the average head circumference and average height of adults, the ith group of second characteristic parameters includes the average head circumference and average height of children aged i, and 1 ≤ i ≤ the child age threshold N;
calculating the total probability that the target object is aged N or younger according to the target characteristic parameters, the first characteristic parameter, the N groups of second characteristic parameters, and the Bayesian posterior probability formula;
the full probability is determined as the probability that the target object is a child.
In an alternative embodiment, the detecting module 420 is further configured to:
acquiring a first coefficient and a second coefficient;
taking the square root of the quotient of the area of the imaging head frame divided by the first coefficient to obtain the target head circumference parameter;
and taking the square root of the quotient of the area of the imaging head frame divided by the second coefficient to obtain the target height parameter.
In an alternative embodiment, the detecting module 420 is further configured to:
measuring a specific head circumference parameter and a specific height parameter of a specific subject;
when the specific object is located at the passenger flow statistical position in the target area, acquiring the specific imaging head frame area of the specific object;
dividing the area of the specific imaging head frame by the square of the specific head circumference parameter to obtain the first coefficient;
and dividing the area of the specific imaging head frame by the square of the specific height parameter to obtain the second coefficient.
In an optional embodiment, the obtaining module 410 is further configured to:
detecting an imaging head frame of a target object in a video, and generating a moving track of the target object according to the imaging head frame;
and when it is determined according to the moving track that the target object is located at the passenger flow statistical position in the target area, calculating the area of the imaging head frame.
In an alternative embodiment, the statistics module 430 is further configured to:
when the detection result indicates that the target object is an adult, updating the passenger flow of the target area, and recording the current passenger flow type as an adult; when the detection result indicates that the target object is a child, updating the passenger flow volume of the target area, and recording the current passenger flow type as the child; or,
when the detection result indicates that the target object is an adult, updating the passenger flow volume of the target area; when the detection result indicates that the target object is a child, the passenger flow volume of the target area is not updated.
To sum up, with the passenger flow statistics apparatus provided in the embodiment of the present application, if a target object in a video is located at the passenger flow statistical position in a target area, the area of the imaging head frame of the target object may be obtained. Since the area of the imaging head frame is the area of the bounding box of the head of the target object in the video, once the target object crosses the passenger flow statistical position, whether the target object is a child or an adult can be detected according to that area, and the passenger flow volume of the target area can then be counted according to the detection result. Because detection relies only on the area of the imaging head frame, the requirement on image quality, and hence on camera definition, is low: passenger flow statistics can be realized with a camera of ordinary shooting precision, reducing cost. The requirements on shooting conditions such as lighting and angle are likewise low, reducing the difficulty of video capture. In addition, the algorithm for acquiring the area of the imaging head frame is simple and occupies few resources, so resources can be saved.
One embodiment of the present application provides a computer-readable storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions that is loaded and executed by a processor to implement a passenger flow statistics method as described above.
One embodiment of the present application provides a computing device comprising a processor and a memory, the memory having stored therein at least one instruction, the instruction being loaded and executed by the processor to implement a passenger flow statistics method as described above.
It should be noted that: in the passenger flow statistics device provided in the above embodiment, only the division of the above functional modules is used for illustration when performing passenger flow statistics, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the passenger flow statistics device is divided into different functional modules to complete all or part of the above described functions. In addition, the passenger flow statistics device provided by the above embodiment and the passenger flow statistics method embodiment belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment and are not described herein again.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description should not be taken as limiting the embodiments of the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the embodiments of the present application should be included in the scope of the embodiments of the present application.