CN113191353A - Vehicle speed determination method, device, equipment and medium
- Publication number: CN113191353A
- Application number: CN202110405781.2A
- Authority: CN (China)
- Prior art keywords: detection area, vehicle, determining, area, target
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/22 — Matching criteria, e.g. proximity measures
- G06N20/00 — Machine learning
Abstract
The invention discloses a vehicle speed determination method, device, equipment and medium. In the method, a first detection area of a vehicle in a current frame image and a second candidate detection area of the vehicle in the adjacent next frame image are identified based on a pre-trained deep learning model; a first prediction area of the vehicle in the next frame image is predicted according to the first detection area in the current frame image; a target second detection area matched with the first prediction area is determined according to the similarity between the first prediction area and the second candidate detection area; and the speed of the vehicle is determined according to the first detection area, the target second detection area and the time difference value of the two adjacent frame images, so that the vehicle speed can be accurately determined even when a plurality of vehicles are included in the image.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a vehicle speed determination method, device, equipment and medium.
Background
With social progress and rapid economic development, the number of automobiles keeps increasing. While this brings convenience to society, it also greatly increases the probability of traffic accidents and the rate of road congestion, a main cause of which is drivers speeding up and slowing down arbitrarily. Accurately determining the speed of a vehicle is therefore an urgent need.
Speed measurement technologies in the prior art include radar speed measurement, laser speed measurement, ground induction coil speed measurement and the like. Radar speed measurement is based on the Doppler effect principle: radar waves are transmitted toward a vehicle, the radar waves reflected by the vehicle are received, and the vehicle speed is determined from the frequency of the transmitted radar waves and the frequency of the received reflected radar waves. However, radar speed measurement can only be applied in mobile, short-distance scenarios and cannot measure speed over long distances, for example for vehicles on a highway; radar equipment also needs to be installed, so the cost of determining the vehicle speed is high.
Laser speed measurement is based on the principle of laser ranging: the range finder emits laser pulses to the vehicle twice at a set time interval and receives the returned laser, determines the distance the vehicle has moved within that set time interval, and thereby determines the vehicle speed. However, laser speed measurement imposes strict requirements on the measurement deviation angle, so the success rate of determining the vehicle speed is low, and the range finder can only be used in a stationary state.
Ground induction coil speed measurement is based on the electromagnetic induction principle: when a vehicle passes over a coil area, the magnetic flux of the coil changes and a trigger signal indicates that a vehicle has passed the coil, so the vehicle speed can be determined by combining the time taken for the vehicle to pass between two coils with the distance between the two coils. However, ground induction coils need to be installed, the construction workload is large, the coils are easily damaged, and subsequent maintenance is difficult.
Owing to the problems of the traditional speed measurement methods and the rapid development of machine vision technology in recent years, machine vision has been widely applied in various industries. Computer vision detection has the advantages of a high degree of automation, high efficiency and high precision, and a vehicle speed measurement technology based on visual detection has therefore been proposed in the prior art.
In the existing visual detection technology, the position of a moving vehicle in an image is determined by a frame difference method, the vehicles between every two frames are matched, so that the positions of the same vehicle in different frames are detected, the actual distance of the vehicle is determined according to the pixel distance of the vehicle, and the speed of the vehicle is determined according to the actual distance and the time difference between the two frames.
Specifically, when the position of a moving vehicle in an image is determined by the frame difference method, the pixel values of corresponding points in adjacent frame images are subtracted to obtain a difference image, and the difference image is then binarized. Provided the ambient brightness changes little, a pixel point whose pixel difference value is smaller than a predetermined threshold value can be determined to be a background pixel point, and a pixel point whose pixel difference value is not smaller than the predetermined threshold value can be determined to be a pixel point of a moving vehicle in the image. Because this method can accurately determine the position of the vehicle in the image only when the ambient brightness changes little, in practice illumination and the picture background make the determined position of the vehicle inaccurate, and the accuracy of the resulting vehicle speed is low.
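For illustration only, a minimal sketch of this frame-difference step follows, assuming two consecutive grayscale frames as NumPy arrays; the threshold value used is an illustrative choice, not one prescribed by the prior art described above:

```python
# Minimal sketch of the frame-difference step described above.
# Assumptions: two consecutive grayscale frames as uint8 NumPy arrays;
# the threshold of 25 is illustrative, not a value from the text.
import cv2
import numpy as np

def moving_vehicle_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                        threshold: int = 25) -> np.ndarray:
    # Subtract pixel values of adjacent frame images to obtain a difference image.
    diff = cv2.absdiff(curr_frame, prev_frame)
    # Binarize the difference image: pixel differences below the threshold
    # are background points (0); the rest are moving-vehicle points (255).
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask
```

Because the binarization acts directly on raw pixel differences, any change in illumination pushes many background pixels past the threshold, which is exactly the weakness noted above.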
Moreover, the existing visual detection technology can only determine the vehicle speed when the image contains a single vehicle; it cannot determine vehicle speeds when the image contains a plurality of vehicles.
Therefore, how to improve the accuracy of the determined vehicle speed when the image includes a plurality of vehicles becomes an urgent technical problem to be solved.
Disclosure of Invention
The invention provides a vehicle speed determination method, device, equipment and medium, to solve the problem of how to improve the accuracy of the determined vehicle speed when a plurality of vehicles are included in an image.
The invention provides a vehicle speed determination method, which comprises the following steps:
identifying a first detection area of a vehicle in a current frame image and a second candidate detection area of the vehicle in a next frame image adjacent to the current frame image based on a deep learning model trained in advance;
predicting a first prediction area of the vehicle in a next frame image according to the first detection area in the current frame image; determining a target second detection area matched with the first prediction area according to the similarity between the first prediction area and the second candidate detection area;
and determining the speed of the vehicle according to the first detection area, the target second detection area and the time difference value of two adjacent frames of images.
Further, the determining the vehicle speed of the vehicle according to the first detection area, the target second detection area and the time difference value of two adjacent frames of images comprises:
determining the pixel distance between the first detection area and the target second detection area according to a first coordinate of a preset position of a vehicle in the first detection area and a second coordinate of the preset position of the vehicle in the target second detection area, wherein the area range of the first detection area is the same as that of the target second detection area;
determining the actual moving distance of the vehicle according to the ratio of the pixel width of the first detection area to the preset width and the pixel distance;
and determining the speed of the vehicle according to the actual distance and the time difference value of two adjacent frames of images.
Further, the predicting, according to the first detection region in the current frame image, a first prediction region of the vehicle in the next frame image includes:
and predicting the vehicle corresponding to the first detection area based on a standard Kalman filter of a constant speed model and a linear observation model, and determining the first prediction area of the first detection area in the next frame of image.
Further, the determining a target second detection region matching the first prediction region according to the similarity between the first prediction region and the second candidate detection region includes:
determining a Mahalanobis distance sum and a cosine distance sum of the first prediction region and the second candidate detection region according to each first pixel point in the first prediction region and each corresponding second pixel point in the second candidate detection region;
determining a weighted distance sum of the first prediction region and the second candidate detection region according to the Mahalanobis distance sum, the cosine distance sum and their corresponding preset weights;
and if the weighted distance sum is smaller than a preset threshold value, determining the second candidate detection area as the target second detection area.
Further, the training process of the deep learning model comprises the following steps:
aiming at any sample image in a sample set, obtaining the sample image and first label information corresponding to the sample image, wherein the first label information identifies a set range area containing a vehicle in the sample image;
inputting the sample image into an original deep learning model, and acquiring second label information of the output sample image;
and adjusting parameter values of all parameters of the original deep learning model according to the first label information and the second label information to obtain the deep learning model after training.
Accordingly, the present invention provides a vehicle speed determination device, the device comprising:
the recognition module is used for recognizing a first detection area of the vehicle in a current frame image and a second candidate detection area of the vehicle in a next frame image adjacent to the current frame image based on a deep learning model which is trained in advance;
the matching module is used for predicting a first prediction area of the vehicle in the next frame image according to the first detection area in the current frame image; determining a target second detection area matched with the first prediction area according to the similarity between the first prediction area and the second candidate detection area;
and the determining module is used for determining the speed of the vehicle according to the first detection area, the target second detection area and the time difference value of two adjacent frames of images.
Further, the determining module is specifically configured to determine a pixel distance between the first detection area and the target second detection area according to a first coordinate of a preset position of a vehicle in the first detection area and a second coordinate of the preset position of the vehicle in the target second detection area, where the area ranges of the first detection area and the target second detection area are the same; determining the actual moving distance of the vehicle according to the ratio of the pixel width of the first detection area to the preset width and the pixel distance; and determining the speed of the vehicle according to the actual distance and the time difference value of two adjacent frames of images.
Further, the matching module is specifically configured to predict a vehicle corresponding to the first detection region based on a standard kalman filter of a constant velocity model and a linear observation model, and determine a first prediction region of the first detection region in the next frame of image.
Further, the matching module is specifically configured to determine a Mahalanobis distance sum and a cosine distance sum of the first prediction region and the second candidate detection region according to each first pixel point in the first prediction region and each corresponding second pixel point in the second candidate detection region; determine a weighted distance sum of the first prediction region and the second candidate detection region according to the Mahalanobis distance sum, the cosine distance sum and their corresponding preset weights; and if the weighted distance sum is smaller than a preset threshold value, determine the second candidate detection area as the target second detection area.
Further, the apparatus further comprises:
the training module is specifically used for acquiring a sample image and first label information corresponding to the sample image aiming at any sample image in a sample set, wherein the first label information identifies a set range area containing a vehicle in the sample image; inputting the sample image into an original deep learning model, and acquiring second label information of the output sample image; and adjusting parameter values of all parameters of the original deep learning model according to the first label information and the second label information to obtain the deep learning model after training.
Accordingly, the present invention provides an electronic device comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory has stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of any of the above-described vehicle speed determination methods.
Accordingly, the present invention provides a computer readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of any one of the above-mentioned vehicle speed determination methods.
The invention provides a vehicle speed determination method, device, equipment and medium. In the method, a first detection area of a vehicle in a current frame image and a second candidate detection area of the vehicle in the adjacent next frame image are identified based on a pre-trained deep learning model; a first prediction area of the vehicle in the next frame image is predicted according to the first detection area in the current frame image; a target second detection area matched with the first prediction area is determined according to the similarity between the first prediction area and the second candidate detection area; and the speed of the vehicle is determined according to the first detection area, the target second detection area and the time difference value of the two adjacent frame images. Therefore, even when a plurality of vehicles are included in the image, the first detection area of each vehicle and its corresponding target second detection area can be determined, and the vehicle speed of the vehicle can be accurately determined.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic process diagram of a vehicle speed determination method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the similar triangles formed by the lines connecting the vehicle in the image and the actual vehicle to the image acquisition device according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the backbone network Darknet-53 of YOLOv3 according to an embodiment of the present invention;
Fig. 4 is an overall structural diagram of YOLOv3 according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the basic component DBL of YOLOv3 according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the basic component Res_unit of YOLOv3 according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the basic component Resblock_body of YOLOv3 according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a vehicle speed determination device according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to improve the accuracy of a determined vehicle speed when a plurality of vehicles are included in an image, embodiments of the present invention provide a vehicle speed determination method, apparatus, device, and medium.
Example 1:
Fig. 1 is a schematic process diagram of a vehicle speed determination method according to an embodiment of the present invention, where the process includes the following steps:
s101: and identifying a first detection area of the vehicle in the current frame image and a second candidate detection area of the vehicle in the next frame image adjacent to the current frame image based on the deep learning model trained in advance.
The vehicle speed determination method provided by the embodiment of the invention is applied to an electronic device. The electronic device may be an intelligent terminal device such as a PC, a tablet computer or a smartphone, or may be a server, which in turn may be a local server or a cloud server.
In an embodiment of the present invention, the electronic device determines the vehicle speed of the vehicle in the video frame image according to the video frame image in the video captured by the image capturing device, wherein the image capturing device is a device for capturing an image of the vehicle in motion, such as a monitoring camera, a video camera, and the like.
The current frame image in the video acquired by the image acquisition equipment may contain one vehicle or a plurality of vehicles; in order to determine the vehicle speed of the vehicle in the current frame image, it is necessary to identify the detection area of the vehicle in the current frame image.
In order to identify the detection area of the vehicle in the current frame image, the electronic device stores a trained deep learning model. The current frame image and the adjacent next frame image are input into the pre-trained deep learning model, which processes the two images and identifies the first detection area of the vehicle in the current frame image and the second candidate detection area of the vehicle in the next frame image.
The deep learning model may be a Convolutional Neural Network (CNN) model, a target detection network model such as YOLOv3, or a Deep Residual Network (DRN).
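As a non-authoritative sketch of step S101, the detection step can be written against a generic detector interface; the VehicleDetector wrapper, its detect method and the box format are assumed names for illustration, not an API prescribed by this embodiment:

```python
# Sketch of step S101, assuming a pre-trained deep learning detector
# (e.g. YOLOv3-style) exposing a hypothetical detect(image) method that
# returns vehicle boxes as (center_x, center_y, width, height) tuples.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, w, h), in pixels

class VehicleDetector:
    def __init__(self, model):
        self.model = model  # pre-trained deep learning model

    def detect(self, image) -> List[Box]:
        # Placeholder for model inference; returns one box per vehicle,
        # so images containing several vehicles yield several boxes.
        return self.model.detect(image)

def identify_regions(detector: VehicleDetector, curr_frame, next_frame):
    first_detection_areas = detector.detect(curr_frame)    # current frame
    second_candidate_areas = detector.detect(next_frame)   # adjacent next frame
    return first_detection_areas, second_candidate_areas
```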
S102: predicting a first prediction area of the vehicle in a next frame image according to the first detection area in the current frame image; and determining a target second detection area matched with the first prediction area according to the similarity between the first prediction area and the second candidate detection area.
In order to determine the areas of the same vehicle in the current frame image and the next frame image based on the identified first detection area and second candidate detection areas, in the embodiment of the invention, for any vehicle in the current frame image, the electronic device further determines, according to the identified first detection area of the vehicle in the current frame image, the target second detection area of that vehicle in the next frame image corresponding to the first detection area.
Since existing vehicles include types such as cars, trucks and buses, and vehicles of the same type are highly similar to each other, in order to determine the target second detection area of the vehicle corresponding to the first detection area in the next frame image, a first prediction area of that vehicle in the next frame image is first predicted according to the first detection area in the current frame image, and the target second detection area is then determined based on the first prediction area. The method for predicting the first prediction area of the vehicle in the next frame image according to the first detection area in the current frame image belongs to the prior art, and details are not repeated in the embodiment of the invention.
The similarity between the first prediction area and each second candidate detection area of the next frame image is then determined. The similarity may be determined, for example, by calculating the Mahalanobis distance between the first prediction region and the second candidate detection region; the smaller the Mahalanobis distance, the higher the similarity of the two regions.
The target second detection area matched with the first prediction area is determined according to this similarity. For example, the second candidate detection region whose similarity is the highest and larger than a preset threshold may be determined as the target second detection region.
S103: and determining the speed of the vehicle according to the first detection area, the target second detection area and the time difference value of two adjacent frames of images.
After the first detection area in the current frame image and the target second detection area in the next frame image are determined, the actual distance moved by the vehicle in the first detection area within the time of two adjacent frames is determined according to them, and the vehicle speed of the vehicle corresponding to the first detection area is determined according to the actual distance and the time difference value of the two adjacent frame images.
In the embodiment of the invention, the first detection area of the vehicle in the current frame image and the second candidate detection area of the vehicle in the adjacent next frame image are identified based on a pre-trained deep learning model. Because the deep learning model identifies the detection area containing the vehicle directly, illumination and picture background do not affect pixel-difference values and thus do not reduce the accuracy of the determined detection area. The first prediction area of the vehicle in the next frame image is predicted according to the first detection area in the current frame image; the target second detection area matched with the first prediction area is determined according to the similarity between the first prediction area and the second candidate detection area; and the vehicle speed is determined according to the first detection area, the target second detection area and the time difference value of the two adjacent frame images. Therefore, even when a plurality of vehicles are included in the image, the first detection area of each vehicle and its corresponding target second detection area can be determined, and the vehicle speed of the vehicle can be accurately determined.
Example 2:
In order to determine the vehicle speed of the vehicle, on the basis of the above embodiments, in an embodiment of the present invention, the determining the vehicle speed of the vehicle according to the first detection area, the target second detection area and the time difference value of two adjacent frames of images comprises:
determining the pixel distance between the first detection area and the target second detection area according to a first coordinate of a preset position of a vehicle in the first detection area and a second coordinate of the preset position of the vehicle in the target second detection area, wherein the area range of the first detection area is the same as that of the target second detection area;
determining the actual moving distance of the vehicle according to the ratio of the pixel width of the first detection area to the preset width and the pixel distance;
and determining the speed of the vehicle according to the actual distance and the time difference value of two adjacent frames of images.
In order to determine the vehicle speed of the vehicle, the actual distance moved by the vehicle corresponding to the first detection area within the time difference of the two adjacent frame images is first determined according to the first detection area and the target second detection area.
The distance between the first coordinate of the preset position of the vehicle in the first detection area and the second coordinate of the preset position of the vehicle in the target second detection area is determined and taken as the pixel distance between the first detection area and the target second detection area.
The preset position of the vehicle may be the center position of the first detection area or the position of any other point of the first detection area. Since the area range of the first detection area is the same as that of the target second detection area, the preset position corresponds to the coordinate point with the same row and column numbers in the first detection area and the target second detection area.
The ratio of the pixel width of the first detection area to a preset width is determined, where the preset width is the normal actual width of a vehicle, for example 3 meters or 3.5 meters. This ratio is the ratio of pixel width to actual width; based on the basic imaging principle of the image acquisition device, the lines connecting the vehicle in the image and the actual vehicle to the image acquisition device form similar triangles.
Fig. 2 is a schematic diagram of the similar triangles formed by the lines connecting the vehicle in the image and the actual vehicle to the image acquisition device. As shown in Fig. 2, line AB represents the actual vehicle and line CD represents the vehicle in the image; AB and CD form triangle OAB and triangle OCD with the focus O of the image acquisition device, and triangle OAB and triangle OCD are similar triangles.
Therefore, the ratio of the focal length of the image acquisition device to the distance from the vehicle to the image acquisition device is equal to the ratio of the pixel width to the actual width. Since the vehicle is far from the image acquisition device, the influence of the vehicle's movement within the lane on that distance can be ignored; the ratio of focal length to distance can therefore be regarded as a fixed value, and the ratio of pixel width to actual width can likewise be regarded as the ratio of pixel distance to actual distance.
The quotient of the pixel distance between the first detection area and the target second detection area divided by this ratio is the actual distance moved by the vehicle corresponding to the first detection area within the time difference of the two adjacent frame images. The quotient of the actual distance divided by the time difference of the two adjacent frame images is then the vehicle speed of the vehicle corresponding to the first detection area.
The process of determining the vehicle speed of a vehicle according to the present invention is described below with reference to a specific embodiment. Assume that the center point position of the first detection area of the vehicle in frame 1 is $(x_1, y_1)$, and the pixel height and pixel width of the first detection area are $h_1$ and $w_1$ respectively; the center point position of the target second detection area of the vehicle in frame 2 is $(x_2, y_2)$. The pixel distance $D_p$ moved by the vehicle is:

$$D_p = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$

According to the pixel width $w_1$ of the first detection area and the preset width of 3 m, the ratio $p$ of the pixel width to the preset width is

$$p = \frac{w_1}{3},$$

and the actual distance $D$ that the vehicle moves within the time difference of the two adjacent frame images is

$$D = \frac{D_p}{p}.$$

The time difference $t$ of two adjacent frame images is determined from the number $fps$ of video frames processed per second:

$$t = \frac{1}{fps}.$$

The number of video frames processed per second is typically $fps = 25$, so the time difference is $t = 1/25 = 0.04\ \mathrm{s}$. According to the vehicle speed formula

$$v = \frac{D}{t},$$

the speed of the vehicle is thus determined.
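By way of illustration, the worked example above can be condensed into a short numeric sketch (the coordinate and box-width values below are illustrative, not taken from the embodiment):

```python
# Numeric sketch of the worked example above; input values are illustrative.
import math

PRESET_WIDTH_M = 3.0  # preset (assumed normal) actual vehicle width, meters
FPS = 25              # video frames processed per second

def vehicle_speed(x1, y1, x2, y2, w1):
    d_p = math.hypot(x2 - x1, y2 - y1)  # pixel distance between center points
    p = w1 / PRESET_WIDTH_M             # pixels per meter for this vehicle
    d = d_p / p                         # actual distance moved, in meters
    t = 1.0 / FPS                       # time difference of adjacent frames
    return d / t                        # vehicle speed, meters per second

# A 120-pixel-wide box whose center moves about 40 pixels between frames:
print(vehicle_speed(100, 200, 130, 226.5, 120))  # ~25 m/s, i.e. ~90 km/h
```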
Example 3:
In order to predict the first prediction region of the vehicle in the next frame image, on the basis of the above embodiments, in an embodiment of the present invention, the predicting the first prediction region of the vehicle in the next frame image according to the first detection region in the current frame image includes:
and predicting the vehicle corresponding to the first detection area based on a standard Kalman filter of a constant speed model and a linear observation model, and determining the first prediction area of the first detection area in the next frame of image.
In order to predict the first prediction region of the vehicle in the next frame image, in the embodiment of the present invention, the vehicle corresponding to the first detection region is predicted using an existing standard Kalman filter based on a constant velocity model and a linear observation model, together with the center point coordinates (x, y), pixel width w and pixel height h of the first detection region, and the first prediction region of the first detection region in the next frame image adjacent to the current frame image is determined.
The pixel width and pixel height of the first prediction area are the same as those of the first detection area; only the center point coordinates of the first prediction area differ from those of the first detection area.
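A sketch of this prediction step under the constant velocity assumption follows. The eight-dimensional state layout is an illustrative choice, and the covariance propagation and measurement-update steps of the full standard Kalman filter are omitted:

```python
# Predict step of a constant-velocity Kalman filter, sketched for a state
# vector [x, y, w, h, vx, vy, vw, vh]: box center, box size and their
# velocities. Covariance propagation and the update step are omitted.
import numpy as np

def predict_first_prediction_region(state: np.ndarray, dt: float = 1.0) -> np.ndarray:
    F = np.eye(8)
    F[:4, 4:] = dt * np.eye(4)  # position/size advance by velocity * dt
    return F @ state

# With zero width/height velocity, the predicted region keeps the pixel
# width and height of the first detection area and shifts only the center
# point, matching the description above.
state = np.array([100.0, 200.0, 120.0, 80.0, 30.0, 26.5, 0.0, 0.0])
x, y, w, h = predict_first_prediction_region(state)[:4]
```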
Example 4:
In order to determine the target second detection region matching the first prediction region, on the basis of the foregoing embodiments, in an embodiment of the present invention, the determining the target second detection region matched with the first prediction region according to the similarity between the first prediction region and the second candidate detection region includes:
determining a Mahalanobis distance sum and a cosine distance sum of the first prediction region and the second candidate detection region according to each first pixel point in the first prediction region and each corresponding second pixel point in the second candidate detection region;
determining a weighted distance sum of the first prediction region and the second candidate detection region according to the Mahalanobis distance sum, the cosine distance sum and their corresponding preset weights;
and if the weighted distance sum is smaller than a preset threshold value, determining the second candidate detection area as the target second detection area.
When the existing target tracking algorithm (SORT) determines the target second detection area matched with the first prediction area, the intersection over union (IOU) between the first prediction area and a second candidate detection area is used as the metric for judging a match, so as to determine the areas of the same vehicle in the current frame image and the next frame image and thereby track the vehicle.
Because the target tracking algorithm (SORT) ignores the appearance information of the vehicle's detection area and has low accuracy when the vehicle is occluded, the embodiment of the invention adopts the multi-target tracking algorithm (Deep SORT) to determine the target second detection area matched with the first prediction area.
Specifically, in order to determine the degree of matching between the first prediction region and the second candidate detection region, the Mahalanobis distance and the cosine distance between each first pixel point in the first prediction region and the corresponding second pixel point in the second candidate detection region are determined.
For each first pixel point in the first prediction region, according to the row number and column number of the first pixel point within the first prediction region, the second pixel point with the same row and column numbers in the second candidate detection region can be determined; this is the second pixel point corresponding to that first pixel point.
The Mahalanobis distance sum of the first prediction region and the second candidate detection region is determined from the Mahalanobis distances between each first pixel point in the first prediction region and its corresponding second pixel point in the second candidate detection region, and the cosine distance sum is determined from the corresponding cosine distances.
The Mahalanobis distance sum is multiplied by its corresponding preset weight to obtain a first product, the cosine distance sum is multiplied by its corresponding preset weight to obtain a second product, and the sum of the first product and the second product is the weighted distance sum of the first prediction region and the second candidate detection region.
The weighted distance sum is an index for evaluating the similarity between the first prediction region and the second candidate detection region: the smaller the weighted distance sum, the more similar the first prediction region is to the second candidate detection region, which also indicates that the degree of coincidence between the two regions is higher, that is, the predicted first prediction region is more accurate.
In order to determine the target second detection region matching the first prediction region, a preset threshold for judging whether regions match is also stored in advance. If higher accuracy of region matching is desired, the preset threshold may be set smaller; if higher robustness of region matching is desired, the preset threshold may be set larger.
According to the weighted distance sum between the first prediction area and each second candidate detection area and the preset threshold, the second candidate detection area whose weighted distance sum is smaller than the preset threshold is determined as the target second detection area, as sketched below.
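The following sketch illustrates this matching rule. The equal preset weights, the threshold value and the inverse covariance matrix used for the Mahalanobis distance are illustrative assumptions, not values fixed by this embodiment:

```python
# Sketch of weighted Mahalanobis + cosine matching. Weights, threshold and
# the inverse covariance matrix are illustrative assumptions.
import numpy as np

W_MAHALANOBIS = 0.5    # preset weight for the Mahalanobis distance sum
W_COSINE = 0.5         # preset weight for the cosine distance sum
MATCH_THRESHOLD = 1.0  # preset threshold for declaring a match

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return 1.0 - float(np.dot(a, b) / denom)

def weighted_distance_sum(pred_pixels, cand_pixels, inv_cov) -> float:
    # pred_pixels / cand_pixels: (N, C) arrays of corresponding first and
    # second pixel points (same row and column positions in both regions).
    mahalanobis_sum, cosine_sum = 0.0, 0.0
    for p, c in zip(pred_pixels, cand_pixels):
        diff = p - c
        mahalanobis_sum += float(np.sqrt(diff @ inv_cov @ diff))
        cosine_sum += cosine_distance(p, c)
    return W_MAHALANOBIS * mahalanobis_sum + W_COSINE * cosine_sum

def match_target_region(pred_pixels, candidate_regions, inv_cov):
    # Among candidates whose weighted distance sum is below the preset
    # threshold, pick the smallest (most similar) one.
    best_idx, best = None, MATCH_THRESHOLD
    for i, cand_pixels in enumerate(candidate_regions):
        s = weighted_distance_sum(pred_pixels, cand_pixels, inv_cov)
        if s < best:
            best_idx, best = i, s
    return best_idx  # index of the target second detection area, or None
```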
Example 5:
In order to train the deep learning model, on the basis of the above embodiments, in an embodiment of the present invention, the training process of the deep learning model includes:
aiming at any sample image in a sample set, obtaining the sample image and first label information corresponding to the sample image, wherein the first label information identifies a set range area containing a vehicle in the sample image;
inputting the sample image into an original deep learning model, and acquiring second label information of the output sample image;
and adjusting parameter values of all parameters of the original deep learning model according to the first label information and the second label information to obtain the deep learning model after training.
In order to train the deep learning model, a sample set for training is stored in the present application. The sample images in the sample set contain images of vehicles, and the first label information of each sample image is manually labeled in advance; the first label information identifies the set range area containing a vehicle in the sample image.
In the present application, after any sample image in the sample set and its first label information are acquired, the sample image is input into the original deep learning model, and the original deep learning model outputs the second label information of the sample image. The second label information identifies the set range area containing a vehicle in the sample image as identified by the original deep learning model.
After the second label information of the sample image is determined according to the original deep learning model, the original deep learning model is trained according to the second label information and the first label information of the sample image, so that parameter values of all parameters of the original deep learning model are adjusted.
The above operation is carried out for each sample image in the sample set used to train the original deep learning model, and the trained deep learning model is obtained when a preset condition is met. The preset condition may be that the number of sample images for which the first label information and the second label information obtained after training are consistent is larger than a set number; or that the number of iterations of training the original deep learning model reaches a set maximum number of iterations, and the like. In particular, the present application is not limited thereto.
As a possible implementation manner, when the original deep learning model is trained, the sample images in the sample set may be divided into training sample images and test sample images, the original deep learning model is trained based on the training sample images, and then the reliability of the trained deep learning model is tested based on the test sample images.
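A minimal training-loop sketch under these assumptions follows; the PyTorch-style model interface and the detection_loss function are hypothetical stand-ins, not the implementation prescribed by the present application:

```python
# Minimal training-loop sketch; the model interface and detection_loss are
# hypothetical stand-ins, not the patent's prescribed implementation.
import torch
from torch.utils.data import DataLoader, random_split

def train_deep_learning_model(model, dataset, detection_loss,
                              epochs: int = 10, lr: float = 1e-4):
    # Split sample images into training and test sample images, as the
    # possible implementation above suggests.
    n_train = int(0.8 * len(dataset))
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    loader = DataLoader(train_set, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for sample_image, first_label in loader:
            second_label = model(sample_image)            # model's output labels
            loss = detection_loss(second_label, first_label)
            optimizer.zero_grad()
            loss.backward()      # adjust parameter values of the model
            optimizer.step()
    return model, test_set       # test_set is used to check reliability
```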
When the deep learning model is the target detection model YOLOv3, the backbone network of YOLOv3 is the deep learning framework Darknet-53, composed of 52 convolutional layers and a final fully connected layer. The YOLOv3 network structure adopts 1x1 and 3x3 convolution kernels, which greatly reduces the number of parameters and the amount of computation during model inference. YOLOv3 is an improvement on YOLOv2: 5 residual blocks (Residual) are added to the backbone network structure, i.e., the principle of the residual network ResNet is used to form an identity mapping, so that a deep network can achieve at least the same performance as a shallow network and gradient explosion caused by too many network layers is avoided.
The inference process of YOLOv3 adopts cross-scale prediction (Predictions Across Scales), drawing on the principle of Feature Pyramid Networks (FPN), and uses multiple scales to detect targets of different sizes, so that finer grids can detect smaller objects. YOLOv3 provides 3 bounding boxes of different sizes; that is, for each target to be predicted, 3 prediction boxes of different sizes are obtained, probability calculation is then performed on the prediction boxes, and the best matching result is screened out. The system uses this idea to extract features of different sizes to form a feature pyramid. The last convolutional layer of the YOLOv3 network predicts a three-dimensional tensor encoding the prediction boxes, objectness and classes. Under the COCO dataset the resulting tensor is N × N × [3 × (4 + 1 + 80)]: 4 bounding box offsets, 1 objectness prediction and 80 category predictions.
Fig. 3 is a schematic diagram of the backbone network Darknet-53 of YOLOv3 according to an embodiment of the present invention. As shown in Fig. 3, Convolutional denotes a convolutional layer, Residual denotes a residual block, Type denotes the network layer type, Filters denotes the number of convolution kernels in the convolutional layer, Size denotes the size, Output denotes the output, Avgpool denotes an average pooling layer, Connected denotes a fully connected layer, and Softmax is a numerical processing function.
Fig. 4 is an overall structural diagram of YOLOv3 according to an embodiment of the present invention. As shown in Fig. 4, the first row is the smallest-scale yolo layer: a 13 × 13, 1024-channel feature map is input; after a series of convolution operations the feature map size is unchanged but the number of channels is reduced to 75; a 13 × 13, 75-channel feature map is finally output, on which classification and position regression are then performed.
The second row is the medium-scale yolo layer: the 13 × 13, 512-channel feature map of layer 79 is convolved to generate a 13 × 13, 256-channel feature map, which is then upsampled to a 26 × 26, 256-channel feature map and merged with a medium-scale 26 × 26, 512-channel feature map. After a series of convolution operations the feature map size is unchanged but the number of channels is reduced to 75; a 26 × 26, 75-channel feature map is finally output, on which classification and position regression are then performed.
The third row is the large-scale yolo layer: the 26 × 26, 512-channel feature map of layer 91 is convolved to generate a 26 × 26, 128-channel feature map, which is then upsampled to a 52 × 52, 128-channel feature map and merged with the 52 × 52, 256-channel feature map of layer 36. After a series of convolution operations the feature map size is unchanged but the number of channels is reduced to 75; a 52 × 52, 75-channel feature map is finally output, on which classification and position regression are then performed.
Fig. 5 is a schematic diagram of the basic component DBL of YOLOv3 according to an embodiment of the present invention. As shown in Fig. 5, the DBL consists of a convolutional layer, batch normalization (BN) and Leaky ReLU; for YOLOv3, BN and Leaky ReLU are inseparable from the convolutional layer and together form the smallest component, DBL.
Fig. 6 is a schematic diagram of the basic component Res_unit of YOLOv3 according to an embodiment of the present invention. As shown in Fig. 6, the Res_unit consists of two basic components DBL and an add layer.
Fig. 7 is a schematic diagram of the basic component Resblock_body of YOLOv3 according to an embodiment of the present invention. As shown in Fig. 7, the Resblock_body consists of the basic component DBL, zero padding and Res_unit.
Example 6:
On the basis of the foregoing embodiments, Fig. 8 is a schematic structural diagram of a vehicle speed determination device according to an embodiment of the present invention, where the device includes:
the identification module 801 is configured to identify a first detection region of a vehicle in a current frame image and a second candidate detection region of the vehicle in a next frame image adjacent to the current frame image based on a deep learning model trained in advance;
a matching module 802, configured to predict, according to the first detection region in the current frame image, a first prediction region of the vehicle in the next frame image; determining a target second detection area matched with the first prediction area according to the similarity between the first prediction area and the second candidate detection area;
the determining module 803 is configured to determine the vehicle speed of the vehicle according to the first detection area, the target second detection area, and the time difference between two adjacent frames of images.
Further, the determining module is specifically configured to determine a pixel distance between the first detection area and the target second detection area according to a first coordinate of a preset position of a vehicle in the first detection area and a second coordinate of the preset position of the vehicle in the target second detection area, where the area ranges of the first detection area and the target second detection area are the same; determining the actual moving distance of the vehicle according to the ratio of the pixel width of the first detection area to the preset width and the pixel distance; and determining the speed of the vehicle according to the actual distance and the time difference value of two adjacent frames of images.
Further, the matching module is specifically configured to predict a vehicle corresponding to the first detection region based on a standard kalman filter of a constant velocity model and a linear observation model, and determine a first prediction region of the first detection region in the next frame of image.
Further, the matching module is specifically configured to determine a Mahalanobis distance sum and a cosine distance sum of the first prediction region and the second candidate detection region according to each first pixel point in the first prediction region and each corresponding second pixel point in the second candidate detection region; determine a weighted distance sum of the first prediction region and the second candidate detection region according to the Mahalanobis distance sum, the cosine distance sum and their corresponding preset weights; and if the weighted distance sum is smaller than a preset threshold value, determine the second candidate detection area as the target second detection area.
Further, the apparatus further comprises:
the training module is specifically used for acquiring a sample image and first label information corresponding to the sample image aiming at any sample image in a sample set, wherein the first label information identifies a set range area containing a vehicle in the sample image; inputting the sample image into an original deep learning model, and acquiring second label information of the output sample image; and adjusting parameter values of all parameters of the original deep learning model according to the first label information and the second label information to obtain the deep learning model after training.
Example 7:
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. On the basis of the foregoing embodiments, an embodiment of the present invention further provides an electronic device, which includes a processor 901, a communication interface 902, a memory 903 and a communication bus 904, where the processor 901, the communication interface 902 and the memory 903 communicate with each other through the communication bus 904;
the memory 903 has stored therein a computer program which, when executed by the processor 901, causes the processor 901 to perform the steps of:
identifying a first detection area of a vehicle in a current frame image and a second candidate detection area of the vehicle in a next frame image adjacent to the current frame image based on a deep learning model trained in advance;
predicting a first prediction area of the vehicle in a next frame image according to the first detection area in the current frame image; determining a target second detection area matched with the first prediction area according to the similarity between the first prediction area and the second candidate detection area;
and determining the speed of the vehicle according to the first detection area, the target second detection area and the time difference value of two adjacent frames of images.
Further, the processor 901 is specifically configured to determine the vehicle speed of the vehicle according to the first detection area, the target second detection area and a time difference between two adjacent frames of images, where the determining includes:
determining the pixel distance between the first detection area and the target second detection area according to a first coordinate of a preset position of a vehicle in the first detection area and a second coordinate of the preset position of the vehicle in the target second detection area, wherein the area range of the first detection area is the same as that of the target second detection area;
determining the actual moving distance of the vehicle according to the ratio of the pixel width of the first detection area to the preset width and the pixel distance;
and determining the speed of the vehicle according to the actual distance and the time difference value of two adjacent frames of images.
Further, the processor 901 is specifically configured to predict, according to the first detection region in the current frame image, a first prediction region of the vehicle in the next frame image, where the predicting includes:
and predicting the vehicle corresponding to the first detection area based on a standard Kalman filter of a constant speed model and a linear observation model, and determining the first prediction area of the first detection area in the next frame of image.
Further, the processor 901 is specifically configured such that determining the target second detection region matched with the first prediction region according to the similarity between the first prediction region and the second candidate detection region includes:
determining a Mahalanobis distance sum and a cosine distance sum of the first prediction region and the second candidate detection region according to each first pixel point in the first prediction region and each corresponding second pixel point in the second candidate detection region;
determining a weighted distance sum of the first prediction region and the second candidate detection region according to the Mahalanobis distance sum, the cosine distance sum and their corresponding preset weights;
and if the weighted distance sum is smaller than a preset threshold value, determining the second candidate detection area as the target second detection area.
Further, the processor 901 is further configured to perform the training process of the deep learning model, which includes:
aiming at any sample image in a sample set, obtaining the sample image and first label information corresponding to the sample image, wherein the first label information identifies a set range area containing a vehicle in the sample image;
inputting the sample image into an original deep learning model, and acquiring second label information of the output sample image;
and adjusting parameter values of all parameters of the original deep learning model according to the first label information and the second label information to obtain the deep learning model after training.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 902 is used for communication between the electronic apparatus and other apparatuses.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The Processor may be a general-purpose processor, including a central processing unit, a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an application-specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
Example 8:
On the basis of the foregoing embodiments, an embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, where the computer program is executed by a processor to perform the following steps:
identifying a first detection area of a vehicle in a current frame image and a second candidate detection area of the vehicle in a next frame image adjacent to the current frame image based on a deep learning model trained in advance;
predicting a first prediction area of the vehicle in a next frame image according to the first detection area in the current frame image; determining a target second detection area matched with the first prediction area according to the similarity between the first prediction area and the second candidate detection area;
and determining the speed of the vehicle according to the first detection area, the target second detection area and the time difference value of two adjacent frames of images.
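Putting these steps together, here is a hedged end-to-end sketch for one pair of adjacent frames; `detect_vehicles`, the tracker object, and the box format (center x, center y, pixel width, pixel height) are hypothetical interfaces assumed for illustration, not the patent's actual implementation.

```python
import math

def speed_for_frame_pair(detect_vehicles, tracker, frame_t, frame_t1,
                         dt_seconds, preset_width_m):
    # Step 1: first detection area in the current frame, candidates in the next.
    first_area = detect_vehicles(frame_t)[0]        # (cx, cy, w, h) in pixels
    candidates = detect_vehicles(frame_t1)
    # Step 2: predict where that vehicle should appear in the next frame.
    predicted = tracker.predict(first_area)
    # Step 3: the most similar candidate is the target second detection area.
    target = min(candidates, key=lambda c: tracker.distance(predicted, c))
    # Step 4: pixel displacement, scaled to metres, divided by the frame gap.
    px = math.dist(first_area[:2], target[:2])
    metres = px * (preset_width_m / first_area[2])
    return metres / dt_seconds
```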
Further, the determining the vehicle speed of the vehicle according to the first detection area, the target second detection area and the time difference value of two adjacent frames of images includes:
determining the pixel distance between the first detection area and the target second detection area according to a first coordinate of a preset position of the vehicle in the first detection area and a second coordinate of the same preset position of the vehicle in the target second detection area, wherein the area range of the first detection area is the same as that of the target second detection area;
determining the actual moving distance of the vehicle according to the pixel distance and the ratio between the pixel width of the first detection area and the preset actual width of the vehicle;
and determining the speed of the vehicle according to the actual moving distance and the time difference value of the two adjacent frames of images.
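As a concrete illustration, a minimal sketch of this scale-and-divide computation, assuming the preset position is the box center and the preset width is the known real-world vehicle width in metres (both assumptions, since the patent leaves them configurable):

```python
import math

def vehicle_speed(first_xy, target_xy, bbox_pixel_width, preset_width_m, dt_seconds):
    # Pixel displacement of the preset reference point between the two frames.
    px = math.dist(first_xy, target_xy)
    # Metres per pixel, derived from the known (preset) vehicle width.
    metres_per_pixel = preset_width_m / bbox_pixel_width
    return px * metres_per_pixel / dt_seconds      # speed in m/s

# Example: a 1.8 m wide vehicle spanning 90 px moves 50 px between frames 40 ms apart:
# vehicle_speed((100, 200), (148, 214), 90, 1.8, 0.04) -> 25.0 m/s (90 km/h)
```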
Further, the predicting, according to the first detection region in the current frame image, a first prediction region of the vehicle in the next frame image includes:
predicting, based on a standard Kalman filter with a constant velocity model and a linear observation model, the motion of the vehicle corresponding to the first detection area, and determining the first prediction area of the first detection area in the next frame image.
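A minimal sketch of such a filter, assuming an eight-dimensional state (box center, width, height, and their velocities) observed linearly through the four box parameters; the noise covariances below are placeholder assumptions, not values from the patent.

```python
import numpy as np

class ConstantVelocityKalman:
    # State: [cx, cy, w, h, vx, vy, vw, vh]; observation: [cx, cy, w, h].
    def __init__(self, dt=1.0):
        n = 4
        self.F = np.eye(2 * n)
        self.F[:n, n:] = dt * np.eye(n)       # constant velocity: x' = x + v * dt
        self.H = np.hstack([np.eye(n), np.zeros((n, n))])  # linear observation
        self.P = np.eye(2 * n)                # state covariance
        self.Q = 1e-2 * np.eye(2 * n)         # process noise (assumed)
        self.R = 1e-1 * np.eye(n)             # observation noise (assumed)
        self.x = np.zeros(2 * n)

    def predict(self):
        # Project the detection area one frame ahead: the first prediction area.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]

    def update(self, box_cxcywh):
        # Correct the state with the measured detection area.
        y = np.asarray(box_cxcywh, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
```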
Further, the determining a target second detection region matching the first prediction region according to the similarity between the first prediction region and the second candidate detection region includes:
determining a Mahalanobis distance sum and a cosine distance sum of the first prediction region and the second candidate detection region according to each first pixel point in the first prediction region and each corresponding second pixel point in the second candidate detection region;
determining a weighted distance sum of the first prediction region and the second candidate detection region according to the Mahalanobis distance sum, the cosine distance sum and their corresponding preset weights;
and if the weighted distance sum is smaller than a preset threshold, determining the second candidate detection region as the target second detection region.
Further, the training process of the deep learning model includes the following steps:
for any sample image in a sample set, obtaining the sample image and first label information corresponding to the sample image, wherein the first label information identifies a set range area containing a vehicle in the sample image;
inputting the sample image into an original deep learning model, and obtaining second label information output by the model for the sample image;
and adjusting parameter values of all parameters of the original deep learning model according to the first label information and the second label information to obtain the trained deep learning model.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A vehicle speed determination method, characterized by comprising:
identifying a first detection area of a vehicle in a current frame image and a second candidate detection area of the vehicle in a next frame image adjacent to the current frame image based on a deep learning model trained in advance;
predicting a first prediction area of the vehicle in a next frame image according to the first detection area in the current frame image; determining a target second detection area matched with the first prediction area according to the similarity between the first prediction area and the second candidate detection area;
and determining the speed of the vehicle according to the first detection area, the target second detection area and the time difference value of two adjacent frames of images.
2. The method of claim 1, wherein determining the vehicle speed of the vehicle based on the first detection area, the target second detection area, and a time difference between two adjacent frames of images comprises:
determining the pixel distance between the first detection area and the target second detection area according to a first coordinate of a preset position of a vehicle in the first detection area and a second coordinate of the preset position of the vehicle in the target second detection area, wherein the area range of the first detection area is the same as that of the target second detection area;
determining the actual moving distance of the vehicle according to the pixel distance and the ratio between the pixel width of the first detection area and the preset actual width of the vehicle;
and determining the speed of the vehicle according to the actual moving distance and the time difference value of the two adjacent frames of images.
3. The method according to claim 1, wherein the predicting the first prediction region of the vehicle in the next frame image according to the first detection region in the current frame image comprises:
predicting, based on a standard Kalman filter with a constant velocity model and a linear observation model, the motion of the vehicle corresponding to the first detection area, and determining the first prediction area of the first detection area in the next frame image.
4. The method of claim 1, wherein determining the target second detection region matching the first prediction region according to the similarity between the first prediction region and the second candidate detection region comprises:
determining a Mahalanobis distance sum and a cosine distance sum of the first prediction region and the second candidate detection region according to each first pixel point in the first prediction region and each corresponding second pixel point in the second candidate detection region;
determining a weighted distance sum of the first prediction region and the second candidate detection region according to the Mahalanobis distance sum, the cosine distance sum and their corresponding preset weights;
and if the weighted distance sum is smaller than a preset threshold, determining the second candidate detection region as the target second detection region.
5. The method of claim 1, wherein the training process of the deep learning model comprises:
for any sample image in a sample set, obtaining the sample image and first label information corresponding to the sample image, wherein the first label information identifies a set range area containing a vehicle in the sample image;
inputting the sample image into an original deep learning model, and obtaining second label information output by the model for the sample image;
and adjusting parameter values of all parameters of the original deep learning model according to the first label information and the second label information to obtain the trained deep learning model.
6. A vehicle speed determination device, characterized by comprising:
the recognition module is used for recognizing a first detection area of the vehicle in a current frame image and a second candidate detection area of the vehicle in a next frame image adjacent to the current frame image based on a deep learning model which is trained in advance;
the matching module is used for predicting a first prediction area of the vehicle in the next frame image according to the first detection area in the current frame image; determining a target second detection area matched with the first prediction area according to the similarity between the first prediction area and the second candidate detection area;
and the determining module is used for determining the speed of the vehicle according to the first detection area, the target second detection area and the time difference value of two adjacent frames of images.
7. The apparatus according to claim 6, wherein the determining module is specifically configured to determine the pixel distance between the first detection area and the target second detection area according to a first coordinate of a preset position of the vehicle in the first detection area and a second coordinate of the preset position of the vehicle in the target second detection area, wherein the area range of the first detection area is the same as the area range of the target second detection area; determine the actual moving distance of the vehicle according to the pixel distance and the ratio between the pixel width of the first detection area and the preset actual width of the vehicle; and determine the speed of the vehicle according to the actual moving distance and the time difference value of the two adjacent frames of images.
8. The apparatus according to claim 6, wherein the matching module is specifically configured to predict, based on a standard Kalman filter with a constant velocity model and a linear observation model, the motion of the vehicle corresponding to the first detection region, and determine a first prediction region of the first detection region in the next frame image.
9. The apparatus according to claim 6, wherein the matching module is further configured to determine a Mahalanobis distance sum and a cosine distance sum of the first prediction region and the second candidate detection region according to each first pixel point in the first prediction region and each corresponding second pixel point in the second candidate detection region; determine a weighted distance sum of the first prediction region and the second candidate detection region according to the Mahalanobis distance sum, the cosine distance sum and their corresponding preset weights; and if the weighted distance sum is smaller than a preset threshold, determine the second candidate detection region as the target second detection region.
10. The apparatus of claim 6, further comprising:
the training module is specifically configured to, for any sample image in a sample set, obtain the sample image and first label information corresponding to the sample image, wherein the first label information identifies a set range area containing a vehicle in the sample image; input the sample image into an original deep learning model, and obtain second label information output by the model for the sample image; and adjust parameter values of all parameters of the original deep learning model according to the first label information and the second label information to obtain the trained deep learning model.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110405781.2A CN113191353A (en) | 2021-04-15 | 2021-04-15 | Vehicle speed determination method, device, equipment and medium |
PCT/CN2021/088536 WO2022217630A1 (en) | 2021-04-15 | 2021-04-20 | Vehicle speed determination method and apparatus, device, and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110405781.2A CN113191353A (en) | 2021-04-15 | 2021-04-15 | Vehicle speed determination method, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113191353A true CN113191353A (en) | 2021-07-30 |
Family
ID=76977107
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110405781.2A Pending CN113191353A (en) | 2021-04-15 | 2021-04-15 | Vehicle speed determination method, device, equipment and medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113191353A (en) |
WO (1) | WO2022217630A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114005095A (en) * | 2021-10-29 | 2022-02-01 | 北京百度网讯科技有限公司 | Vehicle attribute identification method and device, electronic equipment and medium |
CN114627649A (en) * | 2022-04-13 | 2022-06-14 | 北京魔门塔科技有限公司 | Speed control model generation method, vehicle control method and device |
CN114782500A (en) * | 2022-04-22 | 2022-07-22 | 西安理工大学 | Kart race behavior analysis method based on multi-target tracking |
CN114898585A (en) * | 2022-04-20 | 2022-08-12 | 清华大学 | Vehicle trajectory prediction planning method and system based on intersection multi-view |
CN117031063A (en) * | 2023-07-26 | 2023-11-10 | 东风汽车股份有限公司 | Method, device, equipment and storage medium for measuring speed of vehicle |
CN118015850A (en) * | 2024-04-08 | 2024-05-10 | 云南省公路科学技术研究院 | Multi-target vehicle speed synchronous estimation method, system, terminal and medium |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116124499B (en) * | 2022-11-25 | 2024-04-09 | 上海方酋机器人有限公司 | Coal mining method, equipment and medium based on moving vehicle |
CN116758732A (en) * | 2023-05-18 | 2023-09-15 | 内蒙古工业大学 | Intersection vehicle detection and bus priority method in fog computing environment |
CN119152702A (en) * | 2024-06-25 | 2024-12-17 | 东北林业大学 | Vehicle speed measuring method based on image instantaneous motion optical flow |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108446622A (en) * | 2018-03-14 | 2018-08-24 | 海信集团有限公司 | Detecting and tracking method and device, the terminal of target object |
CN108961315A (en) * | 2018-08-01 | 2018-12-07 | 腾讯科技(深圳)有限公司 | Method for tracking target, device, computer equipment and storage medium |
CN110415277A (en) * | 2019-07-24 | 2019-11-05 | 中国科学院自动化研究所 | Multi-target tracking method, system and device based on optical flow and Kalman filter |
CN111127508A (en) * | 2018-10-31 | 2020-05-08 | 杭州海康威视数字技术股份有限公司 | Target tracking method and device based on video |
CN111523447A (en) * | 2020-04-22 | 2020-08-11 | 北京邮电大学 | Vehicle tracking method, device, electronic equipment and storage medium |
CN111738032A (en) * | 2019-03-24 | 2020-10-02 | 初速度(苏州)科技有限公司 | Vehicle driving information determination method and device and vehicle-mounted terminal |
CN111738033A (en) * | 2019-03-24 | 2020-10-02 | 初速度(苏州)科技有限公司 | Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5434927A (en) * | 1993-12-08 | 1995-07-18 | Minnesota Mining And Manufacturing Company | Method and apparatus for machine vision classification and tracking |
CN107766821B (en) * | 2017-10-23 | 2020-08-04 | 江苏鸿信系统集成有限公司 | Method and system for detecting and tracking full-time vehicle in video based on Kalman filtering and deep learning |
US20200194108A1 (en) * | 2018-12-13 | 2020-06-18 | Rutgers, The State University Of New Jersey | Object detection in medical image |
US12165397B2 (en) * | 2019-01-15 | 2024-12-10 | POSTECH Research and Business Development Foundation | Method and device for high-speed image recognition using 3D CNN |
2021
- 2021-04-15: CN CN202110405781.2A patent/CN113191353A/en (active, Pending)
- 2021-04-20: WO PCT/CN2021/088536 patent/WO2022217630A1/en (active, Application Filing)
Also Published As
Publication number | Publication date |
---|---|
WO2022217630A1 (en) | 2022-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113191353A (en) | Vehicle speed determination method, device, equipment and medium | |
Ma et al. | Automatic detection and counting system for pavement cracks based on PCGAN and YOLO-MF | |
CN109087510A (en) | traffic monitoring method and device | |
US9361702B2 (en) | Image detection method and device | |
CN113468967A (en) | Lane line detection method, device, equipment and medium based on attention mechanism | |
CN112613344B (en) | Vehicle track occupation detection method, device, computer equipment and readable storage medium | |
CN112149503A (en) | Target event detection method and device, electronic equipment and readable medium | |
CN112634368A (en) | Method and device for generating space and OR graph model of scene target and electronic equipment | |
CN115937659A (en) | Mask-RCNN-based multi-target detection method in indoor complex environment | |
CN115546705A (en) | Target identification method, terminal device and storage medium | |
CN113435350A (en) | Traffic marking detection method, device, equipment and medium | |
CN112329886A (en) | Double-license plate recognition method, model training method, device, equipment and storage medium | |
CN116721396A (en) | Lane line detection method, device and storage medium | |
CN114782915B (en) | Intelligent automobile end-to-end lane line detection system and equipment based on auxiliary supervision and knowledge distillation | |
CN113903180B (en) | Method and system for detecting vehicle overspeed on expressway | |
CN118429623B (en) | Urban facility anomaly identification method and device, electronic equipment and storage medium | |
CN113591543B (en) | Traffic sign recognition method, device, electronic equipment and computer storage medium | |
CN113674358B (en) | Calibration method and device of radar equipment, computing equipment and storage medium | |
Song et al. | An accurate vehicle counting approach based on block background modeling and updating | |
CN115965549A (en) | Laser point cloud completion method and related device | |
CN114638947A (en) | Data labeling method and device, electronic equipment and storage medium | |
CN115542271A (en) | Radar coordinate and video coordinate calibration method, equipment and related device | |
Athriyah et al. | Incremental Learning of Deep Neural Network for Robust Vehicle Classification | |
Kamil et al. | Vehicle Speed Estimation Using Consecutive Frame Approaches and Deep Image Homography for Image Rectification on Monocular Videos | |
CN113111732B (en) | High-speed service area dense pedestrian detection method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210730 |