
CN112712708A - Information detection method, device, equipment and storage medium - Google Patents

Information detection method, device, equipment and storage medium

Info

Publication number
CN112712708A
CN112712708A (application number CN202011589347.6A)
Authority
CN
China
Prior art keywords
video frame
vehicle
target
frame images
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011589347.6A
Other languages
Chinese (zh)
Inventor
王赛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd
Priority to CN202011589347.6A
Publication of CN112712708A
Legal status: Pending

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/123 Traffic control systems for road vehicles indicating the position of vehicles, e.g. scheduled vehicles; Managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application provides an information detection method, an information detection device, information detection equipment and a storage medium, wherein the method comprises the following steps: acquiring a target video, and tracking each vehicle in a plurality of first video frame images in the target video to obtain a plurality of first target video frame images; determining a plurality of second video frame images from the plurality of first video frame images, and segmenting the bus lane in the plurality of second video frame images to obtain a plurality of second target video frame images; and for each vehicle, comparing the position information of the vehicle in each first target video frame image with the contour of the bus lane in each second target video frame image, and if the number of second target video frame images in which the vehicle appears inside the contour is greater than a preset number of frames, determining that the vehicle is a target vehicle. The embodiment of the application can overcome the low efficiency, low precision and waste of human resources of prior-art detection methods.

Description

Information detection method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of images, in particular to an information detection method, an information detection device, information detection equipment and a storage medium.
Background
As the number of motor vehicles grows, motor vehicles that violate traffic rules, as well as the traffic jams and accidents they cause, also increase, bringing great inconvenience to people's daily travel. Encouraging people to take the bus can therefore save travelers a great deal of time and reduce motor vehicle exhaust emissions, so the bus is a relatively fast and environment-friendly means of transport. In reality, however, many motor vehicles other than buses occupy bus lanes for their own convenience, which affects the normal running of buses and wastes the time of a large number of bus passengers. How to quickly capture motor vehicles that illegally occupy the bus lane has therefore become important.
At present, a camera installed on a bus can capture video of motor vehicles illegally occupying the bus lane; the offending vehicles are then screened out of a large amount of video by manual inspection, after which violation processing is carried out on them.
However, screening the offending vehicles out of a large amount of video incurs a high labor cost, long periods of observation lead to missed captures caused by fatigue, the detection efficiency and precision are low, and human resources are wasted.
Disclosure of Invention
The embodiment of the application provides an information detection method, an information detection device, information detection equipment and a storage medium, and aims to solve the problems that the detection method in the prior art is low in efficiency and precision and wastes human resources.
In a first aspect, an embodiment of the present application provides an information detection method, including:
acquiring a target video, and tracking each vehicle in a plurality of first video frame images in the target video to obtain a plurality of first target video frame images, wherein the first target video frame images comprise position information of each vehicle in the first target video frame images;
determining a plurality of second video frame images from the plurality of first video frame images, and segmenting bus lanes in the plurality of second video frame images to obtain a plurality of second target video frame images, wherein the second target video frame images comprise outlines of the bus lanes in the second target video frame images;
and comparing the position information of the vehicle in each first target video frame image with the outline of the bus lane in each second target video frame image aiming at each vehicle, and if the frame number of the second target video frame image of the vehicle appearing in the outline is greater than a preset frame number, determining that the vehicle is the target vehicle which occupies the bus lane in violation.
In one possible design, prior to the tracking each vehicle in the plurality of first video frame images in the target video, the method further comprises:
acquiring each frame of image of a target video, wherein each frame of image is a video frame image;
acquiring a plurality of first video frame images from all the video frame images corresponding to a target video through frame skipping processing;
wherein said determining a plurality of second video frame images from said plurality of first video frame images comprises:
determining a plurality of second video frame images from the plurality of first video frame images by the frame skipping process.
In one possible design, the tracking each vehicle in the plurality of first video frame images in the target video to obtain a plurality of first target video frame images includes:
tracking each vehicle in the plurality of first video frame images by using a tracking model to obtain a circumscribed rectangular frame of each vehicle in each first video frame image;
for each vehicle, determining the position information of the vehicle in each first video frame image according to the coordinate information of any vertex in the circumscribed rectangular frame and the width and height of the circumscribed rectangular frame;
and generating the plurality of first target video frame images according to the position information of the vehicles in the first video frame images and the identification information matched with each vehicle.
In one possible design, the segmenting the bus lane in the second video frame images to obtain second target video frame images includes:
dividing each second video frame image in the plurality of second video frame images by using a division model to obtain boundary points of the bus lane;
fitting the boundary points of the bus lane corresponding to each second video frame image to obtain an external closed polygon of the bus lane, wherein the external closed polygon is used for representing the outline of the bus lane;
and generating a plurality of second target video frame images according to the contour of each bus lane.
In a possible design, the comparing the position information of the vehicle in each first target video frame image with the contour of the bus lane in each second target video frame image, and if the number of frames of the second target video frame image in which the vehicle appears in the contour is greater than a preset number of frames, determining that the vehicle is a target vehicle includes:
acquiring a plurality of third target video frame images corresponding to the plurality of second target video frame images from the plurality of first target video frame images;
determining the position of the central point of the vehicle in each third target video frame image according to the position information of the vehicle in the plurality of third target video frame images;
comparing the center point position of the vehicle in each third target video frame image with the corresponding contour in each second target video frame image, judging whether the center point position of the vehicle is in the contour, and if the frame number of the second target video frame image corresponding to the same vehicle in the contour is determined to be greater than the preset frame number according to the identification information of the vehicle, determining that the vehicle is the target vehicle.
In one possible design, after the determining that the vehicle is the target vehicle, the method further includes:
acquiring a first video frame image or a second video frame image containing the target vehicle according to a first preset number of third target video frame images or a first preset number of second target video frame images to which the target vehicle belongs, and acquiring a license plate number of the target vehicle with the highest identification confidence coefficient through image identification according to the first video frame image or the second video frame image containing the target vehicle;
amplifying the vehicle small image corresponding to the license plate number of the target vehicle with the highest recognition confidence coefficient to obtain a target image of the target vehicle, wherein the size of the target image is the same as that of the video frame image;
acquiring a second preset number of second video frame images containing the target vehicle from the plurality of second video frame images, and splicing the second preset number of second video frame images with the target image to obtain a target spliced image, wherein the second preset number of second video frame images include at least one second video frame image in which the target vehicle is within the contour of the bus lane;
and reporting the target stitching image and the license plate number of the target vehicle with the highest recognition confidence coefficient to a target terminal, so that the target terminal executes violation processing operation on the target vehicle.
In a possible design, the obtaining, by image recognition, a license plate number of a target vehicle with the highest recognition confidence according to the first video frame image or the second video frame image containing the target vehicle includes:
intercepting the target vehicle in the first video frame image or the second video frame image containing the target vehicle to obtain a plurality of vehicle small images;
intercepting license plates contained in the plurality of small vehicle images to obtain a plurality of small license plate images;
and identifying the license plate information in the plurality of license plate small images to obtain the license plate number with the highest identification confidence coefficient, wherein the license plate number with the highest identification confidence coefficient is the license plate number of the target vehicle.
In a second aspect, an embodiment of the present application provides an information detecting apparatus, including:
the tracking module is used for acquiring a target video and tracking each vehicle in a plurality of first video frame images in the target video to obtain a plurality of first target video frame images, wherein the first target video frame images comprise position information of each vehicle in the first target video frame images;
the segmentation module is used for determining a plurality of second video frame images from the plurality of first video frame images and segmenting bus lanes in the plurality of second video frame images to obtain a plurality of second target video frame images, wherein the second target video frame images comprise outlines of the bus lanes in the second target video frame images;
and the target vehicle determining module is used for comparing the position information of the vehicle in each first target video frame image with the outline of the bus lane in each second target video frame image aiming at each vehicle, and if the frame number of the second target video frame image of the vehicle appearing in the outline is greater than a preset frame number, determining that the vehicle is a target vehicle which is a vehicle illegally occupying the bus lane.
In a third aspect, an embodiment of the present application provides an information detection apparatus, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform the information detection method as described above in the first aspect and various possible designs of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the information detection method according to the first aspect and various possible designs of the first aspect are implemented.
The information detection method, apparatus, device and storage medium provided by the embodiment first acquire a target video and a plurality of first video frame images in the target video, track each vehicle appearing in these video frame images, and obtain the position information of each vehicle in the images, that is, generate a plurality of first target video frame images; then a plurality of second video frame images are obtained from the selected first video frame images, and the bus lane in the plurality of second video frame images is segmented to obtain the contour of the bus lane in the images, that is, a plurality of second target video frame images are generated; based on the position information and the contour, it is judged whether the number of frames in which a vehicle appears inside the contour is greater than a preset number of frames, and if so, the vehicle illegally occupies the bus lane, that is, it is the detected target vehicle. Because image identification, information detection and information comparison are performed by machine rather than manually, the detection precision and efficiency are improved and human resources are saved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is an application scenario diagram of an information detection method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an information detection method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an information detection method according to yet another embodiment of the present application;
fig. 4 is a schematic flowchart of an information detection method according to another embodiment of the present application;
fig. 5 is a schematic flowchart of an information detection method according to another embodiment of the present application;
fig. 6 is a schematic structural diagram of an information detection apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of an information detection apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, although some buses are equipped with cameras that can capture video of motor vehicles illegally occupying the bus lane, the offending vehicles still have to be screened out of a large amount of video manually. The labor cost is high, prolonged observation leads to missed captures due to fatigue, the detection efficiency and precision are low, and human resources are wasted.
In order to solve the problems, the technical idea of the application is to provide a method for judging whether a motor vehicle illegally occupies a bus lane in a mobile scene based on deep learning.
In practical application, referring to fig. 1, fig. 1 is an application scenario diagram of the information detection method provided in the present application, and an execution subject of the present application may be a terminal device, such as a server 10. The server 10 may obtain a target video from the capture device 20 (which may be a camera mounted on a bus) through the interface, where the target video is a video of the surroundings of a bus lane of the bus in a moving scene captured by the camera, for example, a video of a vehicle in front of the bus and a lane. Then, in order to reduce the tracking time of the vehicle, a plurality of first video frame images in the target video are taken, each vehicle appearing in each first video frame image is sequentially tracked, the position information of each vehicle in the image is determined, and then the first target video frame image is generated; then, in order to save resources, a plurality of second video frame images can be obtained from the selected plurality of first video frame images, bus lanes appearing in the obtained plurality of second video frame images are segmented to obtain outlines of the bus lanes in the images, and then a plurality of second target video frame images are generated; and judging the number of frames of the vehicle in the contour through the position information and the contour, if the number of frames is greater than the preset number of frames, indicating that the vehicle is a vehicle which violates the rule and occupies the bus lane, and further carrying out violation processing on the vehicle which violates the rule and occupies the bus lane. Therefore, the information detection method provided by the application executes the image identification, the information detection and the information comparison through machine learning, is different from manual detection, can improve the detection precision, simultaneously improves the detection efficiency, and saves human resources.
It should be noted that the server may not need to directly obtain the video to be detected from the acquisition device, and may also obtain the video to be detected (i.e., the target video) in other manners, such as obtaining the video from a database, which is not specifically limited herein, and fig. 1 is only an example.
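To make the overall flow concrete, the following is a minimal sketch of the pipeline described above, written in Python with OpenCV. The helper callables (track_vehicles, segment_bus_lane, count_frames_in_lane, report_violation), their data layouts and the frame-skip intervals are illustrative assumptions, not interfaces defined by this application.

```python
import cv2

def detect_bus_lane_violations(video_path, track_vehicles, segment_bus_lane,
                               count_frames_in_lane, report_violation,
                               track_step=2, seg_step=8, preset_frames=3):
    # Read every frame of the target video.
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    # First-level frame skipping: frames used for vehicle tracking.
    first_frames = {i: frames[i] for i in range(0, len(frames), track_step)}
    # Second-level frame skipping: a sparser subset used for bus-lane segmentation.
    second_frames = {i: frames[i] for i in range(0, len(frames), seg_step)}

    tracks = track_vehicles(first_frames)            # vehicle id -> {frame index: bounding box}
    contours = segment_bus_lane(second_frames)       # frame index -> bus-lane contour
    counts = count_frames_in_lane(tracks, contours)  # vehicle id -> frames inside the lane

    for vehicle_id, n in counts.items():
        if n > preset_frames:                        # appears inside the contour in more than the preset number of frames
            report_violation(vehicle_id)
```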
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic flow chart of an information detection method according to an embodiment of the present application. As shown in fig. 2, the method may include:
s201, obtaining a target video, and tracking each vehicle in a plurality of first video frame images in the target video to obtain a plurality of first target video frame images.
And the first target video frame image comprises the position information of each vehicle in the first target video frame image.
In this embodiment, the server may directly obtain the target video from the acquisition device, or the acquisition device uploads the target video to the database through data transmission, and the server searches the target video from the database. The server can read each frame image in the target video, then select a plurality of first video frame images from all the frame images to track each vehicle appearing in each first video frame image, mark the position information of each vehicle in the first video frame images in each first video frame image, and generate the first target video frame image. Therefore, the first target video frame image contains the position information of each appearing vehicle in the first target video frame image.
In a possible design, in order to reduce the time consumption for tracking the vehicle, frame skipping processing may be performed on all frame images corresponding to the target video, and then vehicle tracking may be performed on the frame-skipped video frame images. The method can be realized by the following steps:
step a1, acquiring each frame of image of the target video, wherein each frame of image is a video frame image.
Step a2, acquiring a plurality of first video frame images from all the video frame images corresponding to the target video through frame skipping processing.
In this embodiment, the server obtains each frame image of the target video by reading the target video. To reduce the time consumption of the tracking model, frame-skipping tracking is performed on the vehicles in the video frame images, and a plurality of first video frame images are obtained from all the video frame images corresponding to the target video; to ensure the tracking effect, the frame skipping interval should not be too large.
For example, one frame may be taken every two frames, i.e., the 0th, 2nd, 4th, 6th, 8th, ... frames.
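As a small illustration of this selection (the intervals 2 and 8 below are only examples, consistent with the frame numbers used later in this description), the frames later taken for segmentation are a subset of the frames taken for tracking:

```python
total_frames = 96                                        # assumed length of the target video
first_indices = list(range(0, total_frames, 2))          # 0, 2, 4, 6, 8, ... used for tracking
second_indices = list(range(0, total_frames, 8))         # 0, 8, 16, 24, ... used for segmentation
assert set(second_indices).issubset(set(first_indices))  # every segmented frame is also a tracked frame
```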
In a possible design, considering that the segmentation model does not require a large correlation between frames, and in order to reduce the time consumption for image segmentation, frame skipping processing may be performed on all frame images corresponding to a plurality of first video frame images obtained by frame skipping processing, and then bus lanes in the video frame images after the frame skipping processing are segmented, so that the present embodiment describes in detail how to determine a plurality of second video frame images on the basis of the above embodiments. The method can be realized by the following steps:
determining a plurality of second video frame images from the plurality of first video frame images by the frame skipping process.
In this embodiment, frame skipping is performed when a video frame is segmented, in order to achieve comparison between a video frame image subjected to tracking processing and a video frame image subjected to segmentation processing, and further find out vehicles which violate a bus lane, and meanwhile, since a frame skipping interval corresponding to the segmentation processing may be larger than a frame skipping interval corresponding to the tracking processing, a server may perform frame skipping processing on a plurality of first video frame images to obtain a plurality of second video frame images.
Illustratively, if a skip of 8 frames is used over all video frame images in the target video, i.e., only the 0th, 8th, 16th, ..., M1th frame images are segmented (here M1 may be a multiple of 8), then a skip of 4 may be used over the plurality of first video frame images, i.e., from the 0th, 2nd, 4th, 6th, 8th, ..., Mth frames, the 0th, 8th, 16th, ..., M1th frame images are taken as the plurality of second video frame images. It should be noted that the numbers of skipped frames are merely exemplary and are not limited in any way.
S202, determining a plurality of second video frame images from the plurality of first video frame images, and segmenting the bus lane in the plurality of second video frame images to obtain a plurality of second target video frame images.
And the second target video frame image comprises the outline of the bus lane in the second target video frame image.
In this embodiment, when the image is segmented, a large correlation between frames is not required, and meanwhile, in order to save detection time, the server may select a plurality of second video frame images from the plurality of first video frame images, and then segment each second video frame image, mark the contour of the bus lane in each second video frame image including the bus lane, and generate a second target video frame image. The second target video frame image therefore contains the contour of each bus lane occurring in the second target video frame image.
S203, aiming at each vehicle, comparing the position information of the vehicle in each first target video frame image with the outline of the bus lane in each second target video frame image, and if the frame number of the second target video frame image of the vehicle appearing in the outline is greater than a preset frame number, determining that the vehicle is the target vehicle, wherein the target vehicle is a vehicle which occupies the bus lane in a violation manner.
In this embodiment, for each tracked vehicle, the plurality of first target video frame images and the plurality of second target video frame images are compared, based on the position information and the contour corresponding to the bus lane, to determine whether the vehicle is in the bus lane, that is, whether the vehicle is in violation. The vehicle here may be, for example, a motor vehicle.
Specifically, for each tracked vehicle, the position information of the vehicle in each first target video frame image is compared in turn with the contour of the bus lane in the corresponding second target video frame image, and the number of second target video frame images in which the vehicle appears inside the contour is determined. This number of frames is then compared with a preset number of frames; if it is greater than the preset number, the vehicle illegally occupies the bus lane, i.e., it is the detected target vehicle. In this way, detection of vehicles illegally occupying the bus lane is achieved through machine learning; unlike manual detection, this improves detection precision and efficiency and saves human resources.
In the information detection method provided by the embodiment, a plurality of first video frame images in a target video are obtained by obtaining the target video, and each vehicle appearing in the plurality of video frame images is tracked to obtain position information of each vehicle in the images, that is, a plurality of first target video frame images are generated; and then acquiring a plurality of second video frame images from the plurality of taken first video frame images, dividing the bus lanes in the plurality of second video frame images to obtain the outlines of the bus lanes in the images, namely generating a plurality of second target video frame images, judging whether the number of frames of the vehicles appearing in the outlines is greater than a preset number of frames or not based on the position information and the outlines, if so, indicating that the vehicles illegally occupy the bus lanes, namely, the detected target vehicles, and performing image identification, information detection and information comparison through a machine to distinguish from manual detection, thereby improving the detection precision, improving the detection efficiency and saving human resources.
In a possible design, referring to fig. 3, fig. 3 is a schematic flow chart of an information detection method according to still another embodiment of the present application, and this embodiment describes in detail how to obtain a plurality of first target video frame images in S201 based on the above embodiment, for example, based on the embodiment described in fig. 2. The tracking each vehicle in a plurality of first video frame images in the target video to obtain a plurality of first target video frame images may include:
s301, tracking each vehicle in the plurality of first video frame images by using a tracking model to obtain a circumscribed rectangular frame of each vehicle in each first video frame image.
S302, determining the position information of each vehicle in each first video frame image according to the coordinate information of any vertex in the circumscribed rectangle frame and the width and height of the circumscribed rectangle frame.
S303, generating the plurality of first target video frame images according to the position information of the vehicles in the first video frame images and the identification information matched with each vehicle.
In this embodiment, all the motor vehicles appearing in the video frame can be tracked by using the tracking model, and the position information of all the motor vehicles in the figure, that is, the circumscribed rectangle of the motor vehicles, is obtained. Specifically, a vehicle (i.e., a motor vehicle) in a plurality of first video frame images taken in a target video is tracked by using a tracking model, and position information of all the motor vehicles in the video frames and corresponding identification information, such as an ID number, are obtained. The position information here may be a circumscribed rectangular frame of the motor vehicle, and the position and size of the vehicle in the figure can be determined based on coordinate information of any one vertex of the circumscribed rectangular frame, such as coordinate information of a point at the upper left corner of the rectangular frame and the width and height of the rectangular frame.
The ID number is used to indicate the corresponding information of a motor vehicle; for example, motor vehicles with the same ID number in the M frame images represent the same motor vehicle, so that the position of a given motor vehicle in the 0th, 2nd, 4th, 6th, 8th, ..., Mth frames can be obtained.
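A minimal sketch of how one tracking result could be represented, assuming the tracking model outputs an axis-aligned circumscribed rectangle as (x, y, width, height) together with an ID number; the class and field names are illustrative, not defined by this application.

```python
from dataclasses import dataclass

@dataclass
class TrackedVehicle:
    vehicle_id: int   # ID number: the same ID across frames denotes the same motor vehicle
    frame_index: int  # index of the first video frame image this rectangle belongs to
    x: int            # x coordinate of the top-left vertex of the circumscribed rectangle
    y: int            # y coordinate of the top-left vertex
    w: int            # width of the circumscribed rectangle
    h: int            # height of the circumscribed rectangle

    @property
    def center(self):
        # Center point used later when comparing against the bus-lane contour.
        return (self.x + self.w / 2.0, self.y + self.h / 2.0)
```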
In a possible design, referring to fig. 4, fig. 4 is a schematic flow chart of an information detection method according to another embodiment of the present application, and this embodiment, on the basis of the above embodiment, for example, on the basis of the embodiment described in fig. 3, details are described on segmenting the bus lane in the plurality of second video frame images in S202 to obtain a plurality of second target video frame images. The segmenting the bus lane in the plurality of second video frame images to obtain a plurality of second target video frame images may include:
s401, segmenting each second video frame image in the plurality of second video frame images by utilizing a segmentation model to obtain boundary points of the bus lane.
S402, fitting the boundary points of the bus lane corresponding to each second video frame image to obtain an external closed polygon of the bus lane, wherein the external closed polygon is used for representing the outline of the bus lane.
And S403, generating a plurality of second target video frame images according to the contour of each bus lane.
In this embodiment, the video frame images taken from the plurality of first video frame images are segmented by using the segmentation model to obtain the positions of the boundary points of the bus lane in each image. Specifically, according to the boundary points of the bus lane obtained by segmenting the 0th, 8th, 16th, ..., M1th frame images of the plurality of second video frame images, an opencv function is used (opencv is an Intel open-source computer vision library that implements many general algorithms in image processing and computer vision) to fit a closed polygon, namely the circumscribed closed polygon of the bus lane. This forms the contour of the bus lane, which is displayed in the image, so that a second target video frame image is generated.
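The description only states that an opencv function is used for the fitting; as one possible sketch under that assumption, the boundary points returned by the segmentation model could be fitted with cv2.convexHull:

```python
import numpy as np
import cv2

def fit_lane_contour(boundary_points):
    """boundary_points: list of (x, y) pixel coordinates of the bus-lane boundary."""
    pts = np.asarray(boundary_points, dtype=np.int32).reshape(-1, 1, 2)
    # Convex hull of the boundary points: a circumscribed closed polygon of the lane.
    hull = cv2.convexHull(pts)
    return hull  # shape (K, 1, 2); can be drawn with cv2.polylines or tested with cv2.pointPolygonTest
```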
In a possible design, referring to fig. 5, fig. 5 is a schematic flow chart of a detection method according to another embodiment of the present application, and this embodiment describes S203 in detail based on the above embodiment. Comparing the position information of the vehicle in each first target video frame image with the contour of the bus lane in each second target video frame image, and if the number of frames of the second target video frame image in which the vehicle appears in the contour is greater than a preset number of frames, determining that the vehicle is a target vehicle, which may include:
s501, acquiring a plurality of third target video frame images corresponding to the plurality of second target video frame images from the plurality of first target video frame images.
S502, determining the position of the central point of the vehicle in each third target video frame image according to the position information of the vehicle in the plurality of third target video frame images.
S503, comparing the center point position of the vehicle in each third target video frame image with the corresponding contour in each second target video frame image, judging whether the center point position of the vehicle is in the contour, and if the frame number of the second target video frame image corresponding to the same vehicle in the contour is larger than a preset frame number according to the identification information of the vehicle, determining that the vehicle is the target vehicle.
In this embodiment, in order to determine whether a vehicle in a first target video frame image is within a contour in a second target video frame image, and at the same time to save comparison time, the first target video frame images and the second target video frame images need to be compared frame by corresponding frame. First, the video frame image corresponding to each second target video frame image, that is, a third target video frame image, is obtained from the plurality of first target video frame images. Then, according to the position information of the vehicle in the third target video frame image and the position of the contour in the second target video frame image, it is determined whether the vehicle is within the contour of the corresponding video frame. According to the ID number of each vehicle, the number of second target video frame images in which the same vehicle (that is, a vehicle with the same ID number) appears within the contour is counted, and this number of frames is compared with a preset number to determine whether the vehicle illegally occupies the bus lane.
Specifically, the position information of the vehicle in the 0th, 8th, 16th, ..., M1th frame images is taken from the first target video frame images (or from the first video frame images). To ensure the tracking effect, the number of skipped frames during tracking is small, but when judging whether a vehicle illegally occupies the bus lane the skip is larger, so only part of the first target video frame images needs to be taken. The coordinate position of the center point of the vehicle in the image (namely the center point position) is obtained, and an opencv function is used to judge whether the center point of the vehicle is inside the circumscribed closed polygon fitted to the bus lane in the corresponding second target video frame image. If the same vehicle is judged to be in the bus lane in more than the preset number of frames (for example, three frames), it can be determined that the vehicle illegally occupies the bus lane, namely that it is the target vehicle.
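A minimal sketch of this comparison step (the count_frames_in_lane helper assumed in the earlier pipeline sketch), using cv2.pointPolygonTest to decide whether a vehicle's center point lies inside the fitted lane polygon; the data layouts are assumptions for illustration.

```python
import cv2
from collections import defaultdict

def count_frames_in_lane(tracks, contours):
    """tracks: {vehicle_id: {frame_index: (x, y, w, h)}}; contours: {frame_index: lane polygon}."""
    counts = defaultdict(int)
    for vehicle_id, boxes in tracks.items():
        for frame_index, contour in contours.items():
            box = boxes.get(frame_index)
            if box is None or contour is None:
                continue
            x, y, w, h = box
            center = (float(x + w / 2.0), float(y + h / 2.0))
            # >= 0 means the center point is inside or on the lane contour.
            if cv2.pointPolygonTest(contour, center, False) >= 0:
                counts[vehicle_id] += 1
    return counts
```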
In one possible design, after the vehicle is determined to be the target vehicle, in order to lock onto the target vehicle, the license plate information of the target vehicle may be identified in the first video frame images or second video frame images containing the target vehicle, and a violation processing operation may then be performed on the target vehicle according to the license plate information. Therefore, the present embodiment describes the information detection method in detail on the basis of the above embodiments, for example on the basis of the embodiment described in fig. 2. The information detection method may further include the following steps:
step c1, acquiring a first video frame image or a second video frame image containing the target vehicle according to a first preset number of third target video frame images or a first preset number of second target video frame images matched with the target vehicle, and acquiring the license plate number of the target vehicle with the highest identification confidence coefficient through image identification according to the first video frame image or the second video frame image containing the target vehicle.
In this embodiment, after the target vehicle is determined, a first preset number of third target video frame images or a first preset number of second target video frame images to which the target vehicle belongs may be determined according to the identification information, i.e., the ID number, of the target vehicle; the first video frame images or second video frame images containing the target vehicle are then found from the plurality of first video frame images or second video frame images according to those frames. Next, the license plate information of the target vehicle, such as the license plate number, is recognized from each first video frame image or second video frame image containing the target vehicle. Since the license plate may be unclear or its information may be missing during recognition, the license plate number with the highest recognition confidence is selected from all recognized license plate numbers as the license plate number of the target vehicle.
In one possible design, how to determine the license plate number of the target vehicle can be achieved through the following steps d1 to d 3:
Step d1, intercepting the target vehicle in the first video frame images or second video frame images containing the target vehicle to obtain a plurality of vehicle small images.
Step d2, intercepting the license plates contained in the plurality of vehicle small images to obtain a plurality of license plate small images.
Step d3, recognizing the license plate information in the plurality of license plate small images to obtain the license plate number of the target vehicle with the highest recognition confidence.
In this embodiment, if a certain motor vehicle is determined to be a target vehicle, the tracked vehicle is intercepted from the first video frame image or second video frame image to obtain a vehicle small image, and a license plate detection model is used to perform license plate detection on the intercepted vehicle small image to obtain the license plate position information, that is, the circumscribed rectangular frame of the license plate. The license plate is then intercepted according to the detection result, namely the circumscribed rectangular frame of the license plate, to obtain a license plate small image; the intercepted license plate small images are recognized in turn by a license plate recognition model, and the result with the highest recognition confidence is taken as the license plate of the offending vehicle, namely the license plate of the vehicle that illegally occupies the bus lane.
Specifically, the server detects the target vehicle in each first video frame image or second video frame image containing the target vehicle by using a first detection model for locating the vehicle, and intercepts the region containing the target vehicle to obtain a plurality of vehicle small images. The server then detects the position of the license plate in each vehicle small image by using a second detection model for locating the license plate, intercepts the license plates contained in the vehicle small images to obtain a plurality of license plate small images, recognizes the license plate numbers with an image recognition technique, ranks the recognized license plate numbers by confidence, and takes the license plate number with the highest recognition confidence as the license plate number of the target vehicle. The recognition confidence here can be understood as being determined by whether the license plate number is complete and how clear it is; for example, the license plate number with the highest recognition confidence is a complete license plate number with the highest clarity.
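A minimal sketch of selecting the license plate number with the highest recognition confidence; detect_vehicle, detect_plate and recognize_plate stand in for the first detection model, the license plate detection model and the license plate recognition model, and are hypothetical callables rather than names given in this application.

```python
def best_plate_number(frames_with_target, detect_vehicle, detect_plate, recognize_plate):
    best_number, best_conf = None, -1.0
    for frame in frames_with_target:
        x, y, w, h = detect_vehicle(frame)                 # circumscribed rectangle of the target vehicle
        vehicle_crop = frame[y:y + h, x:x + w]             # vehicle small image
        px, py, pw, ph = detect_plate(vehicle_crop)        # circumscribed rectangle of the license plate
        plate_crop = vehicle_crop[py:py + ph, px:px + pw]  # license plate small image
        number, conf = recognize_plate(plate_crop)         # recognized number and its confidence
        if conf > best_conf:
            best_number, best_conf = number, conf
    return best_number
```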
Step c2, amplifying the vehicle small image corresponding to the license plate number of the target vehicle with the highest recognition confidence coefficient to obtain a target image of the target vehicle, wherein the size of the target image is the same as that of the video frame image.
In this embodiment, three images containing the target vehicle may be extracted, in time order, from the frame-skipped images used for segmentation (that is, from the plurality of second video frame images; since the plurality of first video frame images include the plurality of second video frame images, the selected images may equally be taken from the first video frame images corresponding to the second video frame images), and it is ensured that in at least one of the three images the target vehicle is in the bus lane. Then, the vehicle small image corresponding to the license plate number with the highest recognition confidence is enlarged with an opencv function to the same size as the original video frame image (namely the first video frame image or second video frame image), so as to obtain the target image of the target vehicle, namely a close-up image of the target vehicle.
Step c3, obtaining a second preset number of second video frame images containing the target vehicle from the plurality of second video frame images, and splicing the second preset number of second video frame images with the target image to obtain a target spliced image, wherein the second preset number of second video frame images contain at least one second video frame image in which the target vehicle is in the contour of the bus lane.
Step c4, reporting the target spliced image and the license plate number of the target vehicle with the highest recognition confidence coefficient to a target terminal, so that the target terminal executes a violation handling operation on the target vehicle.
In this embodiment, an opencv function is used to stitch the three images and the vehicle close-up image obtained in step c2, so as to obtain a four-in-one image (i.e., the target spliced image).
Illustratively, the first of the three images is placed at the upper left corner of the target spliced image, the second at the upper right corner, the third at the lower left corner, and the vehicle close-up image at the lower right corner. Meanwhile, the recognized license plate number of the target vehicle is used as part of the name of the four-in-one image when it is stored, as the basis for executing the violation penalty operation. The target spliced image and the corresponding license plate number with the highest recognition confidence are reported to the target terminal, so that the target terminal executes the violation processing operation on the target vehicle.
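A minimal sketch of composing the four-in-one evidence image with OpenCV, following the layout described above; the function name, the file-naming scheme and the assumption that the three context frames share the same size are illustrative.

```python
import cv2
import numpy as np

def make_target_spliced_image(frame1, frame2, frame3, vehicle_small, plate_number):
    h, w = frame1.shape[:2]
    closeup = cv2.resize(vehicle_small, (w, h))  # enlarge the vehicle small image to frame size
    top = np.hstack([frame1, frame2])            # upper-left and upper-right
    bottom = np.hstack([frame3, closeup])        # lower-left and lower-right (close-up)
    stitched = np.vstack([top, bottom])
    # The recognized plate number is embedded in the file name, as described above.
    cv2.imwrite(f"violation_{plate_number}.jpg", stitched)
    return stitched
```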
Therefore, by using computer vision technology based on deep learning algorithms, the method of this embodiment for judging whether a motor vehicle illegally occupies the bus lane in a moving scene is stable, efficient and reusable, and at the same time saves a large amount of manpower.
Fig. 6 is a schematic structural diagram of an information detecting apparatus according to an embodiment of the present application, corresponding to the information detecting method according to the foregoing embodiment. For convenience of explanation, only portions related to the embodiments of the present application are shown. As shown in fig. 6, the information detection device 60 includes: a tracking module 601, a segmentation module 602, and a target vehicle determination module 603. The tracking module 601 is configured to acquire a target video, and track each vehicle in a plurality of first video frame images in the target video to obtain a plurality of first target video frame images, where each first target video frame image includes position information of each vehicle in the first target video frame image; a segmentation module 602, configured to determine multiple second video frame images from the multiple first video frame images, and segment a bus lane in the multiple second video frame images to obtain multiple second target video frame images, where the second target video frame images include a contour of the bus lane in the second target video frame images; the target vehicle determination module 603 is configured to compare, for each vehicle, the position information of the vehicle in each first target video frame image with the contour of the bus lane in each second target video frame image, and if the number of frames of the second target video frame image in which the vehicle appears in the contour is greater than a preset number of frames, determine that the vehicle is a target vehicle, where the target vehicle is a vehicle that occupies the bus lane in a violation manner.
In the information detection apparatus provided in this embodiment, by using the tracking module 601, the segmentation module 602, and the target vehicle determination module 603, a target video is first obtained, a plurality of first video frame images in the target video are obtained, and each vehicle appearing in the plurality of video frame images is tracked, so as to obtain position information of each vehicle in the image, that is, a plurality of first target video frame images are generated; and then acquiring a plurality of second video frame images from the plurality of taken first video frame images, dividing the bus lanes in the plurality of second video frame images to obtain the outlines of the bus lanes in the images, namely generating a plurality of second target video frame images, judging whether the number of frames of the vehicles appearing in the outlines is greater than a preset number of frames or not based on the position information and the outlines, if so, indicating that the vehicles illegally occupy the bus lanes, namely, the detected target vehicles, and performing image identification, information detection and information comparison through a machine to distinguish from manual detection, thereby improving the detection precision, improving the detection efficiency and saving human resources.
In one possible design, the information detection apparatus may further include: a frame skipping processing module; the frame skipping processing module is used for acquiring each frame image of the target video before tracking each vehicle in a plurality of first video frame images in the target video, wherein each frame image is a video frame image;
acquiring a plurality of first video frame images from all the video frame images corresponding to a target video through frame skipping processing; or,
determining a plurality of second video frame images from the plurality of first video frame images by the frame skipping process.
In one possible design, the tracking module 601 is specifically configured to:
tracking each vehicle in the plurality of first video frame images by using a tracking model to obtain a circumscribed rectangular frame of each vehicle in each first video frame image; for each vehicle, determining the position information of the vehicle in each first video frame image according to the coordinate information of any vertex in the circumscribed rectangular frame and the width and height of the circumscribed rectangular frame; and generating the plurality of first target video frame images according to the position information of the vehicles in the first video frame images and the identification information matched with each vehicle.
In one possible design, the segmentation module 602 is further specifically configured to:
dividing each second video frame image in the plurality of second video frame images by using a division model to obtain boundary points of the bus lane; fitting the boundary points of the bus lane corresponding to each second video frame image to obtain an external closed polygon of the bus lane, wherein the external closed polygon is used for representing the outline of the bus lane; and generating a plurality of second target video frame images according to the contour of each bus lane.
In one possible design, the target vehicle determination module 603 is specifically configured to:
acquiring a plurality of third target video frame images corresponding to the plurality of second target video frame images from the plurality of first target video frame images; determining the position of the central point of the vehicle in each third target video frame image according to the position information of the vehicle in the plurality of third target video frame images; comparing the center point position of the vehicle in each third target video frame image with the corresponding contour in each second target video frame image, judging whether the center point position of the vehicle is in the contour, and if the frame number of the second target video frame image corresponding to the same vehicle in the contour is determined to be greater than the preset frame number according to the identification information of the vehicle, determining that the vehicle is the target vehicle.
In one possible design, the information detection apparatus may further include: an information processing module; the information processing module is used for acquiring a first video frame image or a second video frame image containing the target vehicle according to a first preset number of third target video frame images or a first preset number of second target video frame images to which the target vehicle belongs after the vehicle is determined to be the target vehicle, and acquiring the license plate number of the target vehicle with the highest identification confidence coefficient through image identification according to the first video frame image or the second video frame image containing the target vehicle; amplifying the vehicle small image corresponding to the license plate number of the target vehicle with the highest recognition confidence coefficient to obtain a target image of the target vehicle, wherein the size of the target image is the same as that of the video frame image; acquiring a second preset number of second video frame images containing the target vehicle from the plurality of second video frame images, and splicing the second preset number of second video frame images with the target image to obtain a target spliced image, wherein the second preset number of second video frame images contain at least one target vehicle in the second video frame images in the outline of the bus lane; and reporting the target stitching image and the license plate number of the target vehicle with the highest recognition confidence coefficient to a target terminal, so that the target terminal executes violation processing operation on the target vehicle.
In one possible design, the information processing module is specifically configured to: intercepting the target vehicle in the first video frame image or the second video frame image containing the target vehicle to obtain a plurality of vehicle small images; intercepting license plates contained in the plurality of small vehicle images to obtain a plurality of small license plate images; and identifying the license plate information in the plurality of license plate small images to obtain the license plate number with the highest identification confidence coefficient, wherein the license plate number with the highest identification confidence coefficient is the license plate number of the target vehicle.
The apparatus provided in the embodiments of the present application may be used to implement the technical solutions of the foregoing method embodiments; the implementation principles and technical effects are similar and are not described herein again.
Fig. 7 is a schematic diagram of the hardware structure of an information detection apparatus according to an embodiment of the present application. As shown in Fig. 7, this embodiment provides an information detection apparatus 70, which includes: at least one processor 701 and a memory 702. The processor 701 and the memory 702 are connected by a bus 703.
In a specific implementation process, the at least one processor 701 executes the computer-executable instructions stored in the memory 702, so that the at least one processor 701 executes the method in the above-described method embodiment.
For the specific implementation process of the processor 701, reference may be made to the foregoing method embodiments; the implementation principles and technical effects are similar and are not described herein again.
In the embodiment shown in Fig. 7, it should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be performed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
The memory may include a high-speed RAM, and may also include a non-volatile memory (NVM), such as at least one magnetic disk memory.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus in the figures of the present application is shown as a single line, but this does not mean that there is only one bus or only one type of bus.
An embodiment of the present invention further provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the information detection method of the foregoing method embodiments is implemented.
The computer-readable storage medium may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor, such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an Application Specific Integrated Circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the apparatus.
Those of ordinary skill in the art will understand that all or a portion of the steps of the above-described method embodiments may be implemented by program instructions and related hardware. The program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the method embodiments described above. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An information detection method, comprising:
acquiring a target video, and tracking each vehicle in a plurality of first video frame images in the target video to obtain a plurality of first target video frame images, wherein the first target video frame images comprise position information of each vehicle in the first target video frame images;
determining a plurality of second video frame images from the plurality of first video frame images, and segmenting the bus lane in the plurality of second video frame images to obtain a plurality of second target video frame images, wherein the second target video frame images comprise contours of the bus lane in the second target video frame images;
and for each vehicle, comparing the position information of the vehicle in each first target video frame image with the contour of the bus lane in each second target video frame image, and if the number of second target video frame images in which the vehicle appears within the contour is greater than a preset number of frames, determining that the vehicle is a target vehicle, the target vehicle being a vehicle that illegally occupies the bus lane.
2. The method of claim 1, wherein before the tracking each vehicle in the plurality of first video frame images in the target video, the method further comprises:
acquiring each frame of image of the target video, wherein each frame of image is a video frame image;
acquiring the plurality of first video frame images from all the video frame images corresponding to the target video through frame skipping processing;
wherein the determining a plurality of second video frame images from the plurality of first video frame images comprises:
determining the plurality of second video frame images from the plurality of first video frame images through the frame skipping processing.
3. The method of claim 1, wherein the tracking each vehicle in a plurality of first video frame images in the target video to obtain a plurality of first target video frame images comprises:
tracking each vehicle in the plurality of first video frame images by using a tracking model to obtain a circumscribed rectangular frame of each vehicle in each first video frame image;
for each vehicle, determining the position information of the vehicle in each first video frame image according to the coordinate information of any vertex in the circumscribed rectangular frame and the width and height of the circumscribed rectangular frame;
and generating the plurality of first target video frame images according to the position information of the vehicles in the first video frame images and the identification information matched with each vehicle.
4. The method according to claim 3, wherein the segmenting the bus lane in the plurality of second video frame images to obtain a plurality of second target video frame images comprises:
segmenting each of the plurality of second video frame images by using a segmentation model to obtain boundary points of the bus lane;
fitting the boundary points of the bus lane corresponding to each second video frame image to obtain a circumscribed closed polygon of the bus lane, wherein the circumscribed closed polygon represents the contour of the bus lane;
and generating the plurality of second target video frame images according to the contour of each bus lane.
5. The method according to claim 4, wherein the comparing the position information of the vehicle in each of the first target video frame images with the contour of the bus lane in each of the second target video frame images, and if the number of frames of the second target video frame images in which the vehicle appears within the contour is greater than a preset number of frames, determining that the vehicle is a target vehicle comprises:
acquiring a plurality of third target video frame images corresponding to the plurality of second target video frame images from the plurality of first target video frame images;
determining the center point position of the vehicle in each third target video frame image according to the position information of the vehicle in the plurality of third target video frame images;
comparing the center point position of the vehicle in each third target video frame image with the corresponding contour in each second target video frame image to judge whether the center point position of the vehicle falls within the contour; and if, according to the identification information of the vehicle, the number of second target video frame images in which the same vehicle falls within the contour is determined to be greater than the preset number of frames, determining that the vehicle is the target vehicle.
6. The method of any of claims 2-5, wherein after the determining that the vehicle is a target vehicle, the method further comprises:
acquiring a first video frame image or a second video frame image containing the target vehicle according to a first preset number of third target video frame images or a first preset number of second target video frame images to which the target vehicle belongs, and acquiring, through image recognition, the license plate number of the target vehicle with the highest recognition confidence according to the first video frame image or the second video frame image containing the target vehicle;
enlarging the cropped vehicle image corresponding to the license plate number of the target vehicle with the highest recognition confidence to obtain a target image of the target vehicle, wherein the size of the target image is the same as that of a video frame image;
acquiring a second preset number of second video frame images containing the target vehicle from the plurality of second video frame images, and stitching the second preset number of second video frame images with the target image to obtain a target stitched image, wherein at least one of the second preset number of second video frame images contains the target vehicle within the contour of the bus lane;
and reporting the target stitched image and the license plate number of the target vehicle with the highest recognition confidence to a target terminal, so that the target terminal performs a violation handling operation on the target vehicle.
7. The method according to claim 6, wherein the acquiring, through image recognition, the license plate number of the target vehicle with the highest recognition confidence according to the first video frame image or the second video frame image containing the target vehicle comprises:
cropping the target vehicle from the first video frame image or the second video frame image containing the target vehicle to obtain a plurality of cropped vehicle images;
cropping the license plates contained in the plurality of cropped vehicle images to obtain a plurality of cropped license plate images;
and recognizing the license plate information in the plurality of cropped license plate images to obtain the license plate number with the highest recognition confidence, wherein the license plate number with the highest recognition confidence is the license plate number of the target vehicle.
8. An information detection apparatus, characterized by comprising:
the tracking module is used for acquiring a target video and tracking each vehicle in a plurality of first video frame images in the target video to obtain a plurality of first target video frame images, wherein the first target video frame images comprise position information of each vehicle in the first target video frame images;
the segmentation module is used for determining a plurality of second video frame images from the plurality of first video frame images and segmenting the bus lane in the plurality of second video frame images to obtain a plurality of second target video frame images, wherein the second target video frame images comprise contours of the bus lane in the second target video frame images;
and the target vehicle determination module is used for, for each vehicle, comparing the position information of the vehicle in each first target video frame image with the contour of the bus lane in each second target video frame image, and if the number of second target video frame images in which the vehicle appears within the contour is greater than a preset number of frames, determining that the vehicle is a target vehicle, the target vehicle being a vehicle that illegally occupies the bus lane.
9. An information detection apparatus, characterized by comprising: at least one processor and a memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the information detection method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement the information detection method according to any one of claims 1 to 7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011589347.6A CN112712708A (en) 2020-12-28 2020-12-28 Information detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112712708A true CN112712708A (en) 2021-04-27

Family

ID=75546118

Country Status (1)

Country Link
CN (1) CN112712708A (en)

Patent Citations (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101675442A (en) * 2007-05-25 2010-03-17 爱信艾达株式会社 Lane determination device, lane determination method, and navigation equipment using the same
CN101447019A (en) * 2007-11-29 2009-06-03 爱信艾达株式会社 Image recognition apparatuses, methods and programs
CN103268471A (en) * 2013-04-17 2013-08-28 深圳市锐明视讯技术有限公司 Vehicle illegal land occupying detection method and device
CN104331691A (en) * 2014-11-28 2015-02-04 深圳市捷顺科技实业股份有限公司 Vehicle logo classifier training method, vehicle logo recognition method and device
JP2016111647A (en) * 2014-12-10 2016-06-20 株式会社日本自動車部品総合研究所 Image processing apparatus and lane borderline recognition system
CN107705552A (en) * 2016-08-08 2018-02-16 杭州海康威视数字技术股份有限公司 A kind of Emergency Vehicle Lane takes behavioral value method, apparatus and system
US20180293684A1 (en) * 2016-09-07 2018-10-11 Southeast University Supervision and penalty method and system for expressway emergency lane occupancy
CN106652465A (en) * 2016-11-15 2017-05-10 成都通甲优博科技有限责任公司 Method and system for identifying abnormal driving behavior on road
CN107170239A (en) * 2017-06-30 2017-09-15 广东工业大学 A kind of target vehicle follows the trail of grasp shoot method and device
CN107292277A (en) * 2017-06-30 2017-10-24 深圳信路通智能技术有限公司 A kind of double parking stall parking trackings of trackside
CN110178167A (en) * 2018-06-27 2019-08-27 潍坊学院 Crossing video frequency identifying method violating the regulations based on video camera collaboration relay
CN110659539A (en) * 2018-06-28 2020-01-07 杭州海康威视数字技术股份有限公司 Information processing method and device
CN109615868A (en) * 2018-12-20 2019-04-12 北京以萨技术股份有限公司 A kind of video frequency vehicle based on deep learning is separated to stop detection method
CN109711407A (en) * 2018-12-28 2019-05-03 深圳市捷顺科技实业股份有限公司 A kind of method and relevant apparatus of Car license recognition
CN109948416A (en) * 2018-12-31 2019-06-28 上海眼控科技股份有限公司 A kind of illegal occupancy bus zone automatic auditing method based on deep learning
CN110012350A (en) * 2019-03-25 2019-07-12 联想(北京)有限公司 A kind of method for processing video frequency and device, equipment, storage medium
CN110264525A (en) * 2019-06-13 2019-09-20 惠州市德赛西威智能交通技术研究院有限公司 A kind of camera calibration method based on lane line and target vehicle
CN110298837A (en) * 2019-07-08 2019-10-01 上海天诚比集科技有限公司 Fire-fighting road occupying exception object detecting method based on frame differential method
CN110533925A (en) * 2019-09-04 2019-12-03 上海眼控科技股份有限公司 Processing method, device, computer equipment and the storage medium of vehicle illegal video
CN110675637A (en) * 2019-10-15 2020-01-10 上海眼控科技股份有限公司 Vehicle illegal video processing method and device, computer equipment and storage medium
CN110765952A (en) * 2019-10-24 2020-02-07 上海眼控科技股份有限公司 Vehicle illegal video processing method and device and computer equipment
CN110929589A (en) * 2019-10-31 2020-03-27 浙江大华技术股份有限公司 Method, device, computer device and storage medium for vehicle feature recognition
CN111161543A (en) * 2019-11-14 2020-05-15 南京行者易智能交通科技有限公司 Automatic snapshot method and system for bus front violation behavior based on image recognition
US10867190B1 (en) * 2019-11-27 2020-12-15 Aimotive Kft. Method and system for lane detection
CN111145555A (en) * 2019-12-09 2020-05-12 浙江大华技术股份有限公司 Method and device for detecting vehicle violation
CN111627215A (en) * 2020-05-21 2020-09-04 平安国际智慧城市科技股份有限公司 Video image identification method based on artificial intelligence and related equipment
CN111666853A (en) * 2020-05-28 2020-09-15 平安科技(深圳)有限公司 Real-time vehicle violation detection method, device, equipment and storage medium
CN111652112A (en) * 2020-05-29 2020-09-11 北京百度网讯科技有限公司 Lane flow direction identification method and device, electronic equipment and storage medium
CN111476245A (en) * 2020-05-29 2020-07-31 上海眼控科技股份有限公司 Vehicle left-turn violation detection method and device, computer equipment and storage medium
CN111383460A (en) * 2020-06-01 2020-07-07 浙江大华技术股份有限公司 Vehicle state discrimination method and device and computer storage medium
CN111862593A (en) * 2020-06-03 2020-10-30 北京百度网讯科技有限公司 Method and device for reporting traffic events, electronic equipment and storage medium
CN111914834A (en) * 2020-06-18 2020-11-10 绍兴埃瓦科技有限公司 Image recognition method and device, computer equipment and storage medium
CN111985356A (en) * 2020-07-31 2020-11-24 星际控股集团有限公司 Evidence generation method and device for traffic violation, electronic equipment and storage medium

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
LI JINGHONG: "Design and Implementation of Bus Lane Video Image Monitor System Based on FPGA", 2017 29th Chinese Control and Decision Conference (CCDC) *
吕正荣: "Research on Key Technologies of Video Processing in Intelligent Transportation Systems", China Masters' Theses Full-text Database, Engineering Science and Technology II *
张韬: "Modeling and Analysis of Video-based Traffic Flow Detection", Computer & Digital Engineering *
胡胜: "Lane Line Detection Algorithm Based on Secondary Threshold Segmentation and Lane Width Matching", Automobile Technology *
蓝章礼: "Research on Lane Image Sequence Stitching Based on SURF and Optimal Seam Line", Journal of Chongqing Jiaotong University (Natural Science) *
赵凯迪: "Design and Implementation of a UAV-based Vehicle and Lane Detection System", China Masters' Theses Full-text Database, Engineering Science and Technology II *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned
Effective date of abandoning: 20221111