CN114694060A - Road spill detection method, electronic equipment and storage medium - Google Patents
Road spill detection method, electronic equipment and storage medium
- Publication number
- CN114694060A CN114694060A CN202210230541.8A CN202210230541A CN114694060A CN 114694060 A CN114694060 A CN 114694060A CN 202210230541 A CN202210230541 A CN 202210230541A CN 114694060 A CN114694060 A CN 114694060A
- Authority
- CN
- China
- Prior art keywords
- road
- projectile
- area
- determining
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The method performs target recognition on road area images captured by a video image capture device in a road monitoring area: the captured images are analyzed to determine whether they contain a road spill, which yields a first road spill condition within the device's field of view. The motion attribute data of each vehicle in the road monitoring area is then analyzed to determine a second road spill condition outside the device's field of view. By combining target recognition on the captured images with analysis of the vehicles' motion attribute data, the scheme breaks through the detection limit of the video image capture device and enables road spill detection over the entire road section containing the monitoring area.
Description
Technical Field
The application relates to the technical field of vehicle-road cooperation, and in particular to a road spill detection method, electronic equipment and a storage medium.
Background
Road spills are one of the important causes of road traffic accidents. For example, a spill on the road tends to force vehicles to decelerate suddenly, which can cause accidents. Road spill events therefore affect driving safety and need to be detected in a timely manner.
At present, road spills are usually detected either by manual inspection or by a video-based detection algorithm. Manual inspection consumes substantial manpower and material resources and cannot monitor spill events in real time, while video-based detection suffers from drawbacks such as low detection accuracy and short detection range.
In summary, there is a need for a road spill detection method that covers entire road sections.
Disclosure of Invention
The application provides a road spill detection method, an electronic device and a storage medium for detecting road spills over entire road sections.
In a first aspect, an exemplary embodiment of the present application provides a road spill detection method, comprising:
performing target recognition on a road area image captured by a video image capture device in a road monitoring area, and determining a first road spill condition within the field of view of the video image capture device;
determining a second road spill condition outside the field of view of the video image capture device based on motion attribute data of each vehicle within the road monitoring area; the first and second road spill conditions together indicate the road spill condition within the road monitoring area; the motion attribute data of each vehicle is reported by the vehicle itself or collected by a radar device arranged in the road monitoring area.
This scheme fuses the video image capture device with the vehicle motion attribute data reported by each vehicle or collected by a radar device in the road monitoring area. Sensing devices on a vehicle (various on-board sensors and the like) can collect and report the vehicle's motion attribute data in real time without being affected by environmental factors such as weather and illumination; likewise, a radar device in the road monitoring area locates targets (such as vehicles) by emitting electromagnetic waves, is unaffected by weather and illumination, and can track and monitor targets at long range. This breaks through the limited detection range of the video image capture device and enables spill detection over the entire road section. Specifically, for any road monitoring area, target recognition is performed on the road area image captured by the video image capture device in that area; that is, the captured image is detected and analyzed to judge whether it contains a road spill, which yields the first road spill condition within the device's field of view.
However, because the detection range of the video image capture device is limited, the acquired motion attribute data of each vehicle in the road monitoring area is analyzed to determine whether a road spill exists in the area outside the device's detection range (that is, outside its field of view), which yields the second road spill condition. By combining target recognition on the captured images with analysis of the vehicles' motion attribute data, the scheme breaks through the detection limit of the video image capture device, realizes road spill detection over the entire road section containing the monitoring area, and provides effective support for ensuring driving safety.
In some exemplary embodiments, performing target recognition on the road area image captured by the video image capture device within the road monitoring area and determining the first road spill condition within its field of view comprises:
dividing a spill region to be detected from the road area image;
performing foreground target detection on the region to be detected, and determining at least one first candidate object from it;
performing target feature extraction on the region to be detected, and determining at least one second candidate object from it; each second candidate object is marked with a spill attribute or a non-spill attribute;
determining the first road spill condition within the field of view of the video image capture device based on the at least one first candidate object and the at least one second candidate object.
In this scheme, the candidate objects found by foreground target detection and those found by target feature extraction are superposed and fused, so that suspected road spills are screened out preliminarily and non-spill objects (such as people, motor vehicles and non-motor vehicles) are excluded. The suspected spills are then further confirmed, so that whether each suspected object actually is a road spill can be determined accurately, and false detections caused by light and shadow can be effectively eliminated.
In some exemplary embodiments, performing foreground target detection on the region to be detected and determining at least one first candidate object from it comprises:
determining at least one foreground target in the region to be detected through a Gaussian mixture model, each foreground target being a first candidate object;
and performing target feature extraction on the region to be detected and determining at least one second candidate object from it comprises:
determining the at least one second candidate object in the region to be detected through a target detection model; the target detection model identifies the attributes and coordinate positions of spill targets and non-spill targets.
In this scheme, the Gaussian mixture model identifies the multiple foreground targets present in the region to be detected relatively comprehensively, while the target detection model detects the attributes and coordinate positions of the spill and non-spill targets present in the region more accurately. The number of targets the Gaussian mixture model finds in the region is generally larger than the number the target detection model finds, while the detection accuracy of the target detection model is higher than that of the Gaussian mixture model. Combining the Gaussian mixture model with the target detection model therefore effectively improves the recall rate of road spill detection.
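The foreground-detection step can be illustrated with a simplified per-pixel background model. The sketch below is not the patent's implementation: it keeps a single running Gaussian per pixel as a stand-in for the full mixture of Gaussians, and the image size, learning rate and threshold are invented for the demo.

```python
import numpy as np

def update_background(mean, var, frame, lr=0.05):
    """Running per-pixel Gaussian background model (a simplified
    stand-in for the mixture-of-Gaussians model in the text)."""
    diff = frame - mean
    mean = mean + lr * diff
    var = var + lr * (diff ** 2 - var)
    return mean, var

def foreground_mask(mean, var, frame, k=3.5):
    """Pixels more than k standard deviations from the background
    mean are flagged as foreground (candidate spill objects)."""
    return np.abs(frame - mean) > k * np.sqrt(np.maximum(var, 1e-6))

# Hypothetical demo: a flat grey road, then a bright object appears.
rng = np.random.default_rng(0)
mean = np.full((40, 40), 100.0)
var = np.full((40, 40), 4.0)
for _ in range(50):                       # learn the empty road
    frame = 100.0 + rng.normal(0, 2, (40, 40))
    mean, var = update_background(mean, var, frame)
frame = 100.0 + rng.normal(0, 2, (40, 40))
frame[10:15, 10:15] = 200.0               # a dropped object
mask = foreground_mask(mean, var, frame)
print(bool(mask[10:15, 10:15].all()))     # object region detected
```

A production system would more likely use a library implementation such as OpenCV's MOG2 background subtractor, which maintains several Gaussians per pixel and adapts to gradual illumination change.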
In some exemplary embodiments, determining the first road spill condition based on the at least one first candidate object and the at least one second candidate object comprises:
performing deduplication on the at least one first candidate object and the at least one second candidate object, where a first candidate object and a second candidate object whose intersection-over-union is greater than or equal to a first set threshold are treated as the same candidate object;
and, after deduplication, determining through a target classification model whether each candidate object marked with the spill attribute, or left unmarked, actually belongs to a spill target, thereby obtaining the first road spill condition within the field of view of the video image capture device.
In this scheme, deduplicating the first and second candidate objects, that is, superposing and fusing them, together with the non-spill targets reported by the target detection model, allows the candidates marked with the spill attribute or left unmarked to be screened out preliminarily, excluding the non-spill targets in the road area image. The target classification model then further confirms each remaining candidate, so that whether it actually is a road spill can be determined, false detections caused by light and shadow can be effectively eliminated, and the detection accuracy of road spills is improved.
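The deduplication step can be sketched as an intersection-over-union match between the two candidate sets. The box format `(x1, y1, x2, y2)`, the label strings and the 0.5 threshold below are illustrative assumptions, not values fixed by the patent.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def merge_candidates(fg_boxes, det_boxes, iou_thresh=0.5):
    """Fuse foreground-model boxes with detector boxes: a pair whose
    IoU >= iou_thresh counts as one candidate (the detector's copy,
    which carries a spill / non-spill label, is kept)."""
    merged = list(det_boxes)                       # (box, label) pairs
    for fg in fg_boxes:
        if not any(iou(fg, box) >= iou_thresh for box, _ in det_boxes):
            merged.append((fg, None))              # unlabeled candidate
    return merged

fg = [(10, 10, 20, 20), (50, 50, 60, 60)]
det = [((11, 11, 21, 21), "projectile")]
out = merge_candidates(fg, det)
print(len(out))   # → 2: the matched pair merges, the other box survives
```

The surviving labeled or unlabeled candidates would then be passed to the target classification model for final confirmation.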
In some exemplary embodiments, determining the second road spill condition outside the field of view of the video image capture device based on the motion attribute data of vehicles within the road monitoring area comprises:
acquiring the motion attribute data, collected within a preset time period, of each vehicle outside the field of view of the video image capture device;
for any road surface position, determining, from the motion attribute data of the vehicles corresponding to that position within the preset time period, a first number of vehicles passing the position and a second number of those vehicles exhibiting abnormal behavior; the abnormal behavior comprises any of deceleration, braking or lane change;
determining the second road spill condition at the road surface position based on the first number and the second number.
In this scheme, after the motion attribute data collected within the preset time period is acquired, whether a road spill exists at a road position outside the field of view can be detected from the deceleration, braking, lane-change and similar abnormal behaviors of the vehicles passing that position within the period.
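The two counts can be sketched as follows. The record layout (speed samples in m/s plus lane ids per vehicle) and the deceleration threshold are hypothetical; the patent does not specify how motion attribute data is structured.

```python
# Hypothetical record shape: one entry per vehicle passing the road
# position, with its speed samples (m/s) and lane ids over the window.
records = [
    {"speeds": [25, 25, 24], "lanes": [2, 2, 2]},   # normal driving
    {"speeds": [26, 18, 10], "lanes": [2, 2, 2]},   # hard deceleration
    {"speeds": [24, 24, 23], "lanes": [2, 2, 1]},   # lane change
]

def is_abnormal(rec, decel_thresh=5.0):
    """Deceleration, braking, or a lane change counts as abnormal."""
    decel = any(a - b >= decel_thresh
                for a, b in zip(rec["speeds"], rec["speeds"][1:]))
    lane_change = len(set(rec["lanes"])) > 1
    return decel or lane_change

first_number = len(records)                          # vehicles passing
second_number = sum(is_abnormal(r) for r in records)  # abnormal ones
print(first_number, second_number)   # → 3 2
```

In the full scheme, vehicles whose abnormal behavior is explained by overtaking or emergency avoidance would be subtracted from `second_number` before the ratio test.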
In some exemplary embodiments, determining the second number of vehicles exhibiting abnormal behavior comprises:
for each vehicle exhibiting abnormal behavior, determining from its motion attribute data at each collection time within the preset time period whether the vehicle was overtaking or making an emergency avoidance of a preceding vehicle at the road surface position; if so, decrementing the second number by 1.
In this scheme, braking, deceleration and lane changes caused by a preceding vehicle, by overtaking and the like are screened out of the abnormal-behavior count, which ensures the accuracy of the counted second number and effectively reduces the false alarm rate of road spill detection.
In some exemplary embodiments, determining the second road spill condition at the road surface position based on the first number and the second number comprises:
if the road surface position is not at a fork: when the ratio of the second number to the first number is greater than or equal to a second set threshold, determining that a road spill exists at the position; when the ratio is smaller than the second set threshold, determining that no road spill exists at the position;
if the road surface position is at a fork: when the ratio of the second number to the first number is between the second set threshold and a third set threshold, determining that no road spill exists at the position; when the ratio is greater than or equal to the third set threshold, determining that a road spill exists at the position; the third set threshold is greater than the second set threshold.
In this scheme, braking, deceleration and lane changes caused by a road fork (such as a ramp exit) are accounted for: when judging the spill condition at a road surface position, it is first determined whether the position is at a fork, and the larger third threshold is applied at forks. This effectively reduces the false alarm rate of road spill detection.
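The decision rule above reduces to a ratio test with a fork-dependent threshold. The numeric thresholds in this sketch are placeholders; the patent only requires that the fork threshold be the larger of the two.

```python
def spill_decision(first, second, at_fork, t2=0.3, t3=0.6):
    """Decide whether a road spill is present from the ratio of
    abnormal-behavior vehicles (second) to passing vehicles (first).
    t2 and t3 are illustrative values, with t2 < t3 as required."""
    if first == 0:
        return False                 # no traffic, no evidence
    ratio = second / first
    if at_fork:
        return ratio >= t3           # forks tolerate more lane changes
    return ratio >= t2

print(spill_decision(20, 8, at_fork=False))  # 0.4 >= 0.3 → True
print(spill_decision(20, 8, at_fork=True))   # 0.4 <  0.6 → False
```

The same abnormal-behavior rate thus triggers an alarm on an ordinary segment but not at a ramp exit, which is exactly the false-alarm suppression the scheme aims for.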
In some exemplary embodiments, the method further comprises:
broadcasting the first road spill condition and the second road spill condition through at least one roadside device arranged on the road containing the road monitoring area, so that each vehicle travelling on that road can avoid the road spill.
In this scheme, after the road spill condition of a road monitoring area (for example detection time, affected lane and spill position) is determined, it can be broadcast through at least one roadside device in that area, so that every vehicle within the device's coverage receives the spill condition in time and decelerates or changes lanes in advance to avoid the spill, effectively ensuring driving safety.
In a second aspect, an exemplary embodiment of the present application provides an electronic device comprising a processor and a memory connected to the processor, the memory storing a computer program which, when executed by the processor, causes the electronic device to perform: performing target recognition on a road area image captured by a video image capture device in a road monitoring area, and determining a first road spill condition within the field of view of the video image capture device; and determining a second road spill condition outside that field of view based on motion attribute data of each vehicle within the road monitoring area; the first and second road spill conditions together indicate the road spill condition within the road monitoring area; the motion attribute data of each vehicle is reported by the vehicle itself or collected by a radar device arranged in the road monitoring area.
In some exemplary embodiments, the electronic device is specifically configured to perform:
dividing a spill region to be detected from the road area image;
performing foreground target detection on the region to be detected, and determining at least one first candidate object from it;
performing target feature extraction on the region to be detected, and determining at least one second candidate object from it; each second candidate object is marked with a spill attribute or a non-spill attribute;
determining a first road spill condition within the field of view of the video image capture device based on the at least one first candidate object and the at least one second candidate object.
In some exemplary embodiments, the electronic device is specifically configured to perform:
determining at least one foreground target in the region to be detected through a Gaussian mixture model, each foreground target being a first candidate object;
The electronic device is specifically configured to perform:
determining the at least one second candidate object in the region to be detected through a target detection model; the target detection model identifies the attributes and coordinate positions of spill targets and non-spill targets.
In some exemplary embodiments, the electronic device is specifically configured to perform:
performing deduplication on the at least one first candidate object and the at least one second candidate object, where a first candidate object and a second candidate object whose intersection-over-union is greater than or equal to a first set threshold are treated as the same candidate object;
and, after deduplication, determining through a target classification model whether each candidate object marked with the spill attribute, or left unmarked, belongs to a spill target, thereby obtaining the first road spill condition within the field of view of the video image capture device.
In some exemplary embodiments, the electronic device is specifically configured to perform:
acquiring the motion attribute data, collected within a preset time period, of each vehicle outside the field of view of the video image capture device;
for any road surface position, determining, from the motion attribute data of the vehicles corresponding to that position within the preset time period, a first number of vehicles passing the position and a second number of those vehicles exhibiting abnormal behavior; the abnormal behavior comprises any of deceleration, braking or lane change;
determining a second road spill condition at the road surface position based on the first number and the second number.
In some exemplary embodiments, the electronic device is specifically configured to perform:
for each vehicle exhibiting abnormal behavior, determining from its motion attribute data at each collection time within the preset time period whether the vehicle was overtaking or making an emergency avoidance of a preceding vehicle at the road surface position; if so, decrementing the second number by 1.
In some exemplary embodiments, the electronic device is specifically configured to perform:
if the road surface position is not at a fork: when the ratio of the second number to the first number is greater than or equal to a second set threshold, determining that a road spill exists at the position; when the ratio is smaller than the second set threshold, determining that no road spill exists at the position;
if the road surface position is at a fork: when the ratio of the second number to the first number is between the second set threshold and a third set threshold, determining that no road spill exists at the position; when the ratio is greater than or equal to the third set threshold, determining that a road spill exists at the position; the third set threshold is greater than the second set threshold.
In some exemplary embodiments, the electronic device is further configured to perform:
broadcasting the first road spill condition and the second road spill condition through at least one roadside device arranged on the road containing the road monitoring area, so that each vehicle travelling on that road can avoid the road spill.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program executable by an electronic device; when the program runs on the electronic device, it causes the device to perform the road spill detection method of any embodiment of the first aspect.
Drawings
To illustrate the technical solutions of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The following drawings show only some embodiments of the present application; a person skilled in the art could obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow diagram of a road spill detection method according to some embodiments of the present application;
Fig. 2 is a schematic view of road spill detection within the field of view of a video image capture device according to some embodiments of the present application;
Fig. 3 is a schematic view of road spill detection outside the field of view of a video image capture device according to some embodiments of the present application;
Fig. 4 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the application is described in further detail below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application; all other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present application.
Fig. 1 schematically illustrates the flow of a road spill detection method provided by an embodiment of the present application; the flow may be executed by an electronic device. The electronic device may be a server, a component (such as a chip or an integrated circuit) that supports a server in implementing the functions required by the method, or another device having those functions, such as a traffic control platform.
As shown in fig. 1, the process specifically includes:
In this embodiment, highways and urban roads are usually provided with video image capture devices (for example video surveillance cameras) and radar devices (for example millimeter-wave radar); a given road monitoring area may be provided with a surveillance camera, a millimeter-wave radar, or both. Alternatively, a vehicle travelling on the road may collect its own motion attribute data through on-board sensors and report the data to roadside equipment through an on-board unit. The detection range of a video image capture device is limited: its optimal detection distance is, for example, 50-70 meters, it can only capture video within its field of view, and in poor external conditions (such as heavy fog or heavy rain) it may capture video over an even smaller range. A radar device, by contrast, collects vehicle motion attribute data by emitting electromagnetic wave signals, is not affected by the quality of the external environment, and has a large detection range, such as 100 meters or more; compared with the video image capture device, it can collect vehicle motion attribute data over a longer range. A millimeter-wave radar, for example, is a radar whose operating frequency band lies in the millimeter-wave band.
The millimeter wave radar actively transmits electromagnetic wave signals, receives the echoes, and obtains the relative distance, relative speed and relative direction of a vehicle target from the time difference between transmission and reception. Taking a traffic control platform as the execution subject of the technical scheme of the embodiment of the present application as an example: the traffic control platform acquires in real time the video images collected by the video surveillance camera in a road monitoring area and the motion attribute data of each vehicle collected by the millimeter wave radar. By performing target recognition on the video images, it determines a first road projectile condition within the field of view of the video image acquisition equipment; by analyzing and processing the motion attribute data of each vehicle, it determines a second road projectile condition outside that field of view. The road projectile condition in the road monitoring area can thus be determined more comprehensively.
Specifically, for a given road monitoring area, after the road area image collected by the video image acquisition equipment is acquired, the road area image is detected and a road projectile region to be detected is divided from it. Foreground target detection is performed on this region, determining at least one first candidate object, and target feature extraction is performed on the same region, determining at least one second candidate object, where each second candidate object is marked with a projectile attribute or a non-projectile attribute. The first road projectile condition within the field of view of the video image acquisition equipment can then be determined based on the at least one first candidate object and the at least one second candidate object.
When dividing the road projectile region to be detected, lane line detection is performed on the road area image to identify the position of each lane line, and the region is divided from the image according to those positions. Illustratively, a deep learning algorithm (a lane line detection algorithm) detects the lane line targets in the road area image, and the road projectile detection region is divided according to the coordinate position of each lane line. Lane line detection algorithms generally fall into two classes: one performs semantic or instance segmentation based on visual features, such as LaneNet and SCNN (Spatial CNN); the other predicts the points where the lane line lies from visual features, such as Ultra-Fast-Lane-Detection. The embodiment of the application adopts the Ultra-Fast-Lane-Detection algorithm to detect the lane lines in the road area image. The Ultra-Fast-Lane-Detection model structure is divided into three parts: a Backbone network part, an Auxiliary part and a Group Classification part. The Backbone part adopts the relatively small ResNet18 network to extract image features; the Auxiliary part concatenates and upsamples three layers of shallow features to enhance visual feature extraction; and the Group Classification part computes candidate points from the global features to complete the selection of lane line candidate points. The division of the road projectile region to be detected within a lane can therefore be completed from the lane line coordinate positions detected by the Ultra-Fast-Lane-Detection algorithm.
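A minimal sketch of this division step, assuming the lane-line detector has already produced one x coordinate per image row for each of the two lane lines bounding the region (the function name and the per-row input format are hypothetical illustrations, not from the patent):

```python
def lane_region_mask(width, height, left_lane, right_lane):
    """Build a binary mask of the road region enclosed by two lane lines.

    left_lane / right_lane: dicts mapping row index y -> x coordinate of the
    lane-line point on that row (the per-row candidate points a lane-line
    detector such as Ultra-Fast-Lane-Detection outputs).
    """
    mask = [[0] * width for _ in range(height)]
    for y in range(height):
        if y in left_lane and y in right_lane:
            # mark every pixel between the two lane lines on this row
            x0, x1 = sorted((left_lane[y], right_lane[y]))
            for x in range(max(x0, 0), min(x1 + 1, width)):
                mask[y][x] = 1
    return mask

# Toy 8x3 image: the lane widens toward the bottom of the frame.
mask = lane_region_mask(8, 3, {0: 2, 1: 1, 2: 0}, {0: 5, 1: 6, 2: 7})
```

Only pixels inside the mask then need to be considered by the later foreground detection and target detection stages.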
Alternatively, in a vehicle-road cooperation scenario, the coordinate position and width of each lane can be acquired from high-precision map information, which likewise completes the division of the road projectile region to be detected within the lane.
Moreover, the embodiment of the application detects foreground targets in the road projectile detection region based on a Gaussian mixture model (a Gaussian mixture background modeling algorithm), detects targets in the same region based on a target detection algorithm, and superposes and fuses the targets detected by the two algorithms, thereby completing a preliminary screening of suspected road projectile regions and eliminating non-projectile targets such as people, motor vehicles and non-motor vehicles from the road area image. That is, at least one foreground target is determined in the region through the Gaussian mixture model, each foreground target being a first candidate object, and at least one second candidate object is determined in the region through the target detection model, which identifies the attributes and coordinate positions of projectile targets and non-projectile targets. The Gaussian mixture model identifies the foreground targets present in the region relatively comprehensively, while the target detection model detects the projectile and non-projectile targets in the region more accurately: the number of targets the Gaussian mixture model detects in the region is larger than the number the target detection model detects, while the detection accuracy of the target detection model is higher than that of the Gaussian mixture model.
Therefore, combining the Gaussian mixture model with the target detection model can effectively improve the recall rate of road projectile detection. Exemplarily, foreground target detection on the road projectile region to be detected means finding the set of pixel points that do not belong to the background, where in the embodiment of the present application the "background" refers to the road surface of the expressway. Before foreground detection is performed, the pixel values of the background therefore need to be determined, i.e. the background needs to be modeled. The embodiment of the application detects foreground targets in the region based on a Gaussian mixture background modeling algorithm. Gaussian mixture modeling essentially describes the range of background pixel values in some form based on the change of the video pixel values over a period of time. First, K Gaussian distributions are assigned to each pixel point in the video image as its background model, each Gaussian component comprising a pixel mean, a variance and a weight. The background model of each pixel point is:

P(x_{j,t}) = Σ_{i=1}^{K} ω_{i,j,t} · η(x_{j,t}, μ_{i,j,t}, Σ_{i,j,t})

where x_{j,t} denotes the pixel value of the j-th pixel point of the video image (the road projectile region to be detected being taken as the video image) at time t; since the image used for background modeling is a color image comprising several channels, x_{j,t} is a vector. P(x_{j,t}) denotes the background distribution of the pixel point, i.e. the background model of the j-th pixel point at time t; ω_{i,j,t} denotes the weight of the i-th Gaussian distribution in the mixture background model at time t, i.e. the proportion of the i-th Gaussian distribution in the mixture; μ_{i,j,t} denotes the mean of the i-th Gaussian distribution of the j-th pixel point at time t; Σ_{i,j,t} denotes the covariance of the i-th Gaussian distribution of the j-th pixel point at time t; and η denotes the probability density function of the Gaussian distribution.
The value of each pixel point in the video image at time t + 1 is compared with the means of the Gaussian mixture model: if it lies within the variance range of some component it is determined to be background, otherwise foreground. The pixel points of the video image can thus be classified into foreground and background. Namely:
|X_{i,t+1} − μ_{i,t}| ≤ D × σ_{i,t}
where X_{i,t+1} denotes the pixel value at time t + 1, μ_{i,t} denotes the mean of the i-th Gaussian distribution at time t, σ_{i,t} denotes the standard deviation of the i-th Gaussian distribution at time t, and D is a constant, taken as 3 in the embodiment of the application. If a pixel point X_{i,t+1} satisfies the formula, the pixel point is considered a background point; otherwise it is considered a foreground point.
On this basis the Gaussian mixture model realizes dynamic modeling of the background: a learning rate is set and the background model is continuously updated according to the matching results. For example, let K = 4 in the background model formula, so that at each time (i.e. each frame) each pixel represents its background model with 4 Gaussian distributions. Suppose the means of the four Gaussian distributions at the initial time are randomly set to 10, 19, 30 and 40, their variances are all 2, and the weight ω of each is 0.25. If the pixel value at some time is 20, it lies within the variance range of the second Gaussian distribution, i.e. |X_{i,t+1} − μ_{i,t}| ≤ D × σ_{i,t}, so the pixel is a background pixel; the variance, mean and weight of the second Gaussian distribution are updated with the pixel value 20, while the first, third and fourth distributions are not updated. The effect is that the weight of the Gaussian distribution closest to the pixel value becomes larger, and P(x_{j,t}) gradually approaches the true background value of the pixel point. If the pixel value at some time is 80, it lies within the variance range of none of the Gaussian distributions, i.e. |X_{i,t+1} − μ_{i,t}| > D × σ_{i,t}, so the pixel point is a foreground pixel point; the Gaussian distribution with the smallest weight among the four is deleted and replaced by a new Gaussian distribution with mean 80. A background model of the pixel can thus be dynamically established in real time.
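The per-pixel update described above can be sketched as follows. This is a minimal single-pixel, single-channel illustration under stated assumptions: the learning rate ALPHA and the reinitialized variance INIT_VAR are invented values, the weight handling on replacement is simplified, and D = 3 as in the text.

```python
import math

D = 3           # match threshold in standard deviations (as in the text)
ALPHA = 0.05    # learning rate (assumed value, not from the patent)
INIT_VAR = 2.0  # variance given to a newly created Gaussian (assumed)

def update_pixel(models, x):
    """One background-model update for a single grayscale pixel value x.

    models: list of K dicts {'w': weight, 'mu': mean, 'var': variance}.
    Returns 'background' if x matches some Gaussian (that Gaussian is then
    pulled toward x), else 'foreground' (the lowest-weight Gaussian is
    replaced by a new one centred on x).
    """
    for m in models:
        if abs(x - m['mu']) <= D * math.sqrt(m['var']):   # |X - mu| <= D*sigma
            m['w'] += ALPHA * (1.0 - m['w'])              # weight grows
            m['mu'] += ALPHA * (x - m['mu'])              # mean moves toward x
            m['var'] += ALPHA * ((x - m['mu']) ** 2 - m['var'])  # uses new mean
            return 'background'
    worst = min(models, key=lambda g: g['w'])             # smallest weight
    worst['mu'], worst['var'] = float(x), INIT_VAR        # replaced in place
    return 'foreground'

# Worked example from the text: means 10/19/30/40, variance 2, weights 0.25.
models = [{'w': 0.25, 'mu': float(mu), 'var': 2.0} for mu in (10, 19, 30, 40)]
print(update_pixel(models, 20))  # value 20 matches the second Gaussian
print(update_pixel(models, 80))  # value 80 matches none of the four
```

After the first call, only the second Gaussian has moved (mean 19 → 19.05, weight 0.25 → 0.2875); after the second call, one Gaussian has been rebuilt around 80.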
Therefore, foreground pixel points in the video image can be obtained through the Gaussian mixture model, and by denoising these pixel points a foreground target region in the video image, i.e. a suspected road projectile region, can be obtained.
Meanwhile, the target detection algorithm detects targets in the road projectile region to be detected. Unlike the Gaussian mixture background modeling algorithm, the target detection algorithm detects and localizes targets by extracting image features. That is, target detection is performed on the region and at least one second candidate object is determined; the target detection model identifies the attributes and coordinate positions of projectile targets and non-projectile targets. Illustratively, the embodiment of the present application takes the YOLOV5 (You Only Look Once version 5) algorithm as the target detection algorithm, i.e. detection and localization of projectiles within the field of view of the video image acquisition equipment is accomplished with the YOLOV5 algorithm. The structure of the YOLOV5 algorithm consists of three main parts: Backbone, Neck and Head. The Backbone of the YOLOV5 algorithm is CSPDarknet53, which extracts rich information features from the input image. The Neck is a series of network layers that mix and combine image features and pass them to the prediction layer; in YOLOV5 it is PANet, which completes bottom-up and top-down feature-pyramid feature extraction, aggregates parameters from different training stages of the backbone network, and improves the extraction of projectile target features. The Head is a detection head used to predict from the image features and generate projectile detection bounding boxes.
For example, taking an expressway as an example: road projectiles on an expressway are of many types. Common projectiles such as cartons, packages, tires, iron blocks, stones, water bottles and traffic cones, and non-projectiles such as pedestrians, motor vehicles and non-motor vehicles, are labeled in the surveillance video images collected by the video image acquisition equipment on the expressway to construct a projectile detection training data set, and iterative training of the YOLOV5 algorithm is completed on this data set. During inference, the video image to be detected (the road projectile detection region taken as a video image) is input to the YOLOV5 algorithm, which directly returns the attribute (or type) and coordinate position of each projectile target and each non-projectile target detected in the image.
Taking fig. 2 as an example: 2-a in fig. 2 is a video image to be detected, in which there are 4 cars Car1, Car2, Car3, Car4, 2 trucks Trunk1, Trunk2, and 2 projectiles Object1, Object2. When the Gaussian mixture background modeling algorithm is applied, the foreground objects detected within the field of view are Car2, Car3, Car4, Trunk2, Object1, Object2 and the tree shadow enclosed by dotted lines as shown in 2-b of fig. 2. The Gaussian mixture background modeling algorithm is susceptible to environmental factors such as illumination and shadow, and here the shadow of the tree shown in 2-b of fig. 2 is falsely detected as a foreground target. When the object detection algorithm is applied, the objects detected within the field of view are Car2, Car3, Car4 and Trunk2, enclosed by dotted lines as shown in 2-c of fig. 2 and marked with non-projectile attributes, and Object2, marked with a projectile attribute.
In addition, the at least one first candidate object and the at least one second candidate object are subjected to de-duplication, i.e. superposition and fusion: a first candidate object and a second candidate object whose intersection-over-union ratio is greater than or equal to a first set threshold are determined to be the same candidate object. The first set threshold may be set according to the experience of a person skilled in the art, according to results obtained from multiple experiments, or according to the actual application scenario, which is not limited in the embodiment of the present application. Then, for each candidate object marked with a projectile attribute or left unmarked after de-duplication, the target classification model determines whether it belongs to a projectile target, yielding the first road projectile condition within the field of view of the video image acquisition equipment; that is, the candidate objects marked with a projectile attribute or unmarked can be preliminarily screened out by combining them with the non-projectile targets detected by the target detection model, and the non-projectile targets in the road area image can be eliminated. For example, the targets detected by the Gaussian mixture background modeling algorithm and the target detection algorithm are fused by computing the IOU (Intersection over Union) of the target frames detected by the two algorithms, in order to determine whether they are the same target: when the IOU is greater than the fusion threshold they are regarded as the same target, otherwise as different targets.
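The box fusion described here can be sketched as follows; a minimal illustration in which the 0.5 fusion threshold and the data layout (foreground boxes as plain tuples, detector outputs as dicts with an attribute label) are assumed, not taken from the patent:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def fuse(first_candidates, second_candidates, threshold=0.5):
    """Merge foreground boxes with detector boxes: a pair whose IOU is at or
    above the threshold is treated as the same candidate object, and the
    foreground box inherits the detector's attribute label."""
    fused, used = [], set()
    for box in first_candidates:
        match = None
        for i, cand in enumerate(second_candidates):
            if i not in used and iou(box, cand['box']) >= threshold:
                match = cand
                used.add(i)
                break
        fused.append({'box': box, 'attr': match['attr'] if match else None})
    # detector-only boxes with no foreground counterpart are also kept
    fused.extend(c for i, c in enumerate(second_candidates) if i not in used)
    return fused
```

Candidates left with `attr` of `None` (foreground-only regions, like the tree shadow) are exactly the ones forwarded to the secondary classification step described next.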
Then, according to the coordinate positions of the non-projectile targets such as people, motor vehicles and non-motor vehicles detected by the target detection algorithm, the non-projectile targets among the fused targets can be removed, leaving the suspected road projectile regions enclosed by dotted lines as shown in 2-d of fig. 2. However, since the Gaussian mixture background modeling algorithm is very susceptible to light variation, false detections may occur in the suspected road projectile regions. Specifically, the embodiment of the application completes a secondary verification of the suspected road projectile regions based on ResNet50 (Residual Network). The ResNet50 network has two basic blocks, the Conv Block and the Identity Block: the Conv Block has different input and output dimensions and is used to change the dimensions of the network, while the Identity Block has identical input and output dimensions, and several Identity Blocks are connected in series to deepen the network. Such a deep residual network overcomes the problems of lower learning efficiency and stagnating accuracy caused by increasing network depth, and achieves a better feature extraction effect. Projectile images and non-projectile images are cut out of the surveillance video images collected by the video image acquisition equipment on the expressway, the cut-out projectiles and non-projectiles are labeled by type to construct a target classification training data set, and iterative training of the ResNet50 network is completed on this data set. The target images in the suspected road projectile regions (i.e. the objects marked with a projectile attribute) are then cut out and resized to a fixed size.
The resized target image to be detected is then input to the target classification model, algorithm inference is performed, and the probabilities that the image belongs to the various projectile and non-projectile classes are returned directly. False detection alarms caused by tree shadows can thereby be eliminated, i.e. the false alarm rate of road projectile detection can be reduced, and accurate detection of road projectiles can be completed.
Step 102: determining a second road projectile condition outside the field of view of the video image acquisition equipment based on the motion attribute data of each vehicle in the road monitoring area.
In the embodiment of the application, because the field of view of the video image acquisition equipment is limited, the detection algorithm used for video image detection has a large error outside that field of view and cannot effectively detect road projectiles there. To ensure real-time detection of road projectiles over the whole road section, it is therefore necessary to detect whether road projectiles exist outside the field of view of the video image acquisition equipment. That is, motion state information of vehicle targets, such as speed, heading angle and lane, is acquired based on the C-V2X (Cellular Vehicle-to-Everything) technology, and projectiles outside the field of view are predicted from abnormal driving behaviors, such as braking, deceleration and lane changes, of several vehicles within a period of time. Braking, deceleration and lane-change behaviors caused by forward collision avoidance, overtaking and the like are excluded using the motion state information of adjacent vehicle targets, which reduces the false alarm rate of road projectile detection. Combined with high-precision map information, vehicle braking, deceleration and lane-change behaviors caused by expressway ramp exits can also be excluded, further reducing the false alarm rate.
Specifically, the motion attribute data of each vehicle whose collection time falls in a preset time period, outside the field of view of the video image acquisition equipment, is first acquired. For any road surface position, a first number of vehicles passing that position within the preset time period, and a second number of those vehicles exhibiting abnormal behavior there, can be determined from the motion attribute data of the vehicles corresponding to that position; the abnormal behavior comprises any of deceleration, braking or lane change. The second road projectile condition for the road surface position is then determined based on the first number and the second number.
For example, analysis of road throwing events shows that when a projectile is present on the road ahead, the driver of a vehicle often performs abnormal driving behaviors such as braking, deceleration or lane changing on finding it. Taking fig. 3 as an example: the vehicle Car2 shown in 3-a of fig. 3 is in the third lane from the left at time t and finds the road projectile Object1 ahead, so Car2 brakes and decelerates; at time t + 1 Car2 depresses the brake pedal and its speed decreases compared with time t; at time t + 2 Car2 changes lane to the second lane from the left, so that compared with time t, the lane in which Car2 is located has changed.
For example, the vehicles Car1, Car2, Car3, Trunk1 and Trunk2 shown in 3-a of fig. 3 are all intelligent networked vehicles. An intelligent networked vehicle packages its real-time state, such as vehicle speed, high-precision positioning and brake-pedal status, into its Basic Safety Message (BSM), broadcasts it, and notifies surrounding vehicles and the roadside device RSU (Road Side Unit) in real time. Suppose that within a certain period Time0 the roadside device receives real-time state information of Num1 vehicles in total, of which Num0 vehicles decelerate, brake or change lane at a certain coordinate position (Longitude1, Latitude1); when Num0/Num1 is greater than or equal to a certain set threshold, it can be determined that a road throwing event has occurred on the road. It should be noted that if a vehicle is not an intelligent networked vehicle, the roadside device RSU cannot acquire its motion state information through the C-V2X technology; in that case the edge computing terminal (MEC) deployed at the roadside can acquire the vehicle speed, high-precision positioning and other information of the non-intelligent networked vehicle from the detection information of sensors such as the millimeter wave radar and the laser radar, and transmit the relevant information to the roadside device RSU, so that real-time detection of road throwing events is still ensured.
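A minimal sketch of the Num0/Num1 statistic, with the BSM reduced to a vehicle id plus a precomputed abnormal-behavior flag, and the 0.3 threshold an assumed value for illustration:

```python
def projectile_event_suspected(bsm_records, threshold=0.3):
    """Over one time window at one road position, count how many distinct
    vehicles passed (Num1) and how many of them braked, decelerated or
    changed lane there (Num0); flag a suspected throwing event when
    Num0/Num1 >= threshold.

    bsm_records: list of dicts like {'vehicle_id': ..., 'abnormal': bool},
    a simplified stand-in for real BSM fields.
    """
    vehicles = {r['vehicle_id'] for r in bsm_records}
    abnormal = {r['vehicle_id'] for r in bsm_records if r['abnormal']}
    num1, num0 = len(vehicles), len(abnormal)
    return num1 > 0 and num0 / num1 >= threshold
```

In practice the abnormal flag would itself come from comparing speed, brake-pedal status and lane across consecutive BSMs, after the overtaking and emergency-avoidance filtering described below.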
In order to exclude braking, deceleration and lane changes caused by forward collision avoidance, overtaking and the like, the motion attribute data of each vehicle with abnormal behavior is screened, ensuring the accuracy of the counted second number. Therefore, when determining the second number of vehicles with abnormal behavior, for each such vehicle it can be determined from its motion attribute data at each collection time within the preset time period whether the vehicle was overtaking or urgently avoiding a preceding vehicle at the road surface position, and if so, the second number is reduced by 1. For example, the vehicle Car3 shown in 3-b of fig. 3 is in the second lane from the left at time t and in the first lane from the left at time t + 1; at time t + 2 the speed of Car3 exceeds that of Car2 and the position of Car3 is ahead of that of Car2, so it can be determined that the lane change of Car3 was caused by overtaking, and this lane-change data is not counted. Alternatively, the roadside device RSU receives the real-time state information of the vehicles Car2 and Car3; when it finds that Car3 brakes or decelerates at time t, it also analyzes the relative distance between Car2 and Car3, and when it finds that this distance at time t is reduced compared with time t − 1, it can determine that the braking of Car3 was caused by emergency avoidance of the preceding vehicle Car2, and this braking data is not counted.
In addition, in order to exclude vehicle braking, deceleration or lane changes caused by a road fork (such as a ramp exit), when determining the road projectile condition at a certain road surface position it is first determined whether that position is at a fork. If the position is not at a fork, then when the ratio of the second number to the first number is greater than or equal to a second set threshold it is determined that a road projectile exists at the position, and when the ratio is less than the second set threshold it is determined that no road projectile exists there. If the position is at a fork, then when the ratio of the second number to the first number is greater than or equal to a third set threshold it is determined that a road projectile exists at the position, and when the ratio lies between the second set threshold and the third set threshold it is determined that no road projectile exists there. The scheme can thereby effectively reduce the false alarm rate of road projectile detection. The third set threshold is greater than the second set threshold; both may be set according to the experience of a person skilled in the art, according to results obtained from multiple experiments, or according to the actual application scenario, which is not limited in the embodiment of the present application.
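The fork-aware decision rule above can be sketched as follows; the second and third set thresholds (0.3 and 0.6) are assumed values for illustration:

```python
def road_projectile_condition(first_number, second_number, at_fork,
                              second_threshold=0.3, third_threshold=0.6):
    """True if a road projectile is judged to exist at the road position.

    first_number: vehicles that passed the position in the preset period;
    second_number: those with abnormal behavior, after the overtaking and
    emergency-avoidance filtering. At a fork (e.g. a ramp exit) the larger
    third set threshold applies, since lane changes there are expected.
    """
    if first_number == 0:
        return False
    ratio = second_number / first_number
    return ratio >= (third_threshold if at_fork else second_threshold)
```

The same abnormal-behavior ratio thus triggers an alarm on an ordinary road segment but is treated as normal traffic flow in a ramp area.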
For example, the vehicle Car3 shown in 3-c of fig. 3 is in the second lane from the right at time t and changes lane to the first lane from the right at time t + 1, so that compared with time t the lane in which Car3 is located has changed. The roadside device RSU receives the real-time state information of Car3. Suppose that within a certain period Time0 the roadside device receives real-time state information of num1 vehicles located in the second lane from the right, of which num0 vehicles finally change lane to the first lane from the right, and that, according to high-precision map information, the roadside device RSU detects that these vehicles are in a high-speed ramp area. Then, as long as the ratio num0/num1 remains within the normal numerical range (between the second set threshold and the third set threshold), the lane-change behavior of the vehicles is judged to be normal, because a vehicle needs to change to the first lane from the right in order to leave the current road at the ramp exit; it can thus be determined that no road throwing event has occurred in the second lane from the right. If the ratio num0/num1 is not within the normal numerical range (i.e. greater than the third set threshold), it can be determined that a road throwing event has occurred in the second lane from the right.
Finally, after the road projectile conditions in a certain road monitoring area (for example, a certain traffic section) are determined (including the first road projectile condition and the second road projectile condition, with information such as the projectile detection time, the lane in which the road projectile is located and the road projectile position), the road projectile conditions can be broadcast through at least one roadside device in the road monitoring area, so that every vehicle within the monitoring range of the at least one roadside device can receive them in time through its On Board Unit (OBU) and can decelerate or change lane in advance to avoid the road projectile, effectively ensuring driving safety. Meanwhile, the roadside device RSU can upload the detected road projectile information to a road supervision platform, so that inspection personnel can learn of it in time and handle it accordingly.
The above embodiments show that the technical scheme of the application makes full use of the video image acquisition equipment together with the motion attribute data of each vehicle acquired by vehicle reporting or by the radar equipment in the road monitoring area. The sensing equipment (various sensors and the like) arranged on a vehicle can collect and report its motion attribute data in real time without being affected by environmental factors such as weather and illumination; likewise, the radar equipment arranged in the road monitoring area locates targets (such as vehicles) by emitting electromagnetic waves, is not affected by such environmental factors, and can track and monitor targets at long distance. The limitation of the detection range of the video image acquisition equipment (which is comparatively small) can thus be broken, and detection of road projectiles over the whole road section can be realized. Specifically, for any road monitoring area, target recognition is performed on the road area image collected by the video image acquisition equipment, i.e. the collected road area image is detected and recognized to judge whether a road projectile exists in it, yielding the first road projectile condition within the field of view of the video image acquisition equipment.
However, since the detection range of the video image acquisition device is limited, the second road projectile condition outside its field of view is obtained by analyzing the acquired motion attribute data of each vehicle in the road monitoring area, that is, by judging from that data whether a road projectile exists in the region beyond the device's detection range. By combining target recognition on the road area images with the analysis, processing, and fusion of the vehicles' motion attribute data, in other words by fully fusing vehicle-road cooperation equipment such as video surveillance cameras, millimeter-wave radar, edge computing terminals, and C-V2X technology, the scheme breaks through the detection range of the video surveillance camera and is unaffected by environmental factors such as weather, illumination, and background pixels. Road projectiles can thus be detected automatically, which effectively reduces labor and material costs, extends detection to the whole road section where the road monitoring area is located, and provides effective support for ensuring the driving safety of vehicles.
Based on the same technical concept, fig. 4 exemplarily shows an electronic device provided by an embodiment of the present application, which can execute the flow of the road projectile detection method. The electronic device may be a server, a component (such as a chip or an integrated circuit) capable of supporting a server in implementing the functions required by the method, or another device having those functions, such as a traffic control platform.
As shown in fig. 4, the electronic device includes a processor 401 and a memory 402. In the embodiment of the present application, a specific connection medium between the processor 401 and the memory 402 is not limited, and fig. 4 illustrates an example in which the processor 401 and the memory 402 are connected by a bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 402 stores a computer program that, when executed by the processor 401, causes the electronic device to perform: carrying out target identification on a road area image acquired by video image acquisition equipment in a road monitoring area, and determining a first road object throwing condition in the view field range of the video image acquisition equipment; determining a second road projectile condition outside the field of view of the video image capture device based on the motion attribute data of each vehicle within the road monitoring area; the first and second road projectile conditions are indicative of a road projectile condition within the road monitoring area; the motion attribute data of each vehicle is reported by the vehicle or collected by radar equipment arranged in the road monitoring area.
In some exemplary embodiments, the electronic device is specifically configured to perform:
dividing a road projectile to-be-detected area from the road area image;
carrying out foreground target detection on the area to be detected of the road projectile, and determining at least one first candidate object from the area to be detected of the road projectile;
performing target feature extraction processing on the area to be detected of the road projectile, and determining at least one second candidate object from that area; each second candidate object is marked with a projectile attribute or a non-projectile attribute;
determining a first road projectile condition within a field of view of the video image capture device based on the at least one first candidate object and the at least one second candidate object.
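As a rough illustration of the first of these steps, the area to be examined can be isolated by masking each frame with a calibrated binary road mask. This is a sketch under assumptions: in practice the mask would come from camera calibration or lane-line detection, not be hard-coded.

```python
import numpy as np

def extract_detection_region(frame: np.ndarray, road_mask: np.ndarray) -> np.ndarray:
    """Zero out pixels outside the road surface so later detectors only see the road.

    frame:     H x W x 3 image
    road_mask: H x W binary (0/1) mask marking the road surface
    """
    if frame.shape[:2] != road_mask.shape:
        raise ValueError("mask must match frame size")
    return frame * road_mask[:, :, None]

frame = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[2:, :] = 1                    # suppose the lower half of the image is road
region = extract_detection_region(frame, mask)
```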
In some exemplary embodiments, the electronic device is specifically configured to perform:
determining at least one foreground target in the road projectile to-be-detected area through a Gaussian mixture model; each foreground object is a first candidate object;
the electronic device is specifically configured to perform:
determining the at least one second candidate object in the area to be detected of the road projectile through a target detection model; the target detection model is used for identifying the attributes and coordinate positions of projectile targets and non-projectile targets.
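The foreground step can be illustrated with a per-pixel running Gaussian model. This is a deliberate one-component simplification of the Gaussian mixture model the text names (OpenCV's `createBackgroundSubtractorMOG2` implements the full mixture); all parameter values are placeholders.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian background model: a one-component stand-in
    for the full Gaussian mixture model, kept minimal for illustration."""

    def __init__(self, lr=0.05, k=2.5):
        self.mean = None
        self.var = None
        self.lr, self.k = lr, k

    def apply(self, frame):
        frame = frame.astype(float)
        if self.mean is None:          # bootstrap the model from the first frame
            self.mean = frame.copy()
            self.var = np.full(frame.shape, 15.0 ** 2)
            return np.zeros(frame.shape, dtype=bool)
        # a pixel is foreground if it deviates by more than k standard deviations
        fg = np.abs(frame - self.mean) > self.k * np.sqrt(self.var)
        bg = ~fg
        d = frame - self.mean
        self.mean[bg] += self.lr * d[bg]                # adapt only where background
        self.var[bg] += self.lr * (d[bg] ** 2 - self.var[bg])
        return fg

model = RunningGaussianBackground()
for _ in range(50):                                     # learn a static road surface
    model.apply(np.full((8, 8), 100.0))
frame = np.full((8, 8), 100.0)
frame[3:5, 3:5] = 200.0                                 # a dropped object appears
fg = model.apply(frame)                                 # True where the object lies
```

Each connected region of the foreground mask would then become one first candidate object.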
In some exemplary embodiments, the electronic device is specifically configured to perform:
carrying out deduplication processing on the at least one first candidate object and the at least one second candidate object, and determining a first candidate object and a second candidate object whose intersection over union (IoU) is greater than or equal to a first set threshold to be the same candidate object;
and after the deduplication processing, determining through a target classification model whether each candidate object marked with the projectile attribute, and each unmarked candidate object, belongs to a projectile target, so as to obtain the first road projectile condition within the field of view of the video image acquisition device.
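A minimal sketch of the deduplication step, assuming axis-aligned boxes `(x1, y1, x2, y2)` and a dictionary shape for the labeled detector candidates (both representational assumptions made for illustration):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def deduplicate(first_candidates, second_candidates, threshold=0.5):
    """Merge the two candidate lists. A foreground box overlapping a detector
    box with IoU >= threshold counts as the same object; the detector box is
    kept because it already carries a projectile/non-projectile label."""
    merged = list(second_candidates)
    for box in first_candidates:
        if all(iou(box, c["box"]) < threshold for c in second_candidates):
            merged.append({"box": box, "label": None})   # unlabeled candidate
    return merged

candidates = deduplicate(
    [(0, 0, 10, 10), (50, 50, 60, 60)],                  # foreground boxes
    [{"box": (1, 1, 10, 10), "label": "projectile"}])    # detector boxes
```

The unlabeled entries that survive the merge are exactly the candidates the target classification model would then have to judge.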
In some exemplary embodiments, the electronic device is specifically configured to perform:
acquiring, for the region outside the field of view of the video image acquisition device, motion attribute data of each vehicle whose acquisition time falls within a preset time period;
for any road surface position, determining a first number of vehicles passing through the road surface position in the preset time period and a second number of vehicles passing through the road surface position and having abnormal behaviors based on motion attribute data of the vehicles corresponding to the road surface position in the preset time period; the abnormal behavior comprises any one of deceleration, braking or lane change;
determining a second road projectile condition of the road surface location based on the first number and the second number.
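The two counts can be sketched from per-vehicle summaries of the motion attribute data at one road surface position. The record fields here are illustrative, not the patent's data format.

```python
def count_vehicles(records):
    """records: one summary per vehicle that passed the road surface position
    within the preset time period, e.g.
        {"id": ..., "decelerated": bool, "braked": bool, "changed_lane": bool}

    Returns (first_number, second_number): all vehicles, and those showing
    abnormal behavior (deceleration, braking, or a lane change)."""
    first = len(records)
    second = sum(1 for r in records
                 if r["decelerated"] or r["braked"] or r["changed_lane"])
    return first, second

first, second = count_vehicles([
    {"id": 1, "decelerated": True,  "braked": False, "changed_lane": False},
    {"id": 2, "decelerated": False, "braked": False, "changed_lane": True},
    {"id": 3, "decelerated": False, "braked": False, "changed_lane": False},
])
```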
In some exemplary embodiments, the electronic device is specifically configured to perform:
for a vehicle exhibiting abnormal behavior, determining, based on the motion attribute data of the vehicle at each acquisition time within the preset time period, whether the vehicle was overtaking or making an emergency avoidance of a preceding vehicle at the road surface position; if so, the second number is decremented by 1.
In some exemplary embodiments, the electronic device is specifically configured to perform:
if the road surface position is not at a fork: when the ratio of the second number to the first number is greater than or equal to a second set threshold, determining that the second road projectile condition is that a road projectile exists at the road surface position; or, when the ratio of the second number to the first number is smaller than the second set threshold, determining that the second road projectile condition is that no road projectile exists at the road surface position;
if the road surface position is at a fork: when the ratio of the second number to the first number is between the second set threshold and a third set threshold, determining that the second road projectile condition is that no road projectile exists at the road surface position; when the ratio of the second number to the first number is greater than or equal to the third set threshold, determining that the second road projectile condition is that a road projectile exists at the road surface position; the third set threshold is greater than the second set threshold.
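Putting the two thresholds together, the decision could read as follows. The numeric threshold values are placeholders; the description only fixes that the third threshold exceeds the second.

```python
def has_road_projectile(first, second, at_fork, t2=0.5, t3=0.8):
    """Second road projectile condition at one road surface position.

    first:   number of vehicles that passed in the preset time period
    second:  number of those with abnormal behavior (after excluding
             overtaking and emergency-avoidance cases)
    at_fork: some braking and lane changing is normal at a fork, so the
             stricter threshold t3 applies there instead of t2
    """
    if first == 0:
        return False                  # no traffic, no evidence either way
    ratio = second / first
    return ratio >= (t3 if at_fork else t2)
```

For example, six abnormal vehicles out of ten would trigger a detection on a plain road segment (0.6 >= 0.5) but not at a fork (0.6 < 0.8).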
In some exemplary embodiments, the electronic device is further configured to perform:
and broadcasting the first road projectile condition and the second road projectile condition through at least one roadside device arranged on the road where the road monitoring area is located, so that each vehicle travelling on that road can avoid the road projectile.
In the embodiment of the present application, the memory 402 stores instructions executable by the at least one processor 401, and the at least one processor 401 may execute the steps included in the method for detecting a road spray by executing the instructions stored in the memory 402.
The processor 401 is the control center of the electronic device and may be connected to the various parts of the electronic device through various interfaces and lines; by running or executing the instructions stored in the memory 402 and calling the data stored in the memory 402, it implements the data processing. Optionally, the processor 401 may include one or more processing units and may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, while the modem processor mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 401. In some embodiments, the processor 401 and the memory 402 may be implemented on the same chip, or they may be implemented separately on independent chips.
The processor 401 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, or any combination thereof, and may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the road projectile detection method disclosed in connection with the embodiments may be performed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A road projectile detection method, comprising:
carrying out target identification on a road area image acquired by video image acquisition equipment in a road monitoring area, and determining a first road object throwing condition in the view field range of the video image acquisition equipment;
determining a second road projectile condition outside the field of view of the video image capture device based on the motion attribute data of each vehicle within the road monitoring area; the first and second road projectile conditions are indicative of a road projectile condition within the road monitoring area; the motion attribute data of each vehicle is reported by the vehicle or collected by radar equipment arranged in the road monitoring area.
2. The method of claim 1, wherein said performing object recognition on an image of a roadway area captured by a video image capture device within a roadway surveillance area to determine a first roadway projectile condition within a field of view of said video image capture device comprises:
dividing a road projectile to-be-detected area from the road area image;
carrying out foreground target detection on the area to be detected of the road projectile, and determining at least one first candidate object from the area to be detected of the road projectile;
performing target feature extraction processing on the area to be detected of the road projectile, and determining at least one second candidate object from that area; each second candidate object is marked with a projectile attribute or a non-projectile attribute;
determining a first road projectile condition within a field of view of the video image acquisition device based on the at least one first candidate object and the at least one second candidate object.
3. The method of claim 2, wherein the performing foreground object detection on the area to be detected of the roadway projectile and determining at least one first candidate object from the area to be detected of the roadway projectile comprises:
determining at least one foreground target in the road projectile to-be-detected area through a Gaussian mixture model; each foreground object is a first candidate object;
the target feature extraction processing is carried out on the area to be detected of the road projectile, and at least one second candidate object is determined from the area to be detected of the road projectile, and the method comprises the following steps:
determining the at least one second candidate object in the area to be detected of the road projectile through a target detection model; the target detection model is used for identifying the attributes and coordinate positions of the projectile targets and the non-projectile targets.
4. The method of claim 2, wherein determining a first road projectile condition within a field of view of the video image capturing device based on the at least one first candidate object and the at least one second candidate object comprises:
carrying out deduplication processing on the at least one first candidate object and the at least one second candidate object, and determining a first candidate object and a second candidate object whose intersection over union (IoU) is greater than or equal to a first set threshold to be the same candidate object;
and after the deduplication processing, determining through a target classification model whether each candidate object marked with the projectile attribute, and each unmarked candidate object, belongs to a projectile target, so as to obtain the first road projectile condition within the field of view of the video image acquisition device.
5. The method of claim 1, wherein determining a second road projectile condition outside the field of view of the video image capturing device based on the motion attribute data of each vehicle within the road monitoring area comprises:
acquiring, for the region outside the field of view of the video image acquisition device, motion attribute data of each vehicle whose acquisition time falls within a preset time period;
for any road surface position, determining a first number of vehicles passing through the road surface position in the preset time period and a second number of vehicles passing through the road surface position and having abnormal behaviors based on motion attribute data of the vehicles corresponding to the road surface position in the preset time period; the abnormal behavior comprises any one of deceleration, braking or lane change;
determining a second road projectile condition of the road surface location based on the first number and the second number.
6. The method of claim 5, wherein determining the second number of vehicles having abnormal behavior comprises:
for a vehicle exhibiting abnormal behavior, determining, based on the motion attribute data of the vehicle at each acquisition time within the preset time period, whether the vehicle was overtaking or making an emergency avoidance of a preceding vehicle at the road surface position; if so, the second number is decremented by 1.
7. The method of claim 5, wherein determining a second road projectile condition of the road surface location based on the first number and the second number comprises:
if the road surface position is not at a fork: when the ratio of the second number to the first number is greater than or equal to a second set threshold, determining that the second road projectile condition is that a road projectile exists at the road surface position; or, when the ratio of the second number to the first number is smaller than the second set threshold, determining that the second road projectile condition is that no road projectile exists at the road surface position;
if the road surface position is at a fork: when the ratio of the second number to the first number is between the second set threshold and a third set threshold, determining that the second road projectile condition is that no road projectile exists at the road surface position; when the ratio of the second number to the first number is greater than or equal to the third set threshold, determining that the second road projectile condition is that a road projectile exists at the road surface position; the third set threshold is greater than the second set threshold.
8. The method of claim 1, further comprising:
and broadcasting the first road projectile condition and the second road projectile condition through at least one road side device arranged on the road where the road monitoring area is located, so that each vehicle running on the road where the road monitoring area is located avoids the road projectile.
9. An electronic device comprising a processor and a memory, the processor being coupled to the memory, the memory storing a computer program that, when executed by the processor, causes the electronic device to perform: carrying out target identification on a road area image acquired by video image acquisition equipment in a road monitoring area, and determining a first road object throwing condition in the view field range of the video image acquisition equipment; determining a second road projectile condition outside the field of view of the video image capture device based on the motion attribute data of each vehicle within the road monitoring area; the first and second road projectile conditions are indicative of a road projectile condition within the road monitoring area; the motion attribute data of each vehicle is reported by the vehicle or collected by radar equipment arranged in the road monitoring area.
10. A computer-readable storage medium, storing a computer program executable by an electronic device, which when run on the electronic device causes the electronic device to perform the method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210230541.8A CN114694060B (en) | 2022-03-10 | 2022-03-10 | Road casting detection method, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114694060A true CN114694060A (en) | 2022-07-01 |
CN114694060B CN114694060B (en) | 2024-05-03 |
Family
ID=82137209
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210230541.8A Active CN114694060B (en) | 2022-03-10 | 2022-03-10 | Road casting detection method, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114694060B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115147441A (en) * | 2022-07-31 | 2022-10-04 | 江苏云舟通信科技有限公司 | Cutout special effect processing system based on data analysis |
CN116434160A (en) * | 2023-04-18 | 2023-07-14 | 广州国交润万交通信息有限公司 | Expressway casting object detection method and device based on background model and tracking |
CN116453065A (en) * | 2023-06-16 | 2023-07-18 | 云途信息科技(杭州)有限公司 | Road surface foreign matter throwing identification method and device, computer equipment and storage medium |
CN117830957A (en) * | 2024-02-23 | 2024-04-05 | 安徽大学 | A method for automatically detecting spilled objects on highways |
CN118072530A (en) * | 2024-02-19 | 2024-05-24 | 安徽大学 | A vehicle abnormal behavior monitoring system for highways |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106339677A (en) * | 2016-08-23 | 2017-01-18 | 天津光电高斯通信工程技术股份有限公司 | Video-based railway wagon dropped object automatic detection method |
CN106781570A (en) * | 2016-12-30 | 2017-05-31 | 大唐高鸿信息通信研究院(义乌)有限公司 | A kind of highway danger road conditions suitable for vehicle-mounted short distance communication network are recognized and alarm method |
CN109212520A (en) * | 2018-09-29 | 2019-01-15 | 河北德冠隆电子科技有限公司 | The road conditions perception accident detection alarm system and method for comprehensive detection radar |
CN109886219A (en) * | 2019-02-26 | 2019-06-14 | 中兴飞流信息科技有限公司 | Shed object detecting method, device and computer readable storage medium |
US20200026302A1 (en) * | 2018-07-19 | 2020-01-23 | Toyota Research Institute, Inc. | Method and apparatus for road hazard detection |
CN111274982A (en) * | 2020-02-04 | 2020-06-12 | 浙江大华技术股份有限公司 | Method and device for identifying projectile and storage medium |
CN112037266A (en) * | 2020-11-05 | 2020-12-04 | 北京软通智慧城市科技有限公司 | Falling object identification method and device, terminal equipment and storage medium |
CN112149649A (en) * | 2020-11-24 | 2020-12-29 | 深圳市城市交通规划设计研究中心股份有限公司 | Road spray detection method, computer equipment and storage medium |
CN112330658A (en) * | 2020-11-23 | 2021-02-05 | 丰图科技(深圳)有限公司 | Sprinkler detection method, device, electronic device, and storage medium |
CN114119653A (en) * | 2021-09-28 | 2022-03-01 | 浙江大华技术股份有限公司 | Sprinkler detection method, device, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114694060B (en) | 2024-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114694060B (en) | Road casting detection method, electronic equipment and storage medium | |
Zhao et al. | Detection and tracking of pedestrians and vehicles using roadside LiDAR sensors | |
Tian et al. | An automatic car accident detection method based on cooperative vehicle infrastructure systems | |
US11836985B2 (en) | Identifying suspicious entities using autonomous vehicles | |
US11380105B2 (en) | Identification and classification of traffic conflicts | |
US20190333371A1 (en) | Driver behavior monitoring | |
JP6591842B2 (en) | Method and system for performing adaptive ray-based scene analysis on semantic traffic space, and vehicle comprising such a system | |
CN110362077A (en) | Automatic driving vehicle urgent danger prevention decision system, method and medium | |
Abdel-Aty et al. | Using closed-circuit television cameras to analyze traffic safety at intersections based on vehicle key points detection | |
US11314974B2 (en) | Detecting debris in a vehicle path | |
CN113378751A (en) | Traffic target identification method based on DBSCAN algorithm | |
CN114093165A (en) | A method for automatic identification of vehicle-pedestrian conflict based on roadside lidar | |
CN114414259A (en) | Anti-collision test method and device for vehicle, electronic equipment and storage medium | |
Lai et al. | Sensor fusion of camera and MMW radar based on machine learning for vehicles | |
US11555928B2 (en) | Three-dimensional object detection with ground removal intelligence | |
CN112927514B (en) | Prediction method and system for motor vehicle yellow light running behavior based on 3D lidar | |
CN117173666A (en) | Automatic driving target identification method and system for unstructured road | |
Prarthana et al. | A Comparative Study of Artificial Intelligence Based Vehicle Classification Algorithms Used to Provide Smart Mobility | |
KR20240162127A (en) | Instance segmentation in doctor images | |
CN116587978A (en) | A collision warning method and system based on a vehicle-mounted display screen | |
Aron et al. | Current Approaches in Traffic Lane Detection: A minireview | |
CN113128847A (en) | Entrance ramp real-time risk early warning system and method based on laser radar | |
CN116811884B (en) | Intelligent driving environment perception analysis method and system | |
CN119296079B (en) | Point cloud real-time semantic segmentation automatic driving roadblock detection method and system | |
WO2024067174A1 (en) | Prediction method and apparatus, and intelligent driving device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |