CN113808437A - Blind area monitoring and early warning method for automatic driving vehicle
- Publication number
- CN113808437A (application CN202111062752.7A)
- Authority
- CN
- China
- Prior art keywords
- early warning
- data
- target
- detection
- body side
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/167—Driving aids for lane monitoring, lane changing, e.g. blind spot detection
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q9/00—Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
- B60Q9/008—Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling for anti-collision purposes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/18—Status alarms
- G08B21/182—Level alarms, e.g. alarms responsive to variables exceeding a threshold
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/165—Anti-collision systems for passive traffic, e.g. including static obstacles, trees
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/143—Alarm means
Abstract
The embodiment of the invention relates to a blind area monitoring and early warning method for an automatic driving vehicle, which comprises the following steps: the automatic driving vehicle obtains environment visibility state data; when the environment visibility state data is in a first state, the detection range of the vehicle body side radar is taken as a first monitoring blind area, and the vehicle body side radar is called to perform radar target object detection processing on the first monitoring blind area, generating a corresponding first target object set; when the environment visibility state data is in a second state, the shooting range of the vehicle body side camera is taken as a second monitoring blind area, the vehicle body side camera is called to shoot real-time video of the second monitoring blind area, generating a corresponding first video, and per-frame target object identification and multi-frame target tracking processing are performed on the first video to generate a corresponding first target object set; target graded early warning processing is then carried out according to the first target object set. The invention automatically switches the blind area monitoring and early warning processing mode based on visibility, thereby ensuring the efficiency and accuracy of blind area monitoring.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to a blind area monitoring and early warning method for an automatic driving vehicle.
Background
For a conventional manned vehicle, information about the rearview mirror blind area during driving is generally acquired manually by the driver, for example by turning his or her head to the side to gather more visual information about the environment. This mode of operation is not suitable for unmanned or autonomous vehicles.
Disclosure of Invention
The invention aims to provide a blind area monitoring and early warning method for an automatic driving vehicle, an electronic device and a computer readable storage medium, so as to overcome the defects of the prior art. The invention can automatically switch the processing mode of blind area monitoring and early warning based on visibility, thereby ensuring the blind area monitoring efficiency and accuracy of the automatic driving vehicle.
In order to achieve the above object, a first aspect of the embodiments of the present invention provides a blind area monitoring and early warning method for an autonomous vehicle. The method is applied to an autonomous vehicle that includes a vehicle-body-side camera and a vehicle-body-side radar, and comprises the following steps:
the autonomous vehicle obtaining environmental visibility state data;
when the environment visibility state data is in a first state, taking the detection range of the vehicle body side radar as a first monitoring blind area; calling the vehicle body side radar to perform radar target object detection processing on the first monitoring blind area, and generating a corresponding first target object set;
when the environment visibility state data is in a second state, taking the shooting range of the vehicle body side camera as a second monitoring blind area; calling the vehicle body side camera to carry out real-time video shooting on the second monitoring blind area to generate a corresponding first video; performing per-frame target object identification and multi-frame target tracking processing on the first video to generate a corresponding first target object set;
and carrying out target grading early warning processing according to the first target object set.
Preferably, before the autonomous vehicle acquires the environmental visibility state data, the method further comprises:
calling a vehicle-mounted camera at preset first time intervals to carry out real-time image shooting on the surrounding environment of the automatic driving vehicle, and generating a corresponding first environment image;
carrying out target object identification processing on the first environment image to generate a plurality of first environment target objects, and counting the number of the first environment target objects to generate a first number;
judging whether the first number is lower than a preset first number threshold; if the first number is lower than the first number threshold, setting the environment visibility state data to the first state; if the first number is not lower than the first number threshold, setting the environment visibility state data to the second state.
Preferably, the radar type of the vehicle body side radar comprises a millimeter wave radar, an ultrasonic radar and a laser radar;
the body side radar comprises a left body side radar and a right body side radar; the detection range of the vehicle body side radar comprises a left vehicle body side radar detection range and a right vehicle body side radar detection range; the left vehicle body side radar corresponds to the detection range of the left vehicle body side radar; the right body side radar corresponds to the detection range of the right body side radar;
the shooting range of the vehicle body side camera is a preset range of a blind area of the rearview mirror; the vehicle body side camera comprises a left vehicle body side camera and a right vehicle body side camera; the range of the blind area of the rearview mirror comprises a range of the blind area of the left rearview mirror and a range of the blind area of the right rearview mirror; the left vehicle body side camera corresponds to the range of the blind area of the left rearview mirror; the right vehicle body side camera corresponds to the range of the dead zone of the right rearview mirror;
the detection range of the vehicle body side radar is larger than the range of the blind area of the rearview mirror; specifically: the detection range of the left vehicle body side radar is larger than the range of the blind area of the left rearview mirror, and the detection range of the right vehicle body side radar is larger than the range of the blind area of the right rearview mirror.
Preferably, the first target object set comprises a plurality of first target object arrays;
the first target object array includes first identification data, first type data, first distance data, and first relative velocity data.
Preferably, the invoking the vehicle body side radar to perform radar target object detection processing on the first monitoring blind area to generate a corresponding first target object set specifically includes:
calling the radar on the side of the vehicle body to perform radar scanning on the first monitoring blind area according to a preset first radar detection frequency to generate corresponding first radar frame data;
performing multi-target detection and target motion track tracking processing on the first radar frame data of the latest specified number to obtain a plurality of first detection targets and corresponding first detection target data groups; the first detection target data group comprises first detection target identification data, first detection target type data and first detection target motion trail data; the first detected target motion trajectory data includes a plurality of first detected target trajectory point data;
calculating the shortest driving distance between the current detection target and the automatic driving vehicle according to the first detection target motion trail data corresponding to each first detection target and the current position information of the automatic driving vehicle, and generating corresponding first detection target distance data; calculating the relative speed of the current detection target and the automatic driving vehicle according to the motion trajectory data of the first detection target corresponding to each first detection target, and generating corresponding first detection target relative speed data;
creating a corresponding first target object array for each first detection target; setting the first identification data of the first target object array as the corresponding first detection target identification data of the first detection target data array; setting the first type data of the first target object array as the first detection target type data of the corresponding first detection target data array; setting the first distance data of the first target object array as corresponding first detection target distance data; setting the first relative speed data of the first target object array as corresponding first detection target relative speed data; and the first target object set is formed by all the first target object arrays which are completely set.
Preferably, the performing the per-frame target object identification and multi-frame target tracking processing on the first video to generate the corresponding first target object set specifically includes:
performing frame image extraction processing on the first video to generate a plurality of first image frame data;
performing multi-target detection and target motion trajectory tracking processing on the first image frame data of the latest specified number to obtain a plurality of second detection targets and corresponding second detection target data groups; the second detection target data group comprises second detection target identification data, second detection target type data and second detection target motion track data; the second detected target motion trajectory data includes a plurality of second detected target trajectory point data;
calculating the shortest driving distance between the current detection target and the automatic driving vehicle according to the second detection target motion trail data corresponding to each second detection target and the current position information of the automatic driving vehicle, and generating corresponding second detection target distance data; calculating the relative speed of the current detection target and the automatic driving vehicle according to the second detection target motion trail data corresponding to each second detection target, and generating corresponding second detection target relative speed data;
creating a corresponding first target object array for each second detection target; setting the first identification data of the first target object array as the second detection target identification data of the corresponding second detection target data array; setting the first type data of the first target object array as the second detection target type data of the corresponding second detection target data array; setting the first distance data of the first target object array as corresponding second detection target distance data; setting the first relative speed data of the first target object array as corresponding second detection target relative speed data; and the first target object set is formed by all the first target object arrays which are completely set.
Preferably, the performing a target grading early warning process according to the first target object set specifically includes:
in the first target object set, according to the first distance data and the first relative speed data in each first target object array, estimating the collision time between a current detection target and the automatic driving vehicle, and generating corresponding first collision time data; recording the first collision time data with the minimum numerical value as first shortest time;
recording the first distance data with the minimum numerical value as a first shortest distance in the first target object set;
according to a preset first early warning distance threshold, a preset second early warning distance threshold, a preset first collision early warning time and a preset second collision early warning time, carrying out early warning level identification processing on the first shortest distance and the first shortest time to generate a corresponding first early warning level; the first early warning distance threshold is greater than the second early warning distance threshold; the first collision early warning time is greater than the second collision early warning time; the first early warning level comprises a first-level early warning level, a second-level early warning level, a third-level early warning level, a fourth-level early warning level and a fifth-level early warning level, with the early warning severity increasing progressively from the first level to the fifth level;
and performing corresponding grade early warning treatment according to the first early warning grade.
Further, according to a preset first early warning distance threshold, a second early warning distance threshold, a first collision early warning time, and a second collision early warning time, the first shortest distance and the first shortest time are subjected to early warning level identification processing, and a corresponding first early warning level is generated, which specifically includes:
when the first shortest distance is lower than the first early warning distance threshold but not lower than the second early warning distance threshold, and the first shortest time is not lower than the first collision early warning time, setting the first early warning level as a first-level early warning level;
when the first shortest distance is lower than the first early warning distance threshold but not lower than the second early warning distance threshold, and the first shortest time is lower than the first collision early warning time but not lower than the second collision early warning time, setting the first early warning level as a second-level early warning level;
when the first shortest distance is lower than the second early warning distance threshold and the first shortest time is not lower than the first collision early warning time, setting the first early warning level as a third-level early warning level;
when the first shortest distance is lower than the first early warning distance threshold but not lower than the second early warning distance threshold and the first shortest time is lower than the second collision early warning time, or when the first shortest distance is lower than the second early warning distance threshold and the first shortest time is lower than the first collision early warning time but not lower than the second collision early warning time, setting the first early warning level as a fourth-level early warning level;
and when the first shortest distance is lower than the second early warning distance threshold and the first shortest time is lower than the second collision early warning time, setting the first early warning level as a fifth-level early warning level.
A second aspect of an embodiment of the present invention provides an electronic device, including: a memory, a processor, and a transceiver;
the processor is configured to be coupled to the memory, read and execute instructions in the memory, so as to implement the method steps of the first aspect;
the transceiver is coupled to the processor, and the processor controls the transceiver to transmit and receive messages.
A third aspect of embodiments of the present invention provides a computer-readable storage medium storing computer instructions that, when executed by a computer, cause the computer to perform the method of the first aspect.
The embodiment of the invention provides a blind area monitoring and early warning method of an automatic driving vehicle, electronic equipment and a computer readable storage medium. According to the invention, the processing mode of blind area monitoring and early warning can be automatically switched based on the visibility, so that the blind area monitoring efficiency and accuracy of the automatic driving vehicle are ensured.
Drawings
Fig. 1 is a schematic diagram of a blind area monitoring and early warning method for an automatic driving vehicle according to a first embodiment of the present invention;
fig. 2 is a quadrant schematic diagram of an early warning level according to a first embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
During driving of an automatic driving vehicle, based on the blind area monitoring and early warning method provided by the embodiment of the invention, the corresponding vehicle-mounted radar or camera is selected according to the environment visibility state to carry out blind area monitoring and early warning processing, thereby ensuring the blind area monitoring efficiency and accuracy of the automatic driving vehicle. Fig. 1 is a schematic diagram of the blind area monitoring and early warning method for an automatic driving vehicle according to a first embodiment of the present invention; as shown in fig. 1, the method mainly includes the following steps:
step 1, an automatic driving vehicle acquires environmental visibility state data;
the automatic driving vehicle applicable to the method comprises a vehicle body side camera and a vehicle body side radar.
Here, the environment visibility state data acquired by the autonomous vehicle identifies the visibility state of the environment around the autonomous vehicle. The environment visibility state data can take a first state or a second state: the first state indicates that the visibility of the environment around the vehicle is poor, while the second state indicates that the visibility of the environment around the vehicle is good.
In addition, the vehicle body side camera comprises a left vehicle body side camera and a right vehicle body side camera; the shooting range of the vehicle body side camera is a preset rearview mirror blind area range, and the rearview mirror blind area range comprises a left rearview mirror blind area range and a right rearview mirror blind area range; the left vehicle body side camera corresponds to the range of the blind area of the left rearview mirror; the right vehicle body side camera corresponds to the range of the dead zone of the right rearview mirror;
the radar types of the vehicle body side radar comprise a millimeter wave radar, an ultrasonic radar and a laser radar; the vehicle body side radar comprises a left vehicle body side radar and a right vehicle body side radar; the detection range of the vehicle body side radar comprises a left vehicle body side radar detection range and a right vehicle body side radar detection range; the left vehicle body side radar corresponds to the detection range of the left vehicle body side radar; the right body side radar corresponds to the detection range of the right body side radar;
the detection range of the vehicle body side radar is larger than the preset range of the blind area of the rearview mirror; specifically: the detection range of the left vehicle body side radar is larger than the range of the blind area of the left rearview mirror, and the detection range of the right vehicle body side radar is larger than the range of the blind area of the right rearview mirror.
Here, the left body side camera is conventionally mounted on the side or bottom of the left rearview mirror of the autonomous vehicle, and may also be mounted outside the left A-pillar at a position higher than the left rearview mirror; similarly, the right body side camera is conventionally mounted on the side or bottom of the right rearview mirror, and may also be mounted outside the right A-pillar at a position higher than the right rearview mirror. The preset range of the blind area of the rearview mirror is a collective name for the left and right blind areas that cannot be covered by the exterior left rearview mirror, the interior rearview mirror and the exterior right rearview mirror of the autonomous vehicle, where the left blind area is the range of the blind area of the left rearview mirror and the right blind area is the range of the blind area of the right rearview mirror. The left body side camera is thus the device that shoots the blind area range of the left rearview mirror, and the right body side camera is the device that shoots the blind area range of the right rearview mirror;
the left body-side radar may be mounted on the left side of the front and rear bumpers of the autonomous vehicle, and may be mounted at a designated position on the left body, and similarly, the right body-side radar may be mounted on the right side of the front and rear bumpers of the autonomous vehicle, and may be mounted at a designated position on the right body; the left body side radar is actually radar equipment for detecting and scanning the detection range of the left body side radar, and the right body side radar is actually radar equipment for detecting and scanning the detection range of the right body side radar;
as can be seen from the subsequent steps, the radar is used for blind area monitoring when the visibility of the current environment is low; in this case the detection range is further expanded, so the detection range of the vehicle body side radar is set to be larger than the range of the blind area of the rearview mirror. Subdividing into the left and right sides, the detection range of the left body side radar is accordingly larger than the range of the blind area of the left rearview mirror, and the detection range of the right body side radar is larger than the range of the blind area of the right rearview mirror.
In addition, before the autonomous vehicle acquires the environment visibility state data, the embodiment of the present invention further provides a mechanism for generating the environment visibility state data, which specifically includes:
step A1, calling a vehicle-mounted camera at preset first time intervals to shoot images of the surrounding environment of the automatic driving vehicle in real time, and generating corresponding first environment images;
here, the first time interval is a preset, flexibly configurable time interval parameter. If the weather of the area where the autonomous vehicle is located changes rapidly and the weather state is complex, the first time interval can be set to a shorter interval, so as to increase the checking frequency of the environment visibility; conversely, if the weather of the area changes little and the weather state is simple, the first time interval can be set to a longer interval, so as to reduce the checking frequency. When the vehicle-mounted camera is called to shoot real-time images of the surroundings of the autonomous vehicle, any vehicle-mounted camera (including a body side camera) can be used;
step A2, carrying out target object recognition processing on the first environment image to generate a plurality of first environment target objects, and counting the number of the first environment target objects to generate a first number;
here, target detection is performed on the first environment image using a well-trained two-dimensional image target recognition model or image semantic segmentation model based on a convolutional neural network structure, yielding a plurality of target objects, namely the first environment target objects, whose count is the first number. If the visibility of the surroundings of the autonomous vehicle is poor, the number of target objects that can be successfully detected is small;
step A3, judging whether the first number is lower than a preset first number threshold; if the first number is lower than the first number threshold, setting the environment visibility state data to the first state; if the first number is not lower than the first number threshold, setting the environment visibility state data to the second state.
Here, the first number threshold is a preset target count threshold, usually set to a small value. When the first number is below the threshold, the visibility of the surroundings is poor and the recognition performance of the two-dimensional image target recognition model or image semantic segmentation model is low; continuing to use the camera for blind area monitoring and early warning would increase the missed-detection rate and reduce the safety of automatic driving, so the environment visibility state data is set to the first state so that the subsequent steps select the body side radar as the blind area monitoring device. When the first number is not below the threshold, the environment visibility state data is set to the second state so that the subsequent steps select the body side camera as the blind area monitoring device.
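As an illustration of steps A1 to A3, the following is a minimal sketch of the visibility-state decision; the detector interface, the state names and the concrete threshold value are assumptions for illustration, since the embodiment does not fix a specific recognition model or threshold value.

```python
from enum import Enum

class VisibilityState(Enum):
    FIRST = 1   # poor visibility: use the body side radar
    SECOND = 2  # good visibility: use the body side camera

FIRST_NUMBER_THRESHOLD = 3  # assumed value of the preset first number threshold

def classify_visibility(first_environment_image, detect_objects):
    """Step A2: recognize target objects in the environment image and count them;
    step A3: compare the count against the preset first number threshold."""
    first_environment_targets = detect_objects(first_environment_image)
    first_number = len(first_environment_targets)
    if first_number < FIRST_NUMBER_THRESHOLD:
        return VisibilityState.FIRST
    return VisibilityState.SECOND
```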
Step 2, when the environment visibility state data is in a first state, taking the detection range of the radar on the side of the vehicle body as a first monitoring blind area; a vehicle body side radar is called to carry out radar target object detection processing on the first monitoring blind area, and a corresponding first target object set is generated;
wherein the first set of target objects comprises a plurality of first arrays of target objects; the first target object array comprises first identification data, first type data, first distance data and first relative speed data;
here, each first target object array corresponds to one target object in the environment; the first identification data is the unique identification information allocated to each target object; the first type data is the target type of each target object, such as building, person, animal, bicycle, motorcycle, car, truck, rail train, and the like; the first distance data is the driving distance from each target object in the environment to the autonomous vehicle; the first relative speed data is the relative driving speed between each target object in the environment and the autonomous vehicle;
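A possible in-memory form of the first target object array, sketched here as a dataclass; the field names and units are illustrative assumptions, not part of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class FirstTargetObject:
    identification: str        # first identification data: unique per target object
    target_type: str           # first type data, e.g. "person", "car", "truck"
    distance_m: float          # first distance data: driving distance to the ego vehicle
    relative_speed_mps: float  # first relative speed data relative to the ego vehicle
```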
the environment visibility state data is in a first state, which indicates that the visibility of the surrounding environment of the automatic driving vehicle is poor, so that the detection range of the vehicle body side radar is used as a first monitoring blind area to achieve the purpose of expanding the monitoring range, and the vehicle body side radar is used for carrying out target detection on the first monitoring blind area to obtain a corresponding first target object set;
the method specifically comprises the following steps: step 21, taking the detection range of the radar on the side of the vehicle body as a first monitoring blind area;
step 22, calling a vehicle body side radar to perform radar target object detection processing on the first monitoring blind area, and generating a corresponding first target object set;
the method specifically comprises the following steps: step 221, calling a radar on the side of the vehicle body to perform radar scanning on a first monitoring blind area according to a preset first radar detection frequency, and generating corresponding first radar frame data;
when the body side radar is the left body side radar, the first monitoring blind area is the detection range of the left body side radar, and the first radar frame data is frame data obtained by the left body side radar scanning that detection range; when the body side radar is the right body side radar, the first monitoring blind area is the detection range of the right body side radar, and the first radar frame data is frame data obtained by the right body side radar scanning that detection range;
step 222, performing multi-target detection and target motion track tracking processing on the first radar frame data of the latest specified number to obtain a plurality of first detection targets and corresponding first detection target data groups;
the first detection target data group comprises first detection target identification data, first detection target type data and first detection target motion track data; the first detected target motion trajectory data includes a plurality of first detected target trajectory point data;
the method specifically comprises the following steps: step B1, forming a latest first radar frame data sequence by the latest first radar frame data with specified quantity;
step B2, performing multi-target detection and target motion track tracking processing of radar point cloud on the first radar frame data sequence to obtain a plurality of first detection targets and corresponding first detection target data groups;
specifically, the method comprises the following steps: step B21, performing point cloud data conversion on each first radar frame data in the first radar frame data sequence to generate a corresponding first frame point cloud set;
here, after each first radar frame data is filtered and denoised, data conversion from radar coordinates of the corresponding type to point cloud coordinates is performed on the filtered and denoised first radar frame data according to the specific type of the current body side radar, yielding the corresponding first frame point cloud set;
step B22, based on a well-trained point cloud target identification model or a point cloud semantic segmentation model, performing target detection processing on each first frame point cloud set to obtain a plurality of first single-frame detection targets;
here, each first single-frame detection target corresponds to one target position information and one target type information;
step B23, grouping the first single-frame detection targets corresponding to the same target object in all the first frame point cloud sets into a group, and marking as a first detection target group;
step B24, assigning a unique identifier to each first detection target group as corresponding first detection target identifier data; taking the target type information corresponding to each first detection target group as corresponding first detection target type data; extracting target position information of a first single-frame detection target of each first detection target group as first detection target track point data, and sequencing all the first detection target track point data according to time sequence to form corresponding first detection target motion track data; a first detection target data group corresponding to each first detection target group is formed by first detection target identification data, first detection target type data and first detection target motion track data;
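The grouping and trajectory assembly of steps B23 and B24 might look like the sketch below; how single-frame detections of the same physical object are associated (here a precomputed track key per detection) is an assumption, since the embodiment leaves the association method open.

```python
from collections import defaultdict

def build_detection_target_groups(frames):
    """frames: list of (timestamp, detections); each detection carries a
    'track_key' linking detections of the same object, plus 'position' (x, y)
    in the ego frame and 'target_type'."""
    groups = defaultdict(list)
    for timestamp, detections in frames:            # step B23: group by object
        for det in detections:
            groups[det["track_key"]].append((timestamp, det))

    target_data_groups = []
    for index, members in enumerate(groups.values()):
        members.sort(key=lambda m: m[0])            # step B24: order track points by time
        target_data_groups.append({
            "id": f"target-{index}",                # first detection target identification data
            "type": members[0][1]["target_type"],   # first detection target type data
            "trajectory": [(t, d["position"]) for t, d in members],
        })
    return target_data_groups
```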
step 223, calculating the shortest driving distance between the current detection target and the automatic driving vehicle according to the first detection target motion trail data corresponding to each first detection target and the current position information of the automatic driving vehicle, and generating corresponding first detection target distance data; calculating the relative speed of the current detection target and the automatic driving vehicle according to the motion trail data of the first detection target corresponding to each first detection target, and generating corresponding first detection target relative speed data;
step 224, creating a corresponding first target object array for each first detection target; setting first identification data of the first target object array as first detection target identification data of a corresponding first detection target data group; setting first type data of a first target object array as first detection target type data of a corresponding first detection target data array; setting first distance data of a first target object array as corresponding first detection target distance data; setting first relative speed data of a first target object array as corresponding first detection target relative speed data; and forming a first target object set by all the first target object arrays which are completely set.
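Steps 223 and 224 could be realized as below, reusing the FirstTargetObject dataclass sketched earlier; the finite-difference speed estimate and the ego-frame track coordinates are assumptions, as the embodiment only requires that distance and relative speed be derived from the trajectory data.

```python
import math

def range_to_ego(point):
    """Range from an ego-frame track point (x, y) to the autonomous vehicle."""
    return math.hypot(point[0], point[1])

def shortest_distance(trajectory):
    """Step 223: first detection target distance data as the minimum range
    over all track points."""
    return min(range_to_ego(p) for _, p in trajectory)

def closing_speed(trajectory):
    """Step 223: relative speed from the first and last track points;
    a positive value means the target is closing on the vehicle."""
    (t0, p0), (tn, pn) = trajectory[0], trajectory[-1]
    if tn <= t0:
        return 0.0
    return (range_to_ego(p0) - range_to_ego(pn)) / (tn - t0)

def make_first_target_object(group):
    """Step 224: assemble one first target object array from a detection group."""
    return FirstTargetObject(
        identification=group["id"],
        target_type=group["type"],
        distance_m=shortest_distance(group["trajectory"]),
        relative_speed_mps=closing_speed(group["trajectory"]),
    )
```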
Step 3, when the environment visibility state data is in a second state, taking the shooting range of the vehicle body side camera as a second monitoring blind area; calling the vehicle body side camera to carry out real-time video shooting on the second monitoring blind area to generate a corresponding first video; performing per-frame target object identification and multi-frame target tracking processing on the first video to generate a corresponding first target object set;
the environment visibility state data being in the second state indicates that the visibility of the surroundings of the autonomous vehicle is good; the shooting range of the body side camera is therefore taken as the second monitoring blind area, and target detection is performed on the second monitoring blind area to obtain the corresponding first target object set;
the method specifically comprises the following steps: step 31, taking the shooting range of the vehicle body side camera as the second monitoring blind area;
step 32, calling a vehicle body side camera to carry out real-time video shooting on the second monitoring blind area to generate a corresponding first video;
when the vehicle body side camera is a left vehicle body side camera, the second monitoring blind area is a left rearview mirror blind area range, and the first video is video data obtained by shooting the left rearview mirror blind area range by the left vehicle body side camera; when the vehicle body side camera is a right vehicle body side camera, the second monitoring blind area is a right rearview mirror blind area range, and the first video is video data obtained by shooting the right rearview mirror blind area range by the right vehicle body side camera;
step 33, performing per-frame target object identification and multi-frame target tracking processing on the first video to generate a corresponding first target object set;
the method specifically comprises the following steps: step 331, performing frame image extraction processing on the first video to generate a plurality of first image frame data;
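Step 331's frame extraction could be sketched with OpenCV as below; the use of cv2 and the policy of keeping only the newest frames are assumptions for illustration, since the embodiment does not name a library.

```python
import cv2  # OpenCV, assumed here for illustration

def extract_latest_frames(video_source, count):
    """Step 331: decode the first video into image frames and keep the
    latest `count` of them as the first image frame data."""
    capture = cv2.VideoCapture(video_source)
    frames = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frames.append(frame)
    capture.release()
    return frames[-count:]
```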
step 332, performing multi-target detection and target motion trajectory tracking processing on the latest first image frame data with the specified number to obtain a plurality of second detection targets and corresponding second detection target data sets;
the second detection target data group comprises second detection target identification data, second detection target type data and second detection target motion track data; the second detected target motion trajectory data includes a plurality of second detected target trajectory point data;
the method specifically comprises the following steps: step C1, forming a latest first image frame data sequence from the latest specified number of first image frame data;
step C2, performing multi-target detection and target motion trajectory tracking processing on the two-dimensional image of the first image frame data sequence to obtain a plurality of second detection targets and corresponding second detection target data groups;
specifically, the method comprises the following steps: step C21, performing filtering and noise reduction processing on each first image frame data in the first image frame data sequence;
step C22, performing target detection processing on each first image frame data based on a well-trained two-dimensional image target identification model or a two-dimensional image semantic segmentation model to obtain a plurality of second single-frame detection targets;
here, each second single-frame detection target corresponds to one target position information and one target type information;
step C23, grouping second single-frame detection targets corresponding to the same target object in all first image frame data into a group, and marking as a second detection target group;
step C24, assigning a unique identifier to each second detection target group as corresponding second detection target identifier data; taking the target type information corresponding to each second detection target group as corresponding second detection target type data; extracting target position information of a second single-frame detection target of each second detection target group as second detection target track point data, and sequencing all the second detection target track point data according to time sequence to form corresponding second detection target motion track data; a second detection target data group corresponding to each second detection target group is formed by second detection target identification data, second detection target type data and second detection target motion track data;
step 333, calculating the shortest driving distance between the current detection target and the automatic driving vehicle according to the second detection target motion trail data corresponding to each second detection target and the current position information of the automatic driving vehicle, and generating corresponding second detection target distance data; calculating the relative speed of the current detection target and the automatic driving vehicle according to the second detection target motion trail data corresponding to each second detection target, and generating corresponding second detection target relative speed data;
step 334, creating a corresponding first target object array for each second detection target; setting first identification data of the first target object array as second detection target identification data of a corresponding second detection target data group; setting the first type data of the first target object array as second detection target type data of a corresponding second detection target data array; setting first distance data of the first target object array as corresponding second detection target distance data; setting first relative speed data of the first target object array as corresponding second detection target relative speed data; and forming a first target object set by all the first target object arrays which are completely set.
Step 4, performing target grading early warning processing according to the first target object set;
the method specifically comprises the following steps: step 41, in the first target object set, according to the first distance data and the first relative speed data in each first target object array, estimating the collision time between the current detection target and the automatic driving vehicle, and generating corresponding first collision time data; recording the first collision time data with the minimum value as the first shortest time;
here, the collision time between the current detection target and the autonomous vehicle is estimated using the correspondence collision time = distance / relative speed;
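A direct reading of this correspondence, with an assumed convention of an infinite collision time for targets that are not closing:

```python
def estimate_collision_time(distance_m, relative_speed_mps):
    """Step 41: first collision time data = distance / relative speed.
    A non-positive relative speed means the target is not approaching,
    so no finite collision time exists (float('inf') is an assumption)."""
    if relative_speed_mps <= 0.0:
        return float("inf")
    return distance_m / relative_speed_mps
```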
step 42, recording the first distance data with the minimum numerical value as a first shortest distance in the first target object set;
step 43, performing early warning level identification processing on the first shortest distance and the first shortest time according to a preset first early warning distance threshold, a preset second early warning distance threshold, a preset first collision early warning time and a preset second collision early warning time, and generating a corresponding first early warning level;
wherein the first early warning distance threshold is greater than the second early warning distance threshold; the first collision early warning time is greater than the second collision early warning time; the first early warning level comprises a first-level early warning level, a second-level early warning level, a third-level early warning level, a fourth-level early warning level and a fifth-level early warning level, with the early warning severity increasing progressively from the first level to the fifth level;
here, in the embodiment of the present invention, a two-dimensional coordinate system is established, and a plurality of early warning level quadrants are partitioned in the two-dimensional coordinate system by the first early warning distance threshold, the second early warning distance threshold, the first collision early warning time and the second collision early warning time, as shown in fig. 2, which is a quadrant schematic diagram of the early warning levels provided in the first embodiment of the present invention;
the method specifically comprises the following steps: step 431, when the first shortest distance is lower than the first early warning distance threshold but not lower than the second early warning distance threshold, and the first shortest time is not lower than the first collision early warning time, setting the first early warning level as a first-level early warning level;
step 432, when the first shortest distance is lower than the first early warning distance threshold but not lower than the second early warning distance threshold, and the first shortest time is lower than the first collision early warning time but not lower than the second collision early warning time, setting the first early warning level as a second-level early warning level;
step 433, when the first shortest distance is lower than the second early warning distance threshold and the first shortest time is not lower than the first collision early warning time, setting the first early warning level as a third-level early warning level;
step 434, when the first shortest distance is lower than the first early warning distance threshold but not lower than the second early warning distance threshold and the first shortest time is lower than the second collision early warning time, or when the first shortest distance is lower than the second early warning distance threshold and the first shortest time is lower than the first collision early warning time but not lower than the second collision early warning time, setting the first early warning level as a fourth-level early warning level;
step 435, when the first shortest distance is lower than the second early warning distance threshold and the first shortest time is lower than the second collision early warning time, setting the first early warning level as a fifth-level early warning level;
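Steps 431 to 435 map directly onto the quadrant logic of fig. 2; a sketch follows, where the concrete threshold values are assumptions, and a return value of 0 (no warning, when the first shortest distance is not below the first early warning distance threshold) is an added convention not stated in the embodiment.

```python
FIRST_DISTANCE_THRESHOLD = 10.0   # assumed, metres; greater than the second threshold
SECOND_DISTANCE_THRESHOLD = 5.0   # assumed, metres
FIRST_WARNING_TIME = 5.0          # assumed, seconds; greater than the second time
SECOND_WARNING_TIME = 2.0         # assumed, seconds

def first_warning_level(shortest_distance, shortest_time):
    """Return the first early warning level (1..5) per steps 431-435."""
    if shortest_distance >= FIRST_DISTANCE_THRESHOLD:
        return 0  # outside both distance thresholds: no warning (assumed)
    if shortest_distance >= SECOND_DISTANCE_THRESHOLD:
        # between the two distance thresholds
        if shortest_time >= FIRST_WARNING_TIME:
            return 1                      # step 431
        if shortest_time >= SECOND_WARNING_TIME:
            return 2                      # step 432
        return 4                          # step 434, first clause
    # below the second distance threshold
    if shortest_time >= FIRST_WARNING_TIME:
        return 3                          # step 433
    if shortest_time >= SECOND_WARNING_TIME:
        return 4                          # step 434, second clause
    return 5                              # step 435
```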
step 44, performing corresponding grade early warning processing according to the first early warning grade;
the method specifically comprises the following steps: if the first early warning level is the first-level, second-level, third-level, fourth-level or fifth-level early warning level, performing the corresponding first-level, second-level, third-level, fourth-level or fifth-level early warning processing.
When the early warning is specifically processed, multistage early warning can be carried out through buzzer devices of different frequencies and lighting devices of different colors, flashing frequencies and brightness: the higher the early warning level, the higher the buzzing frequency, the more conspicuous the color, the higher the flashing frequency and the higher the brightness. At the same time, the first early warning level is sent to the vehicle control module of the autonomous vehicle, so that the vehicle control module can make an adaptive vehicle control action in time according to the current early warning level.
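One way the graded actuation of step 44 might be wired up is sketched below; the buzzer frequencies, colors and brightness values are illustrative assumptions (the embodiment only requires that all of them intensify with the level), and the buzzer, light and vehicle-control interfaces are placeholders.

```python
WARNING_PROFILES = {  # assumed example profiles; severity grows with the level
    1: {"buzz_hz": 1,  "color": "yellow", "flash_hz": 1,  "brightness": 0.2},
    2: {"buzz_hz": 2,  "color": "yellow", "flash_hz": 2,  "brightness": 0.4},
    3: {"buzz_hz": 4,  "color": "orange", "flash_hz": 4,  "brightness": 0.6},
    4: {"buzz_hz": 8,  "color": "red",    "flash_hz": 8,  "brightness": 0.8},
    5: {"buzz_hz": 16, "color": "red",    "flash_hz": 16, "brightness": 1.0},
}

def issue_warning(level, buzzer, light, vehicle_control):
    """Step 44: drive the buzzer and light with the profile for this level
    and forward the level to the vehicle control module."""
    if level == 0:
        return  # no warning needed
    profile = WARNING_PROFILES[level]
    buzzer.set_frequency(profile["buzz_hz"])
    light.set(profile["color"], profile["flash_hz"], profile["brightness"])
    vehicle_control.notify_warning_level(level)
```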
Fig. 3 is a schematic structural diagram of an electronic device according to a second embodiment of the present invention. The electronic device may be a terminal device or a server for implementing the method of the embodiment of the present invention, or may be a terminal device or a server connected to the terminal device or the server for implementing the method of the embodiment of the present invention. As shown in fig. 3, the electronic device may include: a processor 301 (e.g., a CPU), a memory 302, a transceiver 303; the transceiver 303 is coupled to the processor 301, and the processor 301 controls the transceiving operation of the transceiver 303. Various instructions may be stored in memory 302 for performing various processing functions and implementing the processing steps described in the foregoing method embodiments. Preferably, the electronic device according to an embodiment of the present invention further includes: a power supply 304, a system bus 305, and a communication port 306. The system bus 305 is used to implement communication connections between the elements. The communication port 306 is used for connection communication between the electronic device and other peripherals.
The system bus 305 mentioned in fig. 3 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus. The communication interface is used to realize communication between the database access device and other equipment (such as a client, a read-write library and a read-only library). The memory may include Random Access Memory (RAM) and may also include non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a Graphics Processing Unit (GPU), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It should be noted that the embodiment of the present invention also provides a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to execute the method and the processing procedure provided in the above-mentioned embodiment.
The embodiment of the present invention further provides a chip for executing the instructions, where the chip is configured to execute the processing steps described in the foregoing method embodiment.
The embodiment of the invention provides a blind area monitoring and early warning method of an automatic driving vehicle, electronic equipment and a computer readable storage medium. According to the invention, the processing mode of blind area monitoring and early warning can be automatically switched based on the visibility, so that the blind area monitoring efficiency and accuracy of the automatic driving vehicle are ensured.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A blind area monitoring and early warning method for an autonomous vehicle, characterized in that the method is applied to an autonomous vehicle comprising a vehicle body side camera and a vehicle body side radar; the method comprises the following steps:
the autonomous vehicle obtaining environmental visibility state data;
when the environmental visibility state data is in a first state, taking the detection range of the vehicle body side radar as a first monitoring blind area; calling the vehicle body side radar to perform radar target object detection processing on the first monitoring blind area, and generating a corresponding first target object set;
when the environmental visibility state data is in a second state, taking the shooting range of the vehicle body side camera as a second monitoring blind area; calling the vehicle body side camera to carry out real-time video shooting of the second monitoring blind area, generating a corresponding first video; and performing frame-by-frame target object identification and multi-frame target tracking processing on the first video, generating a corresponding first target object set;
and carrying out target graded early warning processing according to the first target object set.
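As an illustrative aid only and not as part of the claims, the visibility-based switching of claim 1 could be sketched in Python roughly as follows; `VisibilityState`, `radar`, and `camera` are hypothetical stand-ins for the detection pipelines that claims 5 and 6 define:

```python
from enum import Enum

class VisibilityState(Enum):
    FIRST = 1   # low visibility: fall back to the vehicle body side radar
    SECOND = 2  # normal visibility: use the vehicle body side camera

def monitor_blind_area(visibility_state, radar, camera):
    """Switch the blind-area pipeline on the environmental visibility state.

    `radar` and `camera` are assumed to expose the detection routines of
    claims 5 and 6; both paths yield one list of first target object arrays.
    """
    if visibility_state is VisibilityState.FIRST:
        # Detection range of the body side radar = first monitoring blind area.
        return radar.detect_targets()
    # Shooting range of the body side camera = second monitoring blind area.
    first_video = camera.capture_video()
    return camera.track_targets(first_video)
```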
2. The blind area monitoring and early warning method of an autonomous vehicle according to claim 1, wherein before the autonomous vehicle obtains the environmental visibility state data, the method further comprises:
calling a vehicle-mounted camera at preset first time intervals to capture real-time images of the surrounding environment of the autonomous vehicle, generating a corresponding first environment image;
performing target object identification processing on the first environment image to obtain a plurality of first environment target objects, and counting the first environment target objects to generate a first quantity;
judging whether the first quantity is lower than a preset first quantity threshold; if the first quantity is lower than the first quantity threshold, setting the environmental visibility state data to the first state; and if the first quantity is not lower than the first quantity threshold, setting the environmental visibility state data to the second state.
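Continuing the illustrative sketch above (reusing its hypothetical `VisibilityState` enum), the visibility determination of claim 2 amounts to counting recognized objects and thresholding the count; the detector and the threshold value below are assumptions, not values taken from the application:

```python
def determine_visibility_state(first_environment_image, recognizer,
                               first_quantity_threshold=5):
    """Derive the environmental visibility state from one environment image.

    `recognizer` is a hypothetical target-object detector returning a list
    of recognized objects; the default threshold is illustrative only.
    """
    first_quantity = len(recognizer(first_environment_image))
    if first_quantity < first_quantity_threshold:
        # Few recognizable objects suggests fog, rain, or darkness.
        return VisibilityState.FIRST
    return VisibilityState.SECOND
```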
3. The blind area monitoring and early warning method of an autonomous vehicle according to claim 1 or 2, wherein:
the radar types of the vehicle body side radar comprise a millimeter wave radar, an ultrasonic radar and a laser radar;
the body side radar comprises a left body side radar and a right body side radar; the detection range of the vehicle body side radar comprises a left vehicle body side radar detection range and a right vehicle body side radar detection range; the left vehicle body side radar corresponds to the detection range of the left vehicle body side radar; the right body side radar corresponds to the detection range of the right body side radar;
the shooting range of the vehicle body side camera is a preset rearview mirror blind area range; the vehicle body side camera comprises a left vehicle body side camera and a right vehicle body side camera; the rearview mirror blind area range comprises a left rearview mirror blind area range and a right rearview mirror blind area range; the left vehicle body side camera corresponds to the left rearview mirror blind area range; and the right vehicle body side camera corresponds to the right rearview mirror blind area range;
the detection range of the vehicle body side radar is larger than the rearview mirror blind area range; specifically, the detection range of the left vehicle body side radar is larger than the left rearview mirror blind area range, and the detection range of the right vehicle body side radar is larger than the right rearview mirror blind area range.
4. The blind area monitoring and early warning method of an autonomous vehicle according to claim 1, wherein:
the first target object set comprises a plurality of first target object arrays;
the first target object array includes first identification data, first type data, first distance data, and first relative velocity data.
5. The blind area monitoring and early warning method of an autonomous vehicle according to claim 4, wherein the calling of the vehicle body side radar to perform radar target object detection processing on the first monitoring blind area and generate a corresponding first target object set specifically comprises:
calling the vehicle body side radar to perform radar scanning of the first monitoring blind area at a preset first radar detection frequency, generating corresponding first radar frame data;
performing multi-target detection and target motion trajectory tracking processing on the latest specified number of first radar frame data to obtain a plurality of first detection targets and corresponding first detection target data groups; the first detection target data group comprises first detection target identification data, first detection target type data, and first detection target motion trajectory data; the first detection target motion trajectory data comprises a plurality of first detection target trajectory point data;
calculating, from the first detection target motion trajectory data corresponding to each first detection target and the current position information of the autonomous vehicle, the shortest driving distance between the current detection target and the autonomous vehicle, generating corresponding first detection target distance data; and calculating, from the first detection target motion trajectory data corresponding to each first detection target, the relative speed of the current detection target with respect to the autonomous vehicle, generating corresponding first detection target relative speed data;
creating a corresponding first target object array for each first detection target; setting the first identification data of the first target object array to the first detection target identification data of the corresponding first detection target data group; setting the first type data of the first target object array to the first detection target type data of the corresponding first detection target data group; setting the first distance data of the first target object array to the corresponding first detection target distance data; setting the first relative speed data of the first target object array to the corresponding first detection target relative speed data; and composing the first target object set from all the fully set first target object arrays.
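The distance and relative-speed computations of claim 5 reduce to geometry over the tracked trajectory points. A minimal sketch, assuming planar (x, y) track points, a fixed radar frame interval, and at least two points per track:

```python
import math

def distance_and_relative_speed(track_points, ego_position, frame_dt):
    """Derive first detection target distance and relative speed data.

    `track_points` is a time-ordered list of (x, y) trajectory points for
    one detected target, `ego_position` the vehicle's current (x, y), and
    `frame_dt` the radar frame interval in seconds; all names illustrative.
    """
    # Range from the ego vehicle to every tracked point of the target.
    ranges = [math.dist(p, ego_position) for p in track_points]
    shortest_distance = min(ranges)

    # Relative speed from the range change over the last two frames;
    # a negative value means the target is closing in on the vehicle.
    relative_speed = (ranges[-1] - ranges[-2]) / frame_dt
    return shortest_distance, relative_speed
```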
6. The blind area monitoring and early warning method of an autonomous vehicle according to claim 4, wherein the performing of frame-by-frame target object identification and multi-frame target tracking processing on the first video to generate the corresponding first target object set specifically comprises:
performing frame image extraction processing on the first video to generate a plurality of first image frame data;
performing multi-target detection and target motion trajectory tracking processing on the latest specified number of first image frame data to obtain a plurality of second detection targets and corresponding second detection target data groups; the second detection target data group comprises second detection target identification data, second detection target type data, and second detection target motion trajectory data; the second detection target motion trajectory data comprises a plurality of second detection target trajectory point data;
calculating, from the second detection target motion trajectory data corresponding to each second detection target and the current position information of the autonomous vehicle, the shortest driving distance between the current detection target and the autonomous vehicle, generating corresponding second detection target distance data; and calculating, from the second detection target motion trajectory data corresponding to each second detection target, the relative speed of the current detection target with respect to the autonomous vehicle, generating corresponding second detection target relative speed data;
creating a corresponding first target object array for each second detection target; setting the first identification data of the first target object array to the second detection target identification data of the corresponding second detection target data group; setting the first type data of the first target object array to the second detection target type data of the corresponding second detection target data group; setting the first distance data of the first target object array to the corresponding second detection target distance data; setting the first relative speed data of the first target object array to the corresponding second detection target relative speed data; and composing the first target object set from all the fully set first target object arrays.
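The frame-extraction step of claim 6 can be sketched with OpenCV, assuming the first video is available as a recorded clip (in a deployed system the frames would more likely be consumed from a live camera stream):

```python
import cv2  # OpenCV; assumed available for frame extraction

def extract_latest_frames(video_path, specified_number=30):
    """Pull the latest specified number of image frames out of a video."""
    capture = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = capture.read()  # returns (success flag, BGR image)
        if not ok:
            break
        frames.append(frame)
    capture.release()
    return frames[-specified_number:]
```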
7. The blind area monitoring and early warning method of an autonomous vehicle according to claim 4, wherein the performing of target graded early warning processing according to the first target object set specifically comprises:
in the first target object set, estimating, from the first distance data and the first relative speed data of each first target object array, the collision time between the current detection target and the autonomous vehicle, generating corresponding first collision time data; and recording the first collision time data with the smallest value as a first shortest time;
in the first target object set, recording the first distance data with the smallest value as a first shortest distance;
performing early warning level identification processing on the first shortest distance and the first shortest time according to a preset first early warning distance threshold, a preset second early warning distance threshold, a preset first collision early warning time, and a preset second collision early warning time, generating a corresponding first early warning level; the first early warning distance threshold is greater than the second early warning distance threshold; the first collision early warning time is greater than the second collision early warning time; the first early warning level comprises a first-stage, a second-stage, a third-stage, a fourth-stage, and a fifth-stage early warning level, with the early warning severity increasing progressively from the first stage to the fifth stage;
and performing corresponding level early warning processing according to the first early warning level.
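Reusing the hypothetical `TargetObject` record sketched after claim 4, the first shortest time and first shortest distance of claim 7 might be derived as follows, estimating collision time as distance over closing speed:

```python
def shortest_distance_and_time(target_objects):
    """Compute the first shortest distance and first shortest time.

    Each element of `target_objects` carries the first distance data and
    first relative speed data; a non-positive closing speed means the
    target is not approaching, so its collision time is taken as infinite.
    """
    first_collision_times = []
    for target in target_objects:
        closing_speed = -target.relative_speed_mps  # positive when approaching
        if closing_speed > 0:
            first_collision_times.append(target.distance_m / closing_speed)
        else:
            first_collision_times.append(float("inf"))
    first_shortest_time = min(first_collision_times)
    first_shortest_distance = min(t.distance_m for t in target_objects)
    return first_shortest_distance, first_shortest_time
```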
8. The blind area monitoring and early warning method of an autonomous vehicle according to claim 7, wherein the performing of early warning level identification processing on the first shortest distance and the first shortest time according to the preset first early warning distance threshold, second early warning distance threshold, first collision early warning time, and second collision early warning time to generate the corresponding first early warning level specifically comprises:
when the first shortest distance is lower than the first early warning distance threshold but not lower than the second early warning distance threshold, and the first shortest time is not lower than the first collision early warning time, setting the first early warning level to the first-stage early warning level;
when the first shortest distance is lower than the first early warning distance threshold but not lower than the second early warning distance threshold, and the first shortest time is lower than the first collision early warning time but not lower than the second collision early warning time, setting the first early warning level to the second-stage early warning level;
when the first shortest distance is lower than the second early warning distance threshold and the first shortest time is not lower than the first collision early warning time, setting the first early warning level to the third-stage early warning level;
when the first shortest distance is lower than the first early warning distance threshold but not lower than the second early warning distance threshold and the first shortest time is lower than the second collision early warning time, or when the first shortest distance is lower than the second early warning distance threshold and the first shortest time is lower than the first collision early warning time but not lower than the second collision early warning time, setting the first early warning level to the fourth-stage early warning level;
and when the first shortest distance is lower than the second early warning distance threshold and the first shortest time is lower than the second collision early warning time, setting the first early warning level to the fifth-stage early warning level.
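The five cases of claim 8 translate directly into a decision function. A minimal sketch, where the threshold defaults (first/second early warning distance thresholds d1 > d2, first/second collision early warning times t1 > t2) are illustrative values not given in the application:

```python
def first_early_warning_level(d, t, d1=10.0, d2=5.0, t1=4.0, t2=2.0):
    """Map the first shortest distance `d` (metres) and first shortest
    time `t` (seconds) to a warning level 1..5 per the cases of claim 8;
    0 means neither distance threshold is crossed and no warning is raised.
    """
    if d2 <= d < d1:       # between the two distance thresholds
        if t >= t1:
            return 1       # first-stage early warning level
        if t2 <= t < t1:
            return 2       # second-stage early warning level
        return 4           # t < t2: fourth-stage early warning level
    if d < d2:             # inside the second (inner) distance threshold
        if t >= t1:
            return 3       # third-stage early warning level
        if t2 <= t < t1:
            return 4       # fourth-stage early warning level
        return 5           # t < t2: fifth-stage early warning level
    return 0               # d >= d1: outside the warning zone
```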
9. An electronic device, comprising: a memory, a processor, and a transceiver;
the processor is configured to be coupled with the memory, and to read and execute the instructions in the memory so as to implement the method steps of any one of claims 1 to 8;
the transceiver is coupled to the processor, and the processor controls the transceiver to transmit and receive messages.
10. A computer-readable storage medium having stored thereon computer instructions which, when executed by a computer, cause the computer to perform the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111062752.7A CN113808437A (en) | 2021-09-10 | 2021-09-10 | Blind area monitoring and early warning method for automatic driving vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113808437A (en) | 2021-12-17 |
Family
ID=78940801
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111062752.7A Pending CN113808437A (en) | 2021-09-10 | 2021-09-10 | Blind area monitoring and early warning method for automatic driving vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113808437A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114734918A (en) * | 2022-04-28 | 2022-07-12 | 重庆长安汽车股份有限公司 | Blind area detection and early warning method, system and storage medium |
CN114966736A (en) * | 2022-05-26 | 2022-08-30 | 苏州轻棹科技有限公司 | Processing method for predicting target speed based on point cloud data |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150002284A1 (en) * | 2013-07-01 | 2015-01-01 | Fuji Jukogyo Kabushiki Kaisha | Driving assist controller for vehicle |
US20170166123A1 (en) * | 2015-12-10 | 2017-06-15 | International Business Machines Corporation | Vehicle accident avoidance system |
US10147320B1 (en) * | 2017-12-27 | 2018-12-04 | Christ G. Ellis | Self-driving vehicles safety system |
CN113276769A (en) * | 2021-04-29 | 2021-08-20 | 深圳技术大学 | Vehicle blind area anti-collision early warning system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20211217 |