CN114120275A - Automatic driving obstacle detection and recognition method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN114120275A (application number CN202111370834.8A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- obstacle
- detection
- result
- automatic driving
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The application relates to the technical field of automatic driving and provides a method and device for detecting and identifying automatic driving obstacles, an electronic device, and a storage medium. The method comprises the following steps: detecting an obstacle in front of the vehicle and performing first deceleration control according to the detection result; acquiring data on the obstacle in front of the vehicle after the first deceleration, performing fusion modeling on the data acquisition result and the detection result, and performing second deceleration control according to the fusion modeling result; and performing detection analysis on the obstacle in front of the vehicle after the second deceleration, and keeping or changing the running state of the vehicle according to the detection analysis result. By detecting and identifying the obstacle in front of the vehicle in hierarchical stages, the vehicle can still detect and identify obstacles under poor observation conditions and adjust its speed or running state in time, which not only reduces the wear caused by emergency braking but also ensures that the vehicle avoids obstacles effectively.
Description
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a method and an apparatus for detecting and identifying an automatic driving obstacle, an electronic device, and a storage medium.
Background
Driving a mining truck is dangerous. In an open-pit mine environment, replacing manual driving with reliable automatic driving technology can effectively reduce the harm that dust and high temperature cause to drivers' health, greatly improve the operating efficiency of mine trucks (mining trucks), and reduce labor costs. A mine truck is large, heavily loaded, and has a long braking distance. For example, a Xiangtan Heavy Industry MCC600D mining dump truck has a dead weight of 180 tons and a total weight of 600 tons under full load, and its braking distance when traveling at 50 km/h under full load is 100 m. To maintain a safe collision distance, manually driving such a truck requires an experienced driver who can observe far enough ahead to bypass the obstacle or brake in advance; accordingly, automatic driving requires sensors with an obstacle detection range of at least 100 m. Obstacle detection is an essential link in automatic driving, and its accuracy directly determines the level of automation and even the usability of an unmanned system.
In the prior art, obstacle detection and recognition algorithms applied to automatic driving of automobiles are based on lidar and vision, and decision control is applied to the vehicle after an obstacle is detected. However, such methods are suited to road automobiles, not to mine trucks. Under the poor observation conditions of a mining area, for example under dust interference, the detection range of a conventional lidar sensor is limited; by the time the lidar detects an obstacle, the distance between the mine truck and the obstacle is already very small. This causes frequent emergency braking of the truck, which not only produces wear but also easily leads to situations where the obstacle cannot be avoided.
In view of the above problems, no effective technical solution exists at present.
Disclosure of Invention
The application aims to provide an automatic driving obstacle detection and identification method and device, an electronic device, and a storage medium, so that a vehicle can still detect and identify obstacles ahead under poor observation conditions and adjust its speed or running state in time, which not only reduces the wear caused by emergency braking but also ensures that the vehicle avoids obstacles effectively.
In a first aspect, the present application provides an automatic driving obstacle detection and identification method, for detecting and identifying an obstacle in front of a vehicle, including the following steps:
detecting an obstacle in front of the vehicle, and performing first deceleration control according to a detection result;
carrying out data acquisition on an obstacle in front of the vehicle after the first deceleration, carrying out fusion modeling on a data acquisition result and the detection result, and carrying out second deceleration control according to a fusion modeling result;
and detecting and analyzing the obstacle in front of the vehicle after the second deceleration, and keeping or changing the running state of the vehicle according to the detection and analysis result.
The automatic driving obstacle detection and identification method provided by the application detects and identifies the obstacle in front of the vehicle in hierarchical stages, so that the vehicle can still detect and identify obstacles under poor observation conditions and adjust its speed or running state in time, which not only reduces the wear caused by emergency braking but also ensures that the vehicle avoids obstacles effectively.
Optionally, in the method for detecting and identifying an automatic driving obstacle according to the embodiment of the present application, the detecting and analyzing an obstacle in front of the vehicle after the second deceleration includes:
acquiring an enhanced image of an obstacle in front of the vehicle;
extracting multi-level depth features of the enhanced image;
and carrying out detection analysis on the multi-level depth features.
By performing detection analysis on the obstacle in front of the vehicle, the application determines the material of the obstacle and adjusts the vehicle state in time according to that material, preventing the obstacle from affecting the vehicle's travel.
Optionally, in the automatic driving obstacle detection and recognition method according to the present application, after the extracting the multi-level depth features of the enhanced image and before the performing the detection analysis on the multi-level depth features, the method further includes the following steps:
and fusing the multi-level depth features to obtain semantic features.
By fusing the multi-level depth features, the application reduces the size of the feature map while enriching the information it contains, thereby improving the accuracy of the detection analysis.
Optionally, in the automatic driving obstacle detection and identification method according to the present application, the performing of the second deceleration control according to the fusion modeling result includes the following steps:
and judging whether to perform second deceleration control according to whether the fusion modeling result is within a preset value range.
Optionally, in the automatic driving obstacle detection and identification method according to the present application, the fusion modeling of the data acquisition result and the detection result includes the following steps:
acquiring geographic reference information of the vehicle and relative information of the working environment of the vehicle relative to a coordinate system of a laser scanner;
and performing coordinate conversion on the geographic reference information and the relative information.
Optionally, in the automatic driving obstacle detection and identification method according to the present application, the coordinate conversion of the geographic reference information and the relative information is calculated by the following formula:

$$X_P^E = X_{IMU}^E + R_I^E\left(l^I + R_S^I\, X_P^S\right)$$

where $X_P^E$ is the coordinate of the laser scanning point P in the geocentric rectangular coordinate system; $X_{IMU}^E$ is the coordinate of the IMU/GNSS center in the geocentric rectangular coordinate system, output from the position measured by the IMU/GNSS system; $R_I^E$ is the rotation matrix from the IMU/GNSS coordinate system to the geocentric rectangular coordinate system, formed from the attitude measured by the IMU/GNSS system; $l^I$ is the component, expressed in the IMU/GNSS coordinate system, of the offset from the scanning center of the laser scanner to the IMU/GNSS center, whose initial value is obtained by manual measurement; $R_S^I$ is the rotation matrix from the laser scanner coordinate system to the IMU/GNSS coordinate system, determined by the specific installation axes; and $X_P^S$ is the coordinate of the scanning point in the laser scanner coordinate system, output by the laser scanner.
Optionally, in the automatic driving obstacle detection and recognition method according to the present application, before performing detection and analysis on an obstacle in front of the vehicle after the second deceleration, the method further includes the following steps:
and adjusting the position of equipment for detecting and analyzing the obstacle in front of the vehicle.
In a second aspect, the present application further provides an automatic driving obstacle detection and recognition device for detecting and recognizing an obstacle in front of a vehicle, the device including:
the detection module is used for detecting an obstacle in front of the vehicle and carrying out first-time deceleration control according to a detection result;
the data acquisition module is used for acquiring data of an obstacle in front of the vehicle after the vehicle decelerates for the first time, fusing and modeling a data acquisition result and the detection result, and performing deceleration control for the second time according to a fused and modeled result;
and the detection and analysis module is used for detecting and analyzing the obstacle in front of the vehicle after the second deceleration and keeping or changing the running state of the vehicle according to the detection and analysis result.
The automatic driving obstacle detection and recognition device provided by the application detects and identifies the obstacle in front of the vehicle in hierarchical stages, which not only improves the vehicle's ability to detect and identify obstacles under poor observation conditions but also allows the vehicle speed to be adjusted, reducing the wear caused by emergency braking.
In a third aspect, the present application provides an electronic device comprising a processor and a memory, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, perform the steps of the method as provided in the first aspect.
In a fourth aspect, the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as provided in the first aspect above.
As can be seen from the above, the automatic driving obstacle detection and identification method, device, electronic device, and storage medium provided by the application detect an obstacle in front of the vehicle and perform first deceleration control according to the detection result; acquire data on the obstacle after the first deceleration, perform fusion modeling on the data acquisition result and the detection result, and perform second deceleration control according to the fusion modeling result; and perform detection analysis on the obstacle after the second deceleration, keeping or changing the running state of the vehicle according to the detection analysis result. By detecting and identifying the obstacle in front of the vehicle in hierarchical stages, the vehicle can still detect and identify obstacles under poor observation conditions and adjust its speed or running state in time, which not only reduces the wear caused by emergency braking but also ensures that the vehicle avoids obstacles effectively.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
Fig. 1 is a flowchart of an automatic driving obstacle detection and identification method according to an embodiment of the present application.
Fig. 2 is a fusion modeling schematic diagram of an automatic driving obstacle detection and identification method provided in the embodiment of the present application.
Fig. 3 is a schematic image of the road surface in front of a vehicle before three-dimensional modeling according to an embodiment of the present application.
Fig. 4 is a schematic image of the road surface in front of a vehicle after three-dimensional modeling according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an automatic driving obstacle detection and recognition device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Under the poor observation conditions of a mining area, the detection range of a conventional lidar sensor is limited and the obstacle detection distance is greatly reduced; moreover, short-range detection causes frequent emergency braking of the mine truck, which not only produces wear but also easily leads to situations where the obstacle cannot be avoided. On this basis, the application provides an automatic driving obstacle detection and identification method and device, an electronic device, and a storage medium.
Referring to fig. 1, fig. 1 is a flowchart illustrating an automatic driving obstacle detection and identification method according to some embodiments of the present disclosure. The automatic driving obstacle detection and identification method is used for detecting and identifying obstacles in front of a vehicle and controlling the vehicle, and comprises the following steps:
s10, detecting obstacles in front of the vehicle, and performing first deceleration control according to the detection result;
s20, acquiring data of the obstacle in front of the vehicle after the vehicle decelerates for the first time, performing fusion modeling on the data acquisition result and the detection result, and performing second-time deceleration control according to the fusion modeling result;
and S30, detecting and analyzing the obstacle in front of the vehicle after the second deceleration, and keeping or changing the vehicle running state according to the detection analysis result, where keeping the running state means that the vehicle continues forward at its current speed, and changing the running state means that the vehicle detours or stops.
In step S10, a millimeter-wave radar may be used to detect an obstacle in front of the vehicle. Millimeter waves penetrate fog, smoke, and dust well and work around the clock in almost all weather (except heavy rain), so the radar can sense whether an obstacle exists at long range in front of the vehicle (300 meters ahead). It should be noted that the millimeter-wave radar detection covers both static obstacles and the detection and tracking of dynamic obstacles. Millimeter waves are emitted ahead of the vehicle, and whether an obstacle exists is judged according to whether the radar receives an echo (the echo being millimeter-wave point cloud data), so that the first deceleration control can be applied in time: if there is an obstacle in front of the vehicle, the first deceleration control decelerates the vehicle; if there is no obstacle, the first deceleration control keeps the vehicle in its current running state without decelerating.
In step S20, a lidar may be used to collect data on the obstacle in front of the vehicle after the first deceleration. Detection signals are transmitted toward the obstacle and the signals reflected from it are received; the reflected signals form lidar point cloud data. Fusing and modeling the lidar point cloud with the millimeter-wave point cloud yields the fusion modeling result, namely the parameters of the obstacle, including its distance, direction, height, speed, attitude, shape, and so on. The second deceleration control is then applied according to a comparison between the obstacle parameters and a preset value range: if the difference between the parameters and the preset range is within a preset allowance, the vehicle keeps its current running state and continues without a second deceleration; if not, the second deceleration control decelerates the vehicle.
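As a rough sketch of this fusion step (the patent gives no code; the function names are illustrative, and both point clouds are assumed to already be in a common vehicle frame), coarse obstacle extents could be derived from the merged clouds and compared to the preset range:

```python
def fused_extents(mmwave_pts, lidar_pts):
    """Merge millimeter-wave and lidar points ((x, y, z) tuples in a common
    vehicle frame) and return the axis-aligned extents (dx, dy, dz) of the
    obstacle -- a crude stand-in for the fusion modeling result."""
    pts = list(mmwave_pts) + list(lidar_pts)
    xs, ys, zs = zip(*pts)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def needs_second_deceleration(extents, preset=(0.5, 0.5, 0.5)):
    """Trigger the second deceleration when any extent exceeds the preset
    allowance (the 0.5 m preset follows the worked example later in the text)."""
    return any(e > p for e, p in zip(extents, preset))
```

A small obstacle whose extents all fall inside the preset leaves the running state unchanged; a larger one triggers the second deceleration.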
In step S30, a multispectral camera may be used to detect and analyze the obstacle in front of the vehicle after the second deceleration. A multispectral camera acquires spatial and spectral information of an object simultaneously and can comprehensively detect and identify its characteristics, in particular accurately identifying obstacle types and road surface conditions. Even when large areas of the road surface are covered by standing water or ice that look visually similar, the camera can still exploit the different reflectivities of the various surface conditions at different wavelengths to accurately segment and identify the condition of the whole road surface.
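Taken together, the three-stage decision logic of steps S10-S30 can be sketched as follows (a minimal illustration, not the patent's implementation; the function names, types, and the volume threshold are all hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObstacleModel:
    volume_m3: float  # from the fused radar/lidar point clouds

def control_cycle(radar_detects_obstacle: bool,
                  fused_model: Optional[ObstacleModel],
                  material_blocks_travel: Optional[bool],
                  volume_threshold_m3: float = 0.125) -> str:
    """Hierarchical decision: each later stage only has data once the
    previous deceleration has brought the obstacle into that sensor's range.
    The 0.125 m^3 threshold is illustrative (a 0.5 m cube)."""
    if not radar_detects_obstacle:
        return "keep speed"                  # S10: nothing ahead
    if fused_model is None:
        return "first deceleration"          # S10: slow down for the lidar stage
    if fused_model.volume_m3 <= volume_threshold_m3:
        return "keep speed"                  # S20: obstacle small enough
    if material_blocks_travel is None:
        return "second deceleration"         # S20: slow down for the camera stage
    return "detour or stop" if material_blocks_travel else "proceed"  # S30
```

Each call represents one pass through the pipeline as new sensor results arrive.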
In some embodiments, step S20 includes the following sub-steps:
acquiring geographic reference information of a vehicle and relative information of a vehicle working environment relative to a laser scanner coordinate system;
and carrying out coordinate conversion on the geographic reference information and the relative information.
The geographic reference information of the vehicle can be acquired directly through an IMU/GNSS system, which yields point cloud data; the relative information of the vehicle's working environment with respect to the laser scanner coordinate system can be collected through the lidar, which also yields point cloud data. Each frame of point cloud data carrying the geographic reference information and the relative information is converted from the IMU/GNSS coordinate system and the laser scanner coordinate system to the geocentric rectangular coordinate system through an interpolation algorithm, thereby realizing the coordinate conversion of the geographic reference information and the relative information.
Referring to fig. 2, fig. 2 is a schematic diagram of the fusion modeling of the automatic driving obstacle detection and identification method according to an embodiment of the present application. Specifically, the coordinate conversion is calculated by the following formula:

$$X_P^E = X_{IMU}^E + R_I^E\left(l^I + R_S^I\, X_P^S\right)$$

where $X_P^E$ is the coordinate of the laser scanning point P in the geocentric rectangular coordinate system (E system for short); $X_{IMU}^E$ is the coordinate of the IMU/GNSS center in the geocentric rectangular coordinate system, i.e. the geographic reference information of the vehicle, output from the position measured by the IMU/GNSS system; $R_I^E$ is the rotation matrix from the IMU/GNSS coordinate system (I system for short) to the geocentric rectangular coordinate system, formed from the attitude measured by the IMU/GNSS system; $l^I$ is the component, expressed in the I system, of the offset from the scanning center of the laser scanner to the IMU/GNSS center, whose initial value is obtained by manual measurement; $R_S^I$ is the rotation matrix from the laser scanner coordinate system to the IMU/GNSS coordinate system, determined by the specific installation axes, that is, by the installation position of the laser scanner; and $X_P^S$ is the coordinate of the scanning point in the laser scanner coordinate system (S system for short), i.e. the relative information of the vehicle's working environment with respect to the laser scanner coordinate system, output by the laser scanner.
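A minimal numeric sketch of this conversion in pure Python (the rotation matrices are given as 3x3 row lists; the formula follows the conventional direct-georeferencing form matching the quantities described above, since the original equation is not reproduced in the text):

```python
def mat_vec(R, v):
    """Multiply a 3x3 rotation matrix (list of rows) by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def vec_add(a, b):
    return [a[i] + b[i] for i in range(3)]

def georeference(x_p_s, R_s_to_i, lever_arm_i, R_i_to_e, x_imu_e):
    """Scanner point -> geocentric frame:
    X_P^E = X_IMU^E + R_I^E * (l^I + R_S^I * X_P^S).
    x_p_s:      scan point in the laser scanner (S) frame
    lever_arm_i: scanner-center-to-IMU/GNSS-center offset in the I frame
    x_imu_e:    IMU/GNSS center position in the geocentric (E) frame"""
    in_imu = vec_add(mat_vec(R_s_to_i, x_p_s), lever_arm_i)
    return vec_add(x_imu_e, mat_vec(R_i_to_e, in_imu))
```

With identity rotations and a lever arm of half a meter along x, a scan point at (1, 2, 3) from a platform at (100, 200, 300) lands at (101.5, 202, 303).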
Preferably, after the coordinate conversion, gray-level rendering can be applied to the point cloud through a vision sensor to construct high-precision three-dimensional point cloud information carrying color, road, and surrounding-environment information, from which the parameters of the obstacle can be clearly obtained.
In other embodiments, step S20 further includes the following step: detecting the road surface in front of the vehicle. This specifically comprises acquiring road information parameters ahead of the vehicle and changing or keeping the running state of the vehicle according to a comparison between the road information parameters and preset parameters. The road information parameters include potholes, the distances between them, and their directions and shapes. The parameters are compared with the preset parameters: if the difference is within a preset allowance, the vehicle keeps its running state, i.e. continues forward at the current speed; if not, the running state is changed, i.e. the vehicle detours or stops.
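The comparison against preset parameters could look like the following sketch (all thresholds and parameter names are illustrative, not values from the patent):

```python
def road_decision(potholes, max_depth_m=0.3, max_width_m=1.0):
    """Each pothole is a (depth_m, width_m) pair. Keep the running state only
    if every pothole stays within the preset allowance; otherwise the vehicle
    detours or stops (which of the two is a separate planning decision)."""
    for depth, width in potholes:
        if depth > max_depth_m or width > max_width_m:
            return "detour or stop"
    return "keep running state"
```

A clear road, or one with only shallow, narrow potholes, leaves the running state unchanged.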
In this embodiment, the lidar can also be used to detect the road surface in front of the vehicle; that is, the lidar can detect the obstacle and the road surface simultaneously, which reduces manufacturing cost. When the lidar detects the road surface, the image shown in fig. 3 is obtained first; after three-dimensional modeling, point cloud display, and gray-level coloring, the image shown in fig. 4 is obtained. Comparing fig. 3 and fig. 4 shows that potholes in the road surface are much clearer after three-dimensional modeling. Of course, other devices may be used to acquire the road information parameters in front of the vehicle; the above is only one embodiment of the application and should not be taken as limiting.
In some embodiments, step S30 includes the following sub-steps:
acquiring an enhanced image of an obstacle in front of a vehicle;
extracting multi-level depth features of the enhanced image;
and detecting and analyzing the multi-level depth characteristics to obtain a detection and analysis result.
The enhanced image of the obstacle in front of the vehicle can be acquired by photographing the obstacle with the multispectral camera. A YOLO model augmented with a residual neural network and a convolutional neural network is used to extract the multi-level depth features of the enhanced image; such a model extracts object positions while extracting object features, i.e. localization and classification are realized within the same convolutional network, so the class probabilities and coordinates of obstacles are obtained directly.
Preferably, after the multi-level depth features of the enhanced image are extracted, they are fused to obtain semantic features, and the semantic features are then detected and analyzed. Because the effective information in the picture decreases as the number of levels increases, the semantic features obtained by fusing the multi-level depth features include the depth features of every level; this improves information richness, makes the information on small objects in particular more comprehensive, and yields higher accuracy when analyzing the material category of the obstacle.
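The patent does not specify the fusion mechanism; one common scheme consistent with "the size of the feature map reduces" is to pool every level down to the deepest level's resolution and concatenate the results, sketched here on 1-D toy feature maps:

```python
def avg_pool2(fmap):
    """Halve a 1-D feature map by averaging adjacent pairs."""
    return [(fmap[i] + fmap[i + 1]) / 2 for i in range(0, len(fmap) - 1, 2)]

def fuse_levels(levels):
    """levels[0] is the shallowest (largest) map, levels[-1] the deepest.
    Pool each level to the deepest resolution, then concatenate, so the fused
    semantic feature is small but carries information from every level."""
    target = len(levels[-1])
    fused = []
    for fmap in levels:
        while len(fmap) > target:
            fmap = avg_pool2(fmap)
        fused.extend(fmap)
    return fused
```

In a real detector the maps are 2-D tensors and pooling acts per channel, but the shape of the idea is the same.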
Preferably, before the enhanced image of the obstacle in front of the vehicle is acquired, the position of the device that detects and analyzes the obstacle may be adjusted by an adjusting device arranged on the vehicle; that is, the angle of the shooting device is adjusted before the image is shot, so that the shooting device can photograph the obstacle in front of the vehicle completely and clearly.
The automatic driving obstacle detection and identification method of the present application is explained in detail below, taking a Xiangtan Heavy Industry MCC600D heavy mining dump truck, with a dead weight of 180 tons and a total weight of 600 tons under full load, as an example.
The Xiangtan Heavy Industry MCC600D heavy mining dump truck runs at 50 km/h under full load. First, a millimeter-wave radar detects whether an obstacle exists at long range (300 meters) in front of the vehicle; after an obstacle is detected, the vehicle is controlled to decelerate for the first time, reducing the speed to 25-30 km/h. Second, a lidar collects point cloud data of the obstacle at medium range (100-150 meters) in front of the vehicle; the lidar point cloud and the millimeter-wave radar point cloud are fused, and the volume of the obstacle is identified from the fusion result. If the volume is larger than a preset volume (such as 0.5 m x 0.5 m), the vehicle decelerates a second time to 5-10 km/h; otherwise, no second deceleration is performed. Finally, while the vehicle decelerates the second time, the adjusting device adjusts the scanning angle of the multispectral camera, and a material analysis of the obstacle is performed at short range (about 50 meters) at low speed (5-10 km/h). The vehicle is then controlled according to the result of the spectral analysis: if the material of the obstacle affects normal driving, the vehicle is controlled to detour or stop; if not, the vehicle is controlled to continue forward.
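The staged flow above can be sketched as a small decision function. The thresholds follow the example (300 m radar range, 25-30 km/h and 5-10 km/h speed bands); treating the preset volume as a 0.5 m cube, the function name, and the "top of the band" return convention are assumptions for illustration.

```python
# Illustrative sketch of the three-stage detect/decelerate flow. The
# thresholds follow the worked example; treating the preset volume as a
# 0.5 m cube and the "top of the band" return convention are assumptions.

def staged_response(obstacle_range_m, obstacle_volume_m3, speed_kmh):
    """Return the commanded speed (km/h, top of each band) for the stages."""
    if obstacle_range_m is None or obstacle_range_m > 300:
        return speed_kmh                   # nothing in range: keep cruising
    speed_kmh = min(speed_kmh, 30)         # stage 1: mmWave hit, first deceleration
    if obstacle_volume_m3 > 0.5 ** 3:      # stage 2: fused cloud says obstacle is big
        speed_kmh = min(speed_kmh, 10)     # second deceleration for spectral analysis
    return speed_kmh

# A large obstacle 250 m ahead triggers both reductions: 50 -> 30 -> 10 km/h.
commanded = staged_response(250, 0.2, 50)
```

The third stage (material analysis and the detour/stop decision) would act on the multispectral result rather than on speed alone, so it is left out of this sketch.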
Therefore, according to the automatic driving obstacle detection and identification method provided by the embodiments of the present application, an obstacle in front of the vehicle is detected and the vehicle is decelerated a first time according to the detection result; data of the obstacle are collected after the first deceleration, the data acquisition result and the detection result are fused and modeled, and a second deceleration is performed according to the fusion modeling result; the obstacle is then detected and analyzed after the second deceleration, and the vehicle driving state is maintained or changed according to the detection and analysis result. The obstacle in front of the vehicle is thus detected and identified in a hierarchical manner, so that the vehicle can still detect and identify the obstacle under poor observation conditions and adjust the vehicle speed or driving state in time; this not only reduces the wear caused by emergency braking but also ensures that the vehicle avoids the obstacle effectively.
Referring to fig. 5, fig. 5 is a diagram illustrating an automatic driving obstacle detection and recognition apparatus for detecting and recognizing an obstacle in front of a vehicle, the automatic driving obstacle detection and recognition apparatus being integrated in a rear end control device of the vehicle in the form of a computer program according to some embodiments of the present application, and the automatic driving obstacle detection and recognition apparatus including: a detection module 201, a data acquisition module 202 and a detection analysis module 203.
The detection module 201 is configured to detect an obstacle in front of the vehicle and perform the first deceleration control according to the detection result. In this embodiment, the millimeter-wave radar is specifically a 77 GHz millimeter-wave radar with 2 transmitting and 74 receiving channels, which realizes synthetic aperture imaging of the detection area in front of the vehicle, thereby obtaining high-resolution imaging and high azimuth resolution of the target; high range resolution is achieved by transmitting a wideband signal. Of course, other devices suitable for long-range detection may also be used as the millimeter-wave radar; the above is only one embodiment of the present application and should not be taken as limiting.
The data acquisition module 202 is configured to collect data of an obstacle in front of the vehicle after the first deceleration, perform fusion modeling on the data acquisition result and the detection result, and perform the second deceleration control according to the fusion modeling result. In this embodiment, modeling is performed from the lidar point cloud data and the millimeter-wave radar point cloud data. Since the millimeter-wave radar point cloud is sparse, during fusion the point cloud parameters of certain millimeter-wave feature points are matched and filtered against the lidar point cloud to obtain three-dimensional point cloud data. The size of the obstacle is identified from the three-dimensional point cloud data and compared with a preset value: if the obstacle is smaller than the preset value, the vehicle does not need to decelerate; if it is larger than the preset value, the vehicle is controlled to decelerate. Specifically, the lidar is a 128-line lidar.
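The matching of sparse millimeter-wave feature points against the dense lidar cloud is not detailed in the text; one plausible minimal sketch is a gated nearest-neighbour association. The gate value and all names below are illustrative assumptions, not the application's algorithm.

```python
import math

# Hedged sketch: pair each sparse millimetre-wave feature point with its
# nearest lidar point, keeping only pairs closer than a distance gate.
# The 0.5 m gate and all names are illustrative assumptions.

def associate(mmwave_points, lidar_cloud, gate_m=0.5):
    """Gated nearest-neighbour association of mmWave points to lidar points."""
    pairs = []
    for p in mmwave_points:
        q = min(lidar_cloud, key=lambda c: math.dist(p, c))
        if math.dist(p, q) <= gate_m:
            pairs.append((p, q))
    return pairs

lidar = [(10.0, 0.0, 0.0), (10.2, 0.1, 0.0), (30.0, 5.0, 0.0)]
mmwave = [(10.1, 0.0, 0.0), (80.0, 0.0, 0.0)]
pairs = associate(mmwave, lidar)
# Only the first mmWave return has a lidar neighbour inside the gate.
```

In practice the associated pairs would then be combined (e.g., averaged or weighted by sensor accuracy) before the obstacle's bounding volume is measured against the preset value.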
The detection and analysis module 203 is configured to detect and analyze an obstacle in front of the vehicle after the second deceleration, and to maintain or change the vehicle driving state according to the detection and analysis result. Maintaining the vehicle driving state means that the vehicle continues forward at the current speed; changing the vehicle driving state means that the vehicle detours or stops. If the detection and analysis finds that the material of the obstacle affects normal driving, the driving state is changed, i.e., a detour or parking instruction is generated and the vehicle is controlled to detour or park. If the material of the obstacle does not affect normal driving, the driving state is maintained, i.e., a continue-forward instruction is generated and the vehicle continues forward at the current speed.
In some embodiments, the data acquisition module 202 is configured to perform the following steps when performing the second deceleration control according to the fusion modeling result: and judging whether to perform second deceleration control according to whether the fusion modeling result is within a preset value range. If the fusion modeling result is within the range of the preset value, the vehicle does not need to be decelerated for the second time; and if the fusion modeling result is not within the range of the preset value, controlling the vehicle to decelerate for the second time.
In some embodiments, the data acquisition module 202 is configured to perform the following steps when fusion modeling the data acquisition results and the detection results: acquiring geographic reference information of a vehicle and relative information of a vehicle working environment relative to a laser scanner coordinate system; and carrying out coordinate conversion on the geographic reference information and the relative information to obtain three-dimensional point cloud information.
The geographic reference information of the vehicle can be acquired directly by an IMU/GNSS system, which collects the corresponding point cloud data; the relative information of the vehicle working environment with respect to the laser scanner coordinate system can be collected by the lidar, which likewise collects point cloud data. Through an interpolation algorithm, each frame of point cloud data of the geographic reference information and the relative information is converted from the IMU/GNSS coordinate system and the laser scanner coordinate system to the geocentric rectangular coordinate system, thereby realizing the coordinate conversion of the geographic reference information and the relative information.
Specifically, the coordinate conversion of the geographic reference information and the relative information is calculated by the following formula:

X_P^e = X_IMU^e + R_IMU^e (a^IMU + R_S^IMU x^S)

wherein X_P^e is the coordinate of the laser scanning point P in the geocentric rectangular coordinate system; X_IMU^e is the coordinate of the IMU/GNSS center in the geocentric rectangular coordinate system, i.e., the geographic reference information of the vehicle, output as the position measured by the IMU/GNSS system; R_IMU^e is the rotation matrix from the IMU/GNSS coordinate system to the geocentric rectangular coordinate system, formed from the attitude measured by the IMU/GNSS system; a^IMU is the component expression, in the IMU/GNSS coordinate system, of the offset from the scanning center of the laser scanner to the IMU/GNSS center, whose initial value can be obtained by manual measurement; R_S^IMU is the rotation matrix from the laser scanner coordinate system to the IMU/GNSS coordinate system, determined by the specific installation axis, that is, by the specific installation position of the laser scanner; and x^S is the coordinate of the scanning point in the laser scanner coordinate system, i.e., the relative information of the vehicle working environment with respect to the laser scanner coordinate system, output by the laser scanner.
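The transform can be sketched in plain Python using the standard lidar georeferencing form X_P^e = X_IMU^e + R_IMU^e (a^IMU + R_S^IMU x^S), which matches the quantities listed above. Rotation matrices are 3x3 nested lists and vectors are 3-tuples; the concrete numbers below are illustrative values, not data from the application.

```python
# Plain-Python sketch of the georeferencing transform
# X_P^e = X_IMU^e + R_IMU^e (a^IMU + R_S^IMU x^S).
# Rotation matrices are 3x3 nested lists; vectors are 3-tuples.
# All concrete numbers below are illustrative only.

def mat_vec(R, v):
    """Multiply a 3x3 rotation matrix by a 3-vector."""
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def vec_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def georeference(X_imu_e, R_imu_e, a_imu, R_s_imu, x_s):
    """Coordinates of scan point x_s in the geocentric rectangular frame."""
    return vec_add(X_imu_e,
                   mat_vec(R_imu_e, vec_add(a_imu, mat_vec(R_s_imu, x_s))))

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Identity attitudes, a 1 m lever arm along x, and a scan point 2 m ahead:
p = georeference((100.0, 200.0, 50.0), I3, (1.0, 0.0, 0.0), I3, (2.0, 0.0, 0.0))
# p == (103.0, 200.0, 50.0)
```

With identity attitudes the result is simply the IMU/GNSS position shifted by the lever arm plus the scan range, which makes the role of each term easy to check.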
In some embodiments, the detection analysis module 203 is configured to perform the following steps when performing detection analysis on the obstacle in front of the vehicle after the second deceleration: acquiring an enhanced image of an obstacle in front of a vehicle; extracting multi-level depth features of the enhanced image; and detecting and analyzing the multi-level depth characteristics to obtain a detection and analysis result.
In some embodiments, the detection analysis module 203 is configured to perform the following steps after extracting the multi-level depth features of the enhanced image and before performing detection analysis on the multi-level depth features: and fusing the multi-level depth features to obtain semantic features. The multi-level depth features are fused, so that the size of the feature map is reduced, the contained information is richer, and the accuracy of detection and analysis is improved.
In some embodiments, the automatic driving obstacle detection and recognition device further comprises an adjustment module. The adjusting module is used for adjusting the position of the device for detecting and analyzing the obstacle in front of the vehicle after the second deceleration before the obstacle in front of the vehicle is detected and analyzed. In particular, the adjustment module may be a robot or other device that can perform angular or displacement adjustment on the acquisition module.
As can be seen from the above, the automatic driving obstacle detection and recognition device provided in the embodiments of the present application detects an obstacle in front of the vehicle and performs the first deceleration control according to the detection result; collects data of the obstacle after the first deceleration, performs fusion modeling on the data acquisition result and the detection result, and performs the second deceleration control according to the fusion modeling result; and detects and analyzes the obstacle after the second deceleration, maintaining or changing the vehicle driving state according to the detection and analysis result. The obstacle in front of the vehicle is thus detected and identified in a hierarchical manner, so that the vehicle can still detect and identify the obstacle under poor observation conditions and adjust the vehicle speed or driving state in time; this not only reduces the wear caused by emergency braking but also ensures that the vehicle avoids the obstacle effectively.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 3 includes a processor 301 and a memory 302, which are interconnected and communicate with each other via a communication bus 303 and/or another form of connection mechanism (not shown). The memory 302 stores a computer program executable by the processor 301; when the computing device runs, the processor 301 executes the computer program to perform the method in any of the optional implementations of the above embodiments, realizing the following functions: detecting an obstacle in front of the vehicle and performing first deceleration control according to the detection result; collecting data of the obstacle in front of the vehicle after the first deceleration, performing fusion modeling on the data acquisition result and the detection result, and performing second deceleration control according to the fusion modeling result; and detecting and analyzing the obstacle in front of the vehicle after the second deceleration, and maintaining or changing the vehicle driving state according to the detection and analysis result.
The embodiment of the present application provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program executes the method in any optional implementation manner of the foregoing embodiment to implement the following functions: detecting an obstacle in front of the vehicle, and performing first deceleration control according to a detection result; carrying out data acquisition on an obstacle in front of the vehicle after the first deceleration, carrying out fusion modeling on a data acquisition result and a detection result, and carrying out second deceleration control according to a fusion modeling result; and detecting and analyzing the obstacle in front of the vehicle after the second deceleration, and keeping or changing the running state of the vehicle according to the detection and analysis result. The storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a division of one logic function, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. An automatic driving obstacle detection and recognition method for detecting and recognizing an obstacle in front of a vehicle, the method comprising the steps of:
detecting an obstacle in front of the vehicle, and performing first deceleration control according to a detection result;
carrying out data acquisition on an obstacle in front of the vehicle after the first deceleration, carrying out fusion modeling on a data acquisition result and the detection result, and carrying out second deceleration control according to a fusion modeling result;
and detecting and analyzing the obstacle in front of the vehicle after the second deceleration, and keeping or changing the running state of the vehicle according to the detection and analysis result.
2. The automatic driving obstacle detection and recognition method according to claim 1, wherein the detection and analysis of the obstacle in front of the vehicle after the second deceleration includes:
acquiring an enhanced image of an obstacle in front of the vehicle;
extracting multi-level depth features of the enhanced image;
and carrying out detection analysis on the multi-level depth features.
3. The automatic driving obstacle detection and recognition method according to claim 2, further comprising, after the extracting of the multi-level depth feature of the enhanced image and before the detection analysis of the multi-level depth feature, the steps of:
and fusing the multi-level depth features to obtain semantic features.
4. The automatic driving obstacle detection and recognition method according to claim 1, wherein the performing of the second deceleration control based on the fusion modeling result includes the steps of:
and judging whether to perform the second deceleration control according to whether the fusion modeling result is within a preset value range.
5. The automatic driving obstacle detection and identification method according to claim 1, wherein the fusion modeling of the data acquisition result and the detection result includes the steps of:
acquiring geographic reference information of the vehicle and relative information of the working environment of the vehicle relative to a coordinate system of a laser scanner;
and performing coordinate conversion on the geographic reference information and the relative information.
6. The automatic driving obstacle detection and recognition method according to claim 5, wherein the coordinate conversion of the geographic reference information and the relative information is calculated by the following formula:

X_P^e = X_IMU^e + R_IMU^e (a^IMU + R_S^IMU x^S)

wherein X_P^e is the coordinate of the laser scanning point P in the geocentric rectangular coordinate system; X_IMU^e is the coordinate of the IMU/GNSS center in the geocentric rectangular coordinate system, output as the position measured by the IMU/GNSS system; R_IMU^e is the rotation matrix from the IMU/GNSS coordinate system to the geocentric rectangular coordinate system, formed from the attitude measured by the IMU/GNSS system; a^IMU is the component expression, in the IMU/GNSS coordinate system, of the offset from the scanning center of the laser scanner to the IMU/GNSS center, whose initial value is obtained by manual measurement; R_S^IMU is the rotation matrix from the laser scanner coordinate system to the IMU/GNSS coordinate system, determined by the specific installation axis; and x^S is the coordinate of the scanning point of the laser scanner in the laser scanner coordinate system, output by the laser scanner.
7. The automatic driving obstacle detection and recognition method according to claim 1, further comprising, before performing detection analysis of an obstacle in front of the vehicle after the second deceleration, the steps of:
and adjusting the position of equipment for detecting and analyzing the obstacle in front of the vehicle.
8. An automatic driving obstacle detection and recognition device for detecting and recognizing an obstacle in front of a vehicle, the device comprising:
the detection module is used for detecting an obstacle in front of the vehicle and carrying out first-time deceleration control according to a detection result;
the data acquisition module is used for acquiring data of an obstacle in front of the vehicle after the vehicle decelerates for the first time, fusing and modeling a data acquisition result and the detection result, and performing deceleration control for the second time according to a fused and modeled result;
and the detection and analysis module is used for detecting and analyzing the obstacle in front of the vehicle after the second deceleration and keeping or changing the running state of the vehicle according to the detection and analysis result.
9. An electronic device comprising a processor and a memory, said memory storing computer readable instructions which, when executed by said processor, perform the steps of the method of any of claims 1-7.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111370834.8A CN114120275A (en) | 2021-11-18 | 2021-11-18 | Automatic driving obstacle detection and recognition method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114120275A true CN114120275A (en) | 2022-03-01 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115562280A (en) * | 2022-10-12 | 2023-01-03 | 九识智行(北京)科技有限公司 | Path planning method, device and storage medium for automatically driving vehicle to get rid of trouble |
CN117048596A (en) * | 2023-08-04 | 2023-11-14 | 广州汽车集团股份有限公司 | Method, device, vehicle and storage medium for avoiding obstacle |
CN117048596B (en) * | 2023-08-04 | 2024-05-10 | 广州汽车集团股份有限公司 | Obstacle avoidance method, device, vehicle and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||