CN112666535A - Environment sensing method and system based on multi-radar data fusion
- Publication number: CN112666535A
- Application number: CN202110036133.4A
- Authority: CN (China)
- Prior art keywords: grid, point cloud, radar, target, clustering
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Traffic Control Systems (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The invention discloses an environment perception method and system based on multi-radar data fusion. The method takes the point cloud characteristics of the laser radar into account and sets different clustering thresholds according to distance, with the specific thresholds determined from actual use, which effectively solves the over-segmentation and under-segmentation of targets. The method can also work independently in simple scenes where other sensing systems fail, and it is expandable and highly practical.
Description
Technical Field
The invention belongs to the technical field of vehicle environment perception, and particularly relates to a multi-radar environment perception data fusion technology for intelligent driving vehicles.
Background
An intelligent driving vehicle needs its sensing system to maintain stable detection across different environments and working conditions. Such a system generally comprises millimeter-wave radar, ultrasonic sensors, panoramic vision, and laser radar. A vision sensor serving as the main sensor is strongly affected by lighting, so fusing multiple laser radars for target detection can effectively improve the adaptability and robustness of the sensing system, especially when weak light leaves the vision sensor with insufficient or failed target detection capability. In addition, compared with a single-radar system, a multi-radar system has smaller blind areas and stronger perception of the surrounding environment, which effectively enhances the safety of the sensing system.
Chinese patent document CN201911178929.2 discloses a method and system for detecting driving obstacles of an automobile with multiple laser radars. It combines a high-precision map with the laser radars and needs other sensors to determine the ROI area, so it is strongly dependent on them. Chinese patent document CN202010379056.8 discloses a method based on multi-lidar data fusion that can make full and effective use of the data of multiple lidars, but its conversion is complex, its processing is time-consuming, and it lacks expandability and practicability.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an environment perception method based on multi-radar data fusion, with the aim of establishing an expandable multi-radar data fusion environment perception system that can work independently and is highly practical.
The technical scheme of the invention is as follows:
a multi-radar data fusion environment perception method comprises the following steps:
step 1, point cloud information is obtained: obtaining vehicle surrounding environment information by a plurality of radars to obtain point cloud data;
step 2, point cloud distortion compensation: combining the IMU information to obtain the conversion relation between the radar position where each point cloud is located and the initial position, and converting all points of each frame to the initial radar position;
step 3, converting a coordinate system: converting the point cloud after distortion compensation from a radar coordinate system to a vehicle body coordinate system through a calibration file to obtain the point cloud under the vehicle body coordinate system;
step 4, establishing a grid map: setting a grid map, determining a grid area, performing point cloud projection, finally judging the attribute of the grid, and establishing to obtain the grid map;
step 5, target clustering and screening: clustering targets according to the occupation state of the grids, and deleting abnormal targets;
and 6, visualizing the target.
In the method, the raw data of the multiple radars must be converted by a driver into a structure predefined by the program. The raw laser radar data is sent in the form of UDP packets; each UDP packet consists of several blocks, and each block contains the data of all laser emitters of the radar at the same instant. A dedicated radar data driver in the computing and processing unit stores the UDP packets in single-line, single-frame form. In addition, the conversion driver that maps the point cloud data of the different radars to the vehicle body coordinate system is obtained from the original radar calibration information corrected by inertial navigation unit data, and transmits the data to the processing unit of the perception system at a frequency higher than the point cloud scanning frequency. The main input interface of the processing unit therefore comprises: single-line, single-frame point cloud data produced by the point cloud driver, and the radar-to-body conversion matrices produced by the conversion driver.
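As a concrete illustration of this driver stage, the sketch below decodes one block of a UDP packet into a predefined point structure. The packet layout (field widths, offsets, emitter angles) is vendor-specific and not given in the patent, so every field here is an assumption:

```python
import math
import struct

# Hypothetical packet layout (assumed, not from the patent): each block is a
# uint16 azimuth in 0.01 deg followed, per laser, by a uint16 range in mm and
# a uint8 intensity.
N_LASERS = 32
ELEVATIONS_DEG = [-16.0 + i for i in range(N_LASERS)]  # assumed emitter angles

def parse_block(block: bytes):
    """Decode one block: the simultaneous firing of all laser emitters."""
    azimuth_deg = struct.unpack_from("<H", block, 0)[0] * 0.01
    az = math.radians(azimuth_deg)
    points = []
    for i in range(N_LASERS):
        rng_mm, intensity = struct.unpack_from("<HB", block, 2 + i * 3)
        r = rng_mm / 1000.0
        el = math.radians(ELEVATIONS_DEG[i])
        points.append((r * math.cos(el) * math.sin(az),  # x
                       r * math.cos(el) * math.cos(az),  # y
                       r * math.sin(el),                 # z
                       intensity,
                       i))                               # single-line index
    return points
```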
After the data is converted into the vehicle body coordinate system, it is filtered using prior knowledge to reduce subsequent computation and the load on the grid map. The relevant parameters of the grid map can be modified through configuration files according to the actual scene, achieving a better match between scene and program and between computing power and efficiency. The data of the different radars is projected onto the grid plane (the XY plane), and every point is placed into the grid cell with the corresponding ID according to its (x, y) position; during this process a denoising module deletes abnormal points to reduce the interference of noise on the sensing system. After the projection is finished, because the point cloud is sparse at long range, different strategies are used to judge the attributes of grids at different distances, which effectively reduces the probability of false and missed detections; building an effective grid map has an important influence on the effect and efficiency of the subsequent target clustering.
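A minimal sketch of the prior-knowledge filter described above follows; the region-of-interest bounds and ego-vehicle footprint are assumed, scene-configurable values, not figures from the patent:

```python
import numpy as np

# Assumed, scene-configurable bounds (placeholders, not patent values).
ROI = dict(x_min=-30.0, x_max=80.0, y_min=-20.0, y_max=20.0, z_max=3.0)
EGO = dict(x_min=-2.5, x_max=2.5, y_min=-1.2, y_max=1.2)  # ego footprint

def roi_filter(points: np.ndarray) -> np.ndarray:
    """points: (N, 3) array in the vehicle-body frame; keep ROI points that
    are not returns from the vehicle body itself."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    in_roi = ((x > ROI["x_min"]) & (x < ROI["x_max"]) &
              (y > ROI["y_min"]) & (y < ROI["y_max"]) & (z < ROI["z_max"]))
    on_ego = ((x > EGO["x_min"]) & (x < EGO["x_max"]) &
              (y > EGO["y_min"]) & (y < EGO["y_max"]))
    return points[in_roi & ~on_ego]
```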
The specific method for establishing the grid map in the step 4 is as follows:
step 4.1, setting a grid map: setting the size and the resolution of a grid map, and determining a grid area, namely a forward detection distance, a backward detection distance and a lateral detection distance, by combining the position of a vehicle in a grid;
step 4.2, point cloud projection is carried out: projecting point cloud data of different radars to a grid plane (XY plane), putting all point clouds in grids of different IDs according to (x, y) position information of the point clouds, and deleting abnormal point clouds through denoising processing in the process;
step 4.3, judging the attributes of the grids: for each grid in the near field, calculate the height difference between the highest and lowest points in the grid; if the difference is greater than a threshold X, the grid is determined to be a target grid, otherwise it is not. For the far-field point cloud, use the absolute height instead: if the highest point in the grid is higher than a threshold Y, the grid is determined to be a target grid, otherwise a non-target grid.
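The sketch below walks through the projection of step 4.2 and the attribute judgment of step 4.3 under assumed parameters: a 0.2 m cell size, a near/far split at 40 m, and placeholder values for the thresholds X and Y, which the patent says are obtained experimentally:

```python
import math
import numpy as np

RES, NEAR_RANGE = 0.2, 40.0        # assumed cell size and near/far split
X_DIFF, Y_ABS = 0.3, 0.5           # placeholder thresholds X and Y (metres)

def build_grid_map(points, x_rng=(-30.0, 80.0), y_rng=(-20.0, 20.0)):
    """points: iterable of (x, y, z) in the body frame -> boolean grid."""
    nx = int((x_rng[1] - x_rng[0]) / RES)
    ny = int((y_rng[1] - y_rng[0]) / RES)
    z_min = np.full((nx, ny), np.inf)
    z_max = np.full((nx, ny), -np.inf)
    for x, y, z in points:                       # step 4.2: projection
        ix, iy = int((x - x_rng[0]) / RES), int((y - y_rng[0]) / RES)
        if 0 <= ix < nx and 0 <= iy < ny:
            z_min[ix, iy] = min(z_min[ix, iy], z)
            z_max[ix, iy] = max(z_max[ix, iy], z)
    occupied = np.zeros((nx, ny), dtype=bool)
    for ix in range(nx):                         # step 4.3: grid attributes
        for iy in range(ny):
            if z_max[ix, iy] == -np.inf:
                continue                         # empty cell
            cx = x_rng[0] + (ix + 0.5) * RES
            cy = y_rng[0] + (iy + 0.5) * RES
            if math.hypot(cx, cy) < NEAR_RANGE:  # near: height difference
                occupied[ix, iy] = (z_max[ix, iy] - z_min[ix, iy]) > X_DIFF
            else:                                # far: absolute height
                occupied[ix, iy] = z_max[ix, iy] > Y_ABS
    return occupied
```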
The method performs target detection on the grid map by traversing and clustering the whole map. Because the point cloud is sparse at long range, the clustering threshold is adapted to distance, which effectively reduces over-segmentation and under-segmentation in target detection. The clustered targets are then screened, filtering out those that do not meet the conditions; the screening conditions can be flexibly modified through the configuration file to meet the requirements of the scene.
The target clustering and screening in step 5 are specifically as follows: the target clustering uses a flood-fill method, traversing and clustering the whole grid map for target detection; for far target grids the clustering threshold is large, and for near target grids it is small. The screening deletes abnormal targets according to constraint conditions determined by experience or experiment, yielding a series of detected targets and related information including size, speed, and the like.
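A sketch of this flood-fill clustering follows. The distance-adaptive threshold is expressed as a neighbourhood radius in cells (1 cell near, 2 cells far); these radii and the 40 m split are assumed values standing in for the experimentally tuned thresholds:

```python
import math
from collections import deque

def cluster_grid(occupied, res=0.2, origin=(-30.0, -20.0), near_range=40.0):
    """Flood-fill the boolean grid into clusters; the neighbourhood radius
    grows with distance so sparse far targets are not over-segmented."""
    nx, ny = occupied.shape
    labels = [[-1] * ny for _ in range(nx)]
    clusters = []
    for sx in range(nx):
        for sy in range(ny):
            if not occupied[sx][sy] or labels[sx][sy] != -1:
                continue
            cid, cells = len(clusters), []
            labels[sx][sy] = cid
            queue = deque([(sx, sy)])
            while queue:
                cx, cy = queue.popleft()
                cells.append((cx, cy))
                px = origin[0] + (cx + 0.5) * res
                py = origin[1] + (cy + 0.5) * res
                r = 1 if math.hypot(px, py) < near_range else 2  # adaptive
                for dx in range(-r, r + 1):
                    for dy in range(-r, r + 1):
                        mx, my = cx + dx, cy + dy
                        if (0 <= mx < nx and 0 <= my < ny and
                                occupied[mx][my] and labels[mx][my] == -1):
                            labels[mx][my] = cid
                            queue.append((mx, my))
            clusters.append(cells)
    return clusters
```

Screening would then discard clusters whose footprint or height violates the configured constraints, for example clusters covering fewer than a minimum number of cells.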
In the target visualization stage, the method can display a two-dimensional rectangular frame, the original two-dimensional convex hull, and a three-dimensional frame by modifying the visualization configuration parameters.
Further, in the method, at least one radar is arranged on the roof as the main radar, and at least one radar is arranged on each side of the vehicle, 0.35-0.5 m from the ground, as a side radar. The side radars are mounted at an installation inclination angle of 8-10 degrees, on the two sides of the vehicle head or below the rearview mirrors.
The invention further provides an environment perception system based on multi-radar data fusion that implements the above method.
The method has the following specific advantages:
the method has the advantages that: in the point cloud target detection based on the grid map, due to the characteristic of close-dense and far-sparse of the mechanical radar point cloud, if the clustering threshold value is unchanged during clustering, over-segmentation of a far target is easily caused. The method considers the point cloud characteristics of the laser radar, sets different thresholds (small near threshold and large far threshold) according to the distance, determines the determination of the different distance thresholds by combining the actual using effect, effectively solves the problems of over-segmentation and under-segmentation of the target, and can independently work in other simple scenes with failure sensing systems.
Advantage 2: the laser radar easily produces a blind area near the vehicle body, and in general a higher mounting height means a larger blind area, so low-beam radars are mounted symmetrically at a low position on the vehicle body (taking cost into account). To further reduce the blind area, these radars are mounted at a certain sideways inclination angle. In actual use, all targets at the side of the vehicle body (cars and trucks) can be detected, with only about a 0.5 m blind area for pedestrians at the vehicle side, a clear advantage over other methods.
the method has the advantages that: the method has obvious advantages in the aspect of expansibility, aiming at the adjustment of the installation quantity of the laser radars, the algorithm does not need to be changed, only the point cloud channel name and the calibration parameters of the point cloud channel name are changed in the configuration file, in addition, in the aspect of visualization, different target perception effects can be presented by modifying the visual configuration parameters so as to meet the requirements of different users, and the method has better expansibility.
Drawings
FIG. 1 is a diagram of the algorithm processing framework of the environment perception system based on multi-radar data fusion;
FIG. 2 is a flow diagram of the target grid map creation module;
FIG. 3 is a flow diagram of the target clustering and visualization module;
FIG. 4 is a schematic diagram of the installation positions of the multi-radar data fusion environment perception system.
Detailed Description
Embodiments of the present disclosure are described herein. However, it is to be understood that the disclosed embodiments are merely exemplary, and that other embodiments may take various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention. As will be appreciated by one of ordinary skill in the art, various features illustrated and described with reference to any one of the figures may be combined with features illustrated in one or more other figures to produce embodiments that are not explicitly illustrated or described. The combination of features shown provides a representative embodiment for a typical application. However, various combinations and modifications of the features consistent with the teachings of the present disclosure may be desired for particular applications or embodiments.
The system adopted by the invention mainly comprises a plurality of radars, a supporting bracket, a junction box, high-speed network cables, and an algorithm unit. Fig. 4 roughly illustrates the installation positions of the main radar 33, the side radar 34, and the side radar 35.
Main radar mounting position: one main radar is arranged on the roof of the vehicle. Its measuring range reaches 150 m at 10% reflectivity, and its vertical and horizontal angular resolutions are small: in this embodiment, at a scanning frequency of 10 Hz, the horizontal angular resolution is 0.2° and the vertical angular resolution is 0.33° in the middle region. The target can therefore be depicted better, more point cloud returns land on the target object, and the point cloud lines are denser, leaving more time to detect a forward target early and plan control accordingly. However, because the main radar is mounted high above the ground, it has large blind areas around the ego vehicle. Laser radars with relatively few beams are therefore installed low on both sides of the vehicle; in this embodiment the actual mounting height is about 0.35 m, balancing application effect and cost. A certain mounting inclination angle is set as needed, about 9° in this embodiment, which effectively reduces the blind area. The side radars sit on the two sides of the vehicle head or below the rearview mirrors; a rear radar can be installed as needed, and the system framework has a preset interface so that data from a newly added radar can be accessed quickly.
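The effect of the mounting tilt on the blind area can be checked with simple geometry. The sketch below assumes the side radar's lowest beam sits 15° below the sensor horizontal, a made-up figure for a low-beam-count radar; the patent only gives the ~0.35 m height and ~9° tilt:

```python
import math

def ground_blind_radius(h: float, tilt_deg: float, fov_low_deg: float) -> float:
    """Distance at which the lowest beam of a radar at height h, tilted down
    by tilt_deg, first reaches the ground."""
    return h / math.tan(math.radians(tilt_deg + fov_low_deg))

print(ground_blind_radius(0.35, 0.0, 15.0))  # no tilt:    ~1.31 m
print(ground_blind_radius(0.35, 9.0, 15.0))  # 9 deg tilt: ~0.79 m
```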
Fig. 1 describes the overall framework of the method. When the power supply of laser radar 2, laser radar 3, and laser radar 4 meets the requirements and their own states are normal, point cloud UDP packets are generated according to the set strategy. Because the factory-set parameters of each radar differ, the corresponding parameter file 5 must be loaded for correction; the streams are then fed together into the original point cloud driving program 6, finally obtaining the predefined point cloud data formats of laser radar 7, laser radar 8, and laser radar 9.
Step 1: acquiring point cloud information: as the vehicle moves, the laser radar continuously acquires information about the surrounding environment, expressed as successive rings of point cloud.
Step 2: point cloud distortion compensation: because the radar moves with the vehicle body, the points in one frame of point cloud are collected at different positions, i.e., the point cloud is distorted, and this distortion degrades the precision of subsequent processing. The conversion relation between the radar pose at which each point was captured and the initial pose is therefore obtained from the information of the IMU 12, and all points of each frame are converted to the initial radar pose.
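A minimal sketch of this compensation is shown below; `imu_pose(t)`, returning a 4×4 world pose at time t, is an assumed interface standing in for the IMU 12 integration:

```python
import numpy as np

def compensate_frame(points_xyzt: np.ndarray, imu_pose) -> np.ndarray:
    """points_xyzt: (N, 4) columns x, y, z, t in the moving radar frame.
    Re-express every point in the radar pose at the start of the frame."""
    t0 = points_xyzt[:, 3].min()                 # frame start time
    T0_inv = np.linalg.inv(imu_pose(t0))         # world -> initial radar pose
    out = np.empty((len(points_xyzt), 3))
    for i, (x, y, z, t) in enumerate(points_xyzt):
        p_world = imu_pose(t) @ np.array([x, y, z, 1.0])
        out[i] = (T0_inv @ p_world)[:3]
    return out
```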
Step 3: coordinate system conversion: the distortion-compensated point cloud is converted from the radar coordinate system to the vehicle body coordinate system through the calibration file 13, obtaining the point cloud 14 in the vehicle body coordinate system.
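Step 3 then reduces to one rigid transform per radar, with the 4×4 extrinsic matrix read from the calibration file; a sketch:

```python
import numpy as np

def radar_to_body(points: np.ndarray, T_body_radar: np.ndarray) -> np.ndarray:
    """points: (N, 3) in the radar frame; T_body_radar: 4x4 homogeneous
    extrinsics from the calibration file -> (N, 3) in the body frame."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T_body_radar.T)[:, :3]
```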
Step 4, establishing the grid map: to reduce computation and improve efficiency, points are tested against the area of interest 15, filtering out 16 the points that hit the vehicle body or contribute nothing to target detection, and keeping the points inside the area of interest; the grid map 17 is then created. Building the grid map mainly involves setting the forward, backward, and lateral detection distances, determining the grid resolution, and selecting the policy for judging grid occupancy. Once a complete grid map is obtained, target clustering and visualization can be performed according to the grid occupancy state 18.
Step 5, target clustering: the process mainly comprises the determination of a clustering strategy and the determination of a clustering threshold value.
Step 6, target visualization: this mainly involves selecting the visualization contents; the visualization module can selectively output a two-dimensional convex hull, a two-dimensional rectangular frame, a three-dimensional convex hull, and a three-dimensional bounding box.
Fig. 2 describes in more detail the grid map building process of step 4:
The size and resolution of the grid map are set 19, and the grid area 20, i.e., the forward, backward, and lateral detection distances, is determined in combination with the position of the vehicle in the grid; the parameters can be set flexibly for different scenes to balance detection effect and efficiency. Among the laser radar points in vehicle body coordinates, some are noise or abnormal points, so they are screened by the noise removal module 21 and only non-noise points are placed into the grid. For the attribute of each grid in the near field, the height difference between the highest and lowest points in the grid is calculated; if it is greater than a threshold X, the grid is a target grid, otherwise it is not. Because the far-field point cloud is sparse and far grids contain few points, the absolute height is used instead: if the highest point in the grid is above a threshold Y, the grid is a target grid, otherwise it is not. Both thresholds X and Y can be obtained from experimental results.
FIG. 3 describes the process of step 5 object clustering and screening in more detail:
the target clustering 26 is carried out after the grid map and the grid attributes are established, the clustering method adopts a flooding method, for the far target grid, the clustering threshold is large, the clustering threshold is small for the near target grid, and the acquisition of the specific threshold needs to be determined according to the experimental effect under the large principle, so that over-segmentation and under-segmentation can be effectively reduced. Since the abnormal target 28 needs to be deleted by determining the constraint condition 27 of the abnormal target according to the target detection which only depends on the height information or the situation of inaccurate detection, after the process is completed, a series of detected targets and related information including size, speed and the like can be obtained. This information can be given to downstream modules and visualized.
During target visualization, the targets can be displayed as two-dimensional rectangular frames, the original two-dimensional convex hulls, and/or three-dimensional frames according to the visualization configuration parameters.
A further embodiment of the present invention is an environment sensing system for multi-radar data fusion, which implements the above method, and includes a plurality of radars and a processing unit, where the processing unit includes:
the point cloud data acquisition module, configured to acquire vehicle surrounding environment information through the plurality of radars to obtain point cloud data;
the point cloud distortion compensation module is configured to be combined with the IMU information to obtain a conversion relation between the radar position where each point cloud is located and the initial position, and all points of each frame are converted to the initial radar position;
the coordinate system conversion module, configured to convert the distortion-compensated point cloud from the radar coordinate system to the vehicle body coordinate system through the calibration file, obtaining the point cloud in the vehicle body coordinate system;
the grid map building module is configured to set a grid map, determine a grid area, then perform point cloud projection and finally judge the attribute of the grid;
the target clustering and screening module is configured to perform target clustering according to the grid occupation state and delete abnormal targets;
an object visualization module.
The processing unit is provided with a preset interface for accessing newly added radar data.
While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms encompassed by the claims. The words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the disclosure. As previously described, features of the various embodiments may be combined to form further embodiments of the invention that may not be explicitly described or illustrated. While various embodiments may have been described as providing advantages or being advantageous over other embodiments or prior art implementations in terms of one or more desired characteristics, those of ordinary skill in the art will recognize that one or more features or characteristics may be compromised to achieve desired overall system attributes, depending on the particular application and implementation. These attributes may include, but are not limited to, cost, strength, durability, life cycle cost, appearance, size, manufacturability, functional robustness, and the like. As such, embodiments described as less desirable in one or more characteristics than other embodiments or prior art implementations are not outside the scope of the present disclosure and may be desirable for particular applications.
Claims (10)
1. A multi-radar data fusion environment perception method is characterized by comprising the following steps:
step 1, point cloud data are obtained: obtaining vehicle surrounding environment information by a plurality of radars to obtain point cloud data;
step 2, point cloud distortion compensation: combining the IMU information to obtain the conversion relation between the radar position where each point cloud is located and the initial position, and converting all points of each frame to the initial radar position;
step 3, converting a coordinate system: converting the point cloud after distortion compensation from a radar coordinate system to a vehicle body coordinate system through a calibration file to obtain the point cloud under the vehicle body coordinate system;
step 4, establishing a grid map: setting a grid map, determining a grid area, performing point cloud projection, finally judging the attribute of the grid, and establishing to obtain the grid map;
step 5, target clustering and screening: clustering targets according to the occupation state of the grids, and deleting abnormal targets;
and 6, visualizing the target.
2. The multi-radar data fusion environment perception method according to claim 1, wherein the specific method for building the grid map in step 4 is as follows:
step 4.1, setting a grid map: setting the size and the resolution of a grid map, and determining a grid area, namely a forward detection distance, a backward detection distance and a lateral detection distance, by combining the position of a vehicle in a grid;
step 4.2, point cloud projection is carried out: projecting point cloud data of different radars to a grid plane (XY plane), putting all point clouds in grids of different IDs according to (x, y) position information of the point clouds, and deleting abnormal point clouds through denoising processing in the process;
step 4.3, judging the attributes of the grids: for each grid in the near field, calculating the height difference between the highest and lowest points in the grid; if the difference is greater than a threshold X, determining the grid to be a target grid, otherwise not; for the far-field point cloud, using the absolute height instead: if the highest point in the grid is higher than a threshold Y, determining the grid to be a target grid, otherwise a non-target grid.
3. The multi-radar data fusion environment perception method according to claim 2, wherein the denoising processing method in the step 4.2 is: and filtering out points hitting the vehicle body and points which do not contribute to target detection by judging whether the point cloud is positioned in the region of interest or not, and leaving the points positioned in the region of interest.
4. The multi-radar data fusion environment perception method according to claim 2, wherein the target clustering and screening in step 5 are as follows: the target clustering uses a flood-fill method, traversing and clustering the whole grid map for target detection; for far target grids the clustering threshold is large, and for near target grids the clustering threshold is small; the screening deletes abnormal targets according to constraint conditions determined by experience or experiment, obtaining a series of detected targets and related information including size, speed and the like.
5. The multi-radar data fusion environment perception method according to claim 1, wherein the radar raw data is driven into a predefined format as follows: when the power supply of each radar meets the conditions and its own state is normal, a point cloud UDP packet is generated; each radar loads a corresponding parameter file for correction according to its factory-set parameters; the streams are then fed together into the original point cloud driving unit, finally obtaining the predefined radar point cloud data format.
6. The multi-radar data fusion environment perception method according to claim 1, wherein in the target visualization process of step 6, the targets are displayed as two-dimensional rectangular frames, original two-dimensional convex hulls and/or three-dimensional frames according to the visualization configuration parameters.
7. The multi-radar data fusion environment perception method according to claim 1, wherein at least one radar is arranged on the roof as a main radar, and at least one radar is arranged on each side of the vehicle at a position 0.35-0.5 m from the ground as a side radar.
8. The multi-radar data fusion environment perception method according to claim 1, wherein the main radar has a measurement range of up to 150 m at 10% reflectivity and, at a scanning frequency of 10 Hz, a horizontal angular resolution of 0.2° and a vertical angular resolution of 0.33° in the middle region; the side radars are mounted at an installation inclination angle of 8-10 degrees, on the two sides of the vehicle head or below the rearview mirrors.
9. An environment perception system based on multi-radar data fusion implementing the method of any one of claims 1-8, comprising a plurality of radars and a processing unit, wherein the processing unit comprises:
the point cloud data acquisition module, configured to acquire vehicle surrounding environment information through the plurality of radars to obtain point cloud data;
the point cloud distortion compensation module is configured to be combined with the IMU information to obtain a conversion relation between the radar position where each point cloud is located and the initial position, and all points of each frame are converted to the initial radar position;
the coordinate system conversion module, configured to convert the distortion-compensated point cloud from the radar coordinate system to the vehicle body coordinate system through the calibration file, obtaining the point cloud in the vehicle body coordinate system;
the grid map building module is configured to set a grid map, determine a grid area, then perform point cloud projection and finally judge the attribute of the grid;
the target clustering and screening module is configured to perform target clustering according to the grid occupation state and delete abnormal targets;
an object visualization module.
10. The system of claim 9, wherein the processing unit has a preset interface for accessing newly added radar data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110036133.4A CN112666535A (en) | 2021-01-12 | 2021-01-12 | Environment sensing method and system based on multi-radar data fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110036133.4A CN112666535A (en) | 2021-01-12 | 2021-01-12 | Environment sensing method and system based on multi-radar data fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112666535A true CN112666535A (en) | 2021-04-16 |
Family
ID=75414394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110036133.4A Withdrawn CN112666535A (en) | 2021-01-12 | 2021-01-12 | Environment sensing method and system based on multi-radar data fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112666535A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105404844A (en) * | 2014-09-12 | 2016-03-16 | 广州汽车集团股份有限公司 | Road boundary detection method based on multi-line laser radar |
CN109145677A (en) * | 2017-06-15 | 2019-01-04 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, device, equipment and storage medium |
CN108985171A (en) * | 2018-06-15 | 2018-12-11 | 上海仙途智能科技有限公司 | Estimation method of motion state and state estimation device |
CN109031346A (en) * | 2018-07-09 | 2018-12-18 | 江苏大学 | A kind of periphery parking position aided detection method based on 3D laser radar |
CN110221603A (en) * | 2019-05-13 | 2019-09-10 | 浙江大学 | A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud |
CN110208819A (en) * | 2019-05-14 | 2019-09-06 | 江苏大学 | A kind of processing method of multiple barrier three-dimensional laser radar data |
CN110579771A (en) * | 2019-09-12 | 2019-12-17 | 南京莱斯信息技术股份有限公司 | A method of aircraft berth guidance based on laser point cloud |
CN110906923A (en) * | 2019-11-28 | 2020-03-24 | 重庆长安汽车股份有限公司 | Vehicle-mounted multi-sensor tight coupling fusion positioning method and system, storage medium and vehicle |
CN112101092A (en) * | 2020-07-31 | 2020-12-18 | 北京智行者科技有限公司 | Automatic driving environment perception method and system |
Non-Patent Citations (1)
Title |
---|
Jiang Jianfei et al., "Real-time Detection of Obstacles and Passable Areas Based on 3D Lidar", Laser & Optoelectronics Progress, pages 241-250 *
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113281783A (en) * | 2021-05-13 | 2021-08-20 | 江苏徐工工程机械研究院有限公司 | Mining truck |
CN115407315A (en) * | 2021-05-27 | 2022-11-29 | 北京万集科技股份有限公司 | Point cloud denoising method, device, detection radar and storage medium |
CN113391270A (en) * | 2021-06-11 | 2021-09-14 | 森思泰克河北科技有限公司 | False target suppression method and device for multi-radar point cloud fusion and terminal equipment |
CN113838143A (en) * | 2021-09-13 | 2021-12-24 | 三一专用汽车有限责任公司 | Method and device for determining calibration external parameter, engineering vehicle and readable storage medium |
CN113807239A (en) * | 2021-09-15 | 2021-12-17 | 京东鲲鹏(江苏)科技有限公司 | Point cloud data processing method and device, storage medium and electronic equipment |
CN113807239B (en) * | 2021-09-15 | 2023-12-08 | 京东鲲鹏(江苏)科技有限公司 | Point cloud data processing method and device, storage medium and electronic equipment |
CN113734176A (en) * | 2021-09-18 | 2021-12-03 | 重庆长安汽车股份有限公司 | Environment sensing system and method for intelligent driving vehicle, vehicle and storage medium |
CN113734176B (en) * | 2021-09-18 | 2024-11-22 | 重庆长安汽车股份有限公司 | Environmental perception system, method, vehicle and storage medium for intelligent driving vehicle |
CN114167407A (en) * | 2021-11-29 | 2022-03-11 | 中汽创智科技有限公司 | Multi-radar fusion perception processing method and device, vehicle and storage medium |
CN114488026A (en) * | 2022-01-30 | 2022-05-13 | 重庆长安汽车股份有限公司 | Underground parking garage passable space detection method based on 4D millimeter wave radar |
CN114397654A (en) * | 2022-03-24 | 2022-04-26 | 陕西欧卡电子智能科技有限公司 | Unmanned ship obstacle avoidance method based on multi-radar sensing |
CN114791601A (en) * | 2022-04-24 | 2022-07-26 | 深圳裹动科技有限公司 | Method and system for constructing contour of target object, and main control device |
CN116704455A (en) * | 2022-11-18 | 2023-09-05 | 宇通客车股份有限公司 | A self-driving vehicle and object perception method and system thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112666535A (en) | Environment sensing method and system based on multi-radar data fusion | |
CN110988912B (en) | Road target and distance detection method, system and device for automatic driving vehicle | |
WO2021259344A1 (en) | Vehicle detection method and device, vehicle, and storage medium | |
US11427193B2 (en) | Methods and systems for providing depth maps with confidence estimates | |
CN108226951B (en) | Laser sensor based real-time tracking method for fast moving obstacle | |
CN110782465B (en) | Ground segmentation method and device based on laser radar and storage medium | |
JP7072641B2 (en) | Road surface detection device, image display device using road surface detection device, obstacle detection device using road surface detection device, road surface detection method, image display method using road surface detection method, and obstacle detection method using road surface detection method | |
US20180260613A1 (en) | Object tracking | |
WO2019208101A1 (en) | Position estimating device | |
JP7095640B2 (en) | Object detector | |
CN110082783B (en) | Method and device for detecting cliffs | |
CN112101316B (en) | Target detection method and system | |
JP2018116004A (en) | Data compression apparatus, control method, program and storage medium | |
JP2021051476A (en) | Object detection device, object detection system, moving object and object detection method | |
CN107832788B (en) | Vehicle distance measuring method based on monocular vision and license plate recognition | |
GB2599939A (en) | Method of updating the existance probability of a track in fusion based on sensor perceived areas | |
CN112835029A (en) | Multi-sensor obstacle detection data fusion method and system for unmanned driving | |
CN113734176B (en) | Environmental perception system, method, vehicle and storage medium for intelligent driving vehicle | |
CN114842166A (en) | Negative obstacle detection method, system, medium and device applied to structured road | |
CN113763262A (en) | Application method of filtering body technology in point cloud data of autonomous mining trucks | |
CN114998860B (en) | A method and device for hierarchical fusion of vehicle-road cooperative perception data | |
CN116071730A (en) | Background object detection method, device and equipment and automatic driving vehicle | |
CN113988197B (en) | Multi-camera and multi-laser radar based combined calibration and target fusion detection method | |
CN117970325A (en) | Looking-around positioning and mapping method and system based on 4D imaging radar and vehicle | |
US20230184954A1 (en) | Systems and methods for determining a drivable surface |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20210416 |