
CN114821526B - Obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud - Google Patents


Info

Publication number
CN114821526B
CN114821526B (application CN202210391431.XA)
Authority
CN
China
Prior art keywords
point cloud
points
point
obstacle
millimeter wave
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210391431.XA
Other languages
Chinese (zh)
Other versions
CN114821526A (en)
Inventor
张博
任凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210391431.XA priority Critical patent/CN114821526B/en
Publication of CN114821526A publication Critical patent/CN114821526A/en
Application granted granted Critical
Publication of CN114821526B publication Critical patent/CN114821526B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G01S13/931 — Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/2923 — Extracting wanted echo-signals based on data belonging to a number of consecutive radar periods
    • G01S7/354 — Extracting wanted echo-signals (non-pulse systems)
    • G01S7/41 — Target characterisation using analysis of the echo signal; target signature; target cross-section
    • G01S7/414 — Discriminating targets with respect to background clutter
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V10/22 — Image preprocessing by selection of a specific region containing or referencing a pattern
    • G06V10/762 — Image or video recognition using pattern recognition or machine learning, using clustering
    • Y02T10/40 — Engine management systems (climate change mitigation in transportation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses an obstacle 3D bounding box detection method based on a 4D millimeter-wave radar point cloud, comprising the following steps: 1) acquiring 4D millimeter-wave radar point cloud data over Ethernet; 2) preprocessing the point cloud data to improve point cloud quality; 3) clustering the point cloud data in three-dimensional space to identify obstacles from the point cloud; 4) post-processing the clustered point cloud data; 5) computing a three-dimensional bounding box from the clustered point cloud. The detection method detects objects with a clustering algorithm, calculates their heights, and distinguishes point cloud reflected by obstacles from impurity points, so that impurity points and false points are removed; about 80% of the impurity points in the environment can be filtered out, the point cloud reflected from an object's surface profile can be accurately identified, and accurate data are provided for constructing the obstacle's three-dimensional bounding box.

Description

Obstacle 3D bounding box detection method based on 4D millimeter-wave radar point cloud
Technical Field
The invention relates to obstacle detection technology for vehicle driving, and in particular to an obstacle 3D bounding box detection method based on a 4D millimeter-wave radar point cloud.
Background
Electrification (battery, motor, and electronic control), the intelligent cockpit, and autonomous driving represent the trend of future automobiles. An autonomous-driving software system is generally divided into six modules: perception, prediction, high-definition mapping, localization, decision and planning, and control. The perception module mainly covers obstacle and traffic-light recognition, and its principal sensors are the lidar, the millimeter-wave radar, and the camera. Each of these three sensors has its own advantages in different environments.
The camera is low-cost and well suited to target detection and classification, and is therefore widely used in ADAS and autonomous-driving systems; however, it is sensitive to lighting changes and depends on a deep-learning network model and training data set, so there is a possibility of failure. The lidar, as an active sensor, offers high precision and strong three-dimensional sensing capability, but it is expensive and easily affected by weather. The millimeter-wave radar is relatively inexpensive, provides a three-dimensional point cloud and velocity sensing, works in all weather conditions, and is highly reliable, so it is widely used in ADAS and autonomous-driving systems of all levels.
Compared with the traditional 3D millimeter-wave radar, the 4D millimeter-wave radar adds elevation (pitch) detection capability, and the quantity and quality of its output point cloud are greatly improved; its perception capability increasingly approaches that of a lidar, which is why it is also called an imaging radar. High-level ADAS and autonomous-driving systems place higher demands on the perception system. In scenarios such as driving or parking in relatively crowded urban conditions, the system needs accurate three-dimensional bounding box contours of targets, so that the decision and planning module can plan a safer and more reasonable path and better visualization can be provided. Obstacle 3D bounding box detection is therefore becoming an integral part of environmental perception.
In the prior art, CN113222111A (application number 202110355978.X) discloses an all-weather automatic-driving 4D perception method, system, and medium. The method performs extrinsic calibration between a digital camera and a 4D millimeter-wave radar, projects the radar point cloud onto the camera image so that image pixels carry distance information, runs a convolutional network on the new image to obtain a dense depth map, converts the depth map into an encoded point cloud map via the camera intrinsic calibration, feeds the encoded point cloud map into a convolutional network for target detection, maps the detected perception boxes into 3D target boxes in camera coordinates, and attaches velocity information to the 3D boxes from the 4D millimeter-wave point cloud, yielding a new 4D perception result; it greatly improves target perception and still obtains good results in severe environments such as rain, snow, fog, sand, and dust. However, the target box detection in that application uses a neural-network deep-learning method to detect targets and their boxes from the image and point cloud maps. As a deep-learning method, it depends on a network model and training data set, so there is a possibility of failure, and the obstacle's 3D bounding box is detected from the image and the lidar point cloud respectively. An obstacle 3D bounding box detection method based on the millimeter-wave radar point cloud alone is therefore needed.
Disclosure of Invention
To address the deficiencies of the prior art, the invention aims to provide an obstacle 3D bounding box detection method based on a 4D millimeter-wave radar point cloud, which detects objects with a clustering algorithm and obtains an accurate obstacle 3D bounding box, thereby improving the safety of autonomous driving.
The technical scheme of the invention is realized as follows:
The obstacle 3D bounding box detection method based on a 4D millimeter-wave radar point cloud comprises the following steps:
Step 1, acquiring 4D millimeter-wave radar point cloud data over Ethernet;
Step 2, preprocessing the point cloud data;
Step 3, clustering the point cloud data in three-dimensional space to identify obstacles from the point cloud;
Step 4, post-processing the clustered point cloud data;
Step 5, computing a three-dimensional bounding box from the clustered point cloud and obtaining the vertex coordinates of the obstacle's three-dimensional bounding box.
In this way, the method detects objects with a clustering algorithm, calculates their heights, and distinguishes point cloud reflected by obstacles from impurity points, so that impurity points and false points are removed; about 80% of the impurity points in the environment can be filtered out, the point cloud reflected from an object's surface profile can be accurately identified, and accurate data are provided for constructing the obstacle's three-dimensional bounding box.
Further: when correlating the radar point cloud data in step 1, each frame of data acquired from the 4D millimeter-wave radar is first converted to physical values; the range, azimuth, and elevation values are converted into X, Y, and Z coordinates using trigonometric relations and the definition of the vehicle coordinate system, and the coordinates are then transformed into the vehicle coordinate system and stored in a global variable.
Further: the clustering of the point cloud data in step 3 is performed in three-dimensional space, and specifically comprises the following steps:
1) Initializing a container of point objects and storing the point cloud data in the container, where each point object has the following member variables: coordinates X/Y/Z, index ID, visited flag Visited, cluster label ClusterID, core-point flag IsKey, and a neighborhood point list;
2) Setting the clustering parameters: the neighborhood radius R, the minimum number of neighborhood points MinPts, and the data dimension Dimension;
3) Randomly selecting a point from the point cloud, marking its Visited flag as accessed, and counting the number of points PtsNum within its neighborhood radius R; if PtsNum is greater than the threshold MinPts, marking the point's IsKey flag as a core point and assigning it a ClusterID; then traversing all points within the point's neighborhood radius R, marking their Visited flags as accessed and assigning them the same ClusterID, and repeating the operations of steps 2) and 3) on those points until all points in the point cloud have been visited;
4) For each successfully formed cluster, calculating the average Z coordinate of the cluster and storing it in the variable ClassZInfo.
Further: the post-processing of the clustered point cloud data in step 4 comprises deleting from the point cloud the impurity points that were not clustered and the clusters whose height value does not meet a threshold.
Further: the three-dimensional bounding box algorithm based on the clustered point cloud in step 5 comprises the following steps:
1) Projecting the clustered three-dimensional point cloud onto the XY coordinate plane (dimensionality reduction);
2) Computing the boundary key points of the cluster with a convex hull algorithm: first, find the point with the smallest ordinate y among all points and denote it p0; then compute, for every other point, the cosine of the angle between its line to p0 and the x-axis, sort the points by this cosine in descending order, and denote the sorted points p1, p2, p3, and so on; push the lowest point p0 and the first sorted point p1 onto a stack, then, starting from p2, test whether the two points on top of the stack and the current point form a counterclockwise turn; if so, push the current point onto the stack, otherwise pop the top element; finally, the points remaining in the stack are the convex hull boundary key points;
3) Sorting the convex hull boundary key points clockwise around their geometric center, computing the angle between each boundary key point and the coordinate origin, and storing the sorted boundary key points in a container;
4) Using the angles computed in the previous step, determining the boundary key points that lie within the radar's directly illuminated surface of the obstacle point cloud and marking them as valid boundary key points;
5) For each direction defined by two adjacent valid boundary key points, computing the minimum-area rectangle in that direction that contains all boundary key points; outputting the rectangle with the smallest area as the obstacle's two-dimensional bounding box and recording the four vertex coordinates of the rectangle;
6) Assigning the minimum and maximum Z coordinates of the cluster to the four vertex coordinates computed in step 5), thereby obtaining the vertex coordinates of the three-dimensional bounding box and hence the obstacle's three-dimensional bounding box.
Further: the convex hull boundary key points of each cluster are computed with the ConvexHull algorithm.
Further: the data dimension parameter Dimension is set to 2 for the convex hull computation.
Further: for clustering, the neighborhood radius R is set to 2 meters, the minimum number of neighborhood points MinPts to 12, and the data Dimension to 3.
Further: the point cloud preprocessing in step 2 filters out impurity points according to the signal-to-noise ratio and the height value.
In summary, the obstacle 3D bounding box detection method based on a 4D millimeter-wave radar point cloud has the following beneficial effects:
1. The method detects objects with a clustering algorithm, calculates their heights, and distinguishes point cloud reflected by obstacles from impurity points, so that impurity points and false points are removed; about 80% of the impurity points in the environment can be filtered out, the point cloud reflected from an object's surface profile can be accurately identified, and accurate data are provided for constructing the obstacle's three-dimensional bounding box.
2. The method makes full use of the advantages of the imaging radar, such as its large number of point cloud points, high precision, and high coverage of obstacle boundary points, while retaining the ranging strength of the millimeter-wave radar.
3. Compared with deep-learning methods, this method offers good robustness, high reliability, and a small computational load.
Drawings
Fig. 1 is a flowchart of the obstacle 3D bounding box detection method based on a 4D millimeter-wave radar point cloud in the embodiment.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
As shown in fig. 1, the obstacle 3D bounding box detection method based on a 4D millimeter-wave radar point cloud comprises the following steps:
Step 1, acquiring 4D millimeter-wave radar point cloud data over Ethernet;
When correlating the radar point cloud data, according to the requirements of the built-in processing program, each frame of data acquired from the 4D millimeter-wave radar is first converted to physical values; the range, azimuth, and elevation values are converted into X, Y, and Z coordinates using trigonometric relations and the definition of the vehicle coordinate system, and the coordinates are then transformed into the vehicle coordinate system and stored in a global variable.
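As a minimal sketch of this conversion (the angle conventions and the identity default for the sensor-to-vehicle extrinsic matrix are illustrative assumptions, not values from the patent):

```python
import numpy as np

def polar_to_vehicle(rng, azimuth, elevation, sensor_to_vehicle=np.eye(4)):
    """Convert one radar detection (range in meters, azimuth/elevation in
    radians) to X/Y/Z via the trigonometric relations, then transform the
    point into the vehicle frame with a homogeneous extrinsic matrix."""
    x = rng * np.cos(elevation) * np.cos(azimuth)  # forward
    y = rng * np.cos(elevation) * np.sin(azimuth)  # left
    z = rng * np.sin(elevation)                    # up
    p = sensor_to_vehicle @ np.array([x, y, z, 1.0])
    return p[:3]
```

A detection straight ahead at 10 m with zero azimuth and elevation maps to (10, 0, 0) in the sensor frame; a real deployment would pass the calibrated extrinsics instead of the identity matrix.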
Step 2, preprocessing point cloud data;
The point cloud preprocessing consists of filtering out impurity points according to the signal-to-noise ratio and the height value, thereby improving the quality of the point cloud.
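A minimal sketch of such a filter (the field names and threshold values are illustrative assumptions; the patent does not state them):

```python
def prefilter(points, snr_min=10.0, z_min=-0.5, z_max=4.0):
    """Keep only points whose signal-to-noise ratio and height pass the
    thresholds; everything else is treated as an impurity point."""
    return [p for p in points
            if p["snr"] >= snr_min and z_min <= p["z"] <= z_max]
```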
Step 3, clustering the point cloud data in a three-dimensional space, and identifying an obstacle from the point cloud data;
The clustering of the point cloud data is performed in three-dimensional space, and the specific process comprises the following steps:
1) Initializing a container of point objects and storing the point cloud data in the container, where each point object has the following member variables: coordinates X/Y/Z, index ID, visited flag Visited, cluster label ClusterID, core-point flag IsKey, and a neighborhood point list.
2) Setting the clustering parameters: the neighborhood radius R, the minimum number of neighborhood points MinPts, and the data dimension Dimension. Preferably, based on analysis of real-vehicle test data, the neighborhood radius R is set to 2 meters, the minimum number of neighborhood points MinPts to 12, and the data Dimension to 3.
3) Randomly selecting a point from the point cloud, marking its Visited flag as accessed, and counting the number of points PtsNum within its neighborhood radius R; if PtsNum is greater than the threshold MinPts, marking the point's IsKey flag as a core point and assigning it a ClusterID; then traversing all points within the point's neighborhood radius R, marking their Visited flags as accessed and assigning them the same ClusterID, and repeating the operations of steps 2) and 3) on those points until all points in the point cloud have been visited.
4) For each successfully formed cluster, calculating the average Z coordinate of the cluster and storing it in the variable ClassZInfo.
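The four steps above describe a DBSCAN-style region-growing scheme. A compact sketch (brute-force neighbor search for clarity; `-1` marks unclustered impurity points, the `r`/`min_pts` defaults mirror the 2 m / 12 values quoted above, and the patent's strict "greater than MinPts" core test is relaxed to the usual `>=`):

```python
import math

def dbscan(points, r=2.0, min_pts=12):
    """Cluster a list of (x, y, z) points; returns one label per point,
    -1 meaning the point was never reached from a core point."""
    n = len(points)
    labels = [-1] * n
    visited = [False] * n

    def neighbors(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= r]

    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            continue                      # not a core point (may join a cluster later)
        labels[i] = cluster
        queue = list(seeds)
        while queue:                      # grow the cluster outward
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                nb = neighbors(j)
                if len(nb) >= min_pts:    # j is itself a core point
                    queue.extend(nb)
        cluster += 1
    return labels

def cluster_mean_z(points, labels):
    """Average Z coordinate per cluster (the ClassZInfo of step 4)."""
    acc = {}
    for (_, _, z), c in zip(points, labels):
        if c >= 0:
            s, k = acc.get(c, (0.0, 0))
            acc[c] = (s + z, k + 1)
    return {c: s / k for c, (s, k) in acc.items()}
```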
Step 4, clustering point cloud data post-processing;
The post-processing of the clustered point cloud data consists of deleting from the point cloud the impurity points that were not successfully clustered and the clusters whose height value does not meet the threshold.
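A sketch of this post-processing step, assuming the same (x, y, z) point and per-point label representation as the clustering step and an illustrative height threshold (the patent does not state its value):

```python
from collections import defaultdict

def postprocess(points, labels, z_threshold=0.3):
    """Remove unclustered impurity points (label -1) and whole clusters
    whose average height does not meet the threshold."""
    zsum = defaultdict(float)
    count = defaultdict(int)
    for (_, _, z), c in zip(points, labels):
        if c >= 0:
            zsum[c] += z
            count[c] += 1
    keep = {c for c in count if zsum[c] / count[c] >= z_threshold}
    return [(p, c) for p, c in zip(points, labels) if c in keep]
```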
Step 5, computing a three-dimensional bounding box from the clustered point cloud;
The three-dimensional bounding box algorithm based on the clustered point cloud comprises the following steps:
1) The clustered three-dimensional point cloud is projected to an XY coordinate plane in a dimension reducing mode;
2) Computing the boundary key points of each cluster with a convex hull algorithm; the ConvexHull algorithm is used to compute the convex hull boundary key points of each cluster. Preferably, the data Dimension parameter is set to 2 here. First, find the point with the smallest ordinate y among all points and denote it p0; then compute, for every other point, the cosine of the angle between its line to p0 and the x-axis, sort the points by this cosine in descending order, and denote the sorted points p1, p2, p3, and so on; push the lowest point p0 and the first sorted point p1 onto a stack, then, starting from p2, test whether the two points on top of the stack and the current point form a counterclockwise turn; if so, push the current point onto the stack, otherwise pop the top element; finally, the points remaining in the stack are the convex hull boundary key points;
3) Sorting the convex hull boundary key points clockwise around their geometric center, computing the angle between each boundary key point and the coordinate origin, and storing the sorted boundary key points in a container;
4) Using the angles computed in the previous step, determining the boundary key points that lie within the radar's directly illuminated surface of the obstacle point cloud and marking them as valid boundary key points;
5) For each direction defined by two adjacent valid boundary key points, computing the minimum-area rectangle in that direction that contains all boundary key points; outputting the rectangle with the smallest area as the obstacle's two-dimensional bounding box and recording the four vertex coordinates of the rectangle;
6) Assigning the minimum and maximum Z coordinates of the cluster to the four vertex coordinates computed in step 5), thereby obtaining the vertex coordinates of the three-dimensional bounding box and hence the obstacle's three-dimensional bounding box.
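The six steps above can be sketched end-to-end. Two simplifications are assumptions of this sketch, not the patent's exact procedure: the hull uses the monotone-chain variant rather than the angle-sorted Graham scan described above, and the rectangle search tries every hull edge direction instead of restricting itself to the valid radar-facing edges:

```python
import math

def convex_hull(pts2d):
    """Convex hull of XY projections (monotone-chain variant)."""
    pts = sorted(set(pts2d))
    if len(pts) <= 2:
        return list(pts)
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def min_area_rect(hull):
    """For each hull edge direction, rotate the hull into that frame,
    take the axis-aligned bounds, and keep the smallest-area rectangle."""
    best = None
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        theta = math.atan2(y2 - y1, x2 - x1)
        c, s = math.cos(theta), math.sin(theta)
        xs = [ x*c + y*s for x, y in hull]   # rotate by -theta
        ys = [-x*s + y*c for x, y in hull]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if best is None or area < best[0]:
            corners = [(min(xs), min(ys)), (max(xs), min(ys)),
                       (max(xs), max(ys)), (min(xs), max(ys))]
            # rotate corners back by +theta
            best = (area, [(x*c - y*s, x*s + y*c) for x, y in corners])
    return best[1]

def box3d(points):
    """Full step-5 pipeline on one cluster of (x, y, z) points:
    project to XY, hull, min-area rectangle, extrude by Z extents."""
    rect = min_area_rect(convex_hull([(x, y) for x, y, _ in points]))
    zlo = min(z for _, _, z in points)
    zhi = max(z for _, _, z in points)
    return [(x, y, zlo) for x, y in rect] + [(x, y, zhi) for x, y in rect]
```

For an axis-aligned unit square of points with one elevated interior point, `box3d` returns the eight vertices of the unit box extruded between the cluster's minimum and maximum Z.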
With this method, objects are detected by the clustering algorithm, their heights are calculated, and the point cloud reflected by obstacles is distinguished from impurity points, so that impurity points and false points are removed; about 80% of the impurity points in the environment can be filtered out, the point cloud reflected from an object's surface profile can be accurately identified, and accurate data are provided for constructing the obstacle's three-dimensional bounding box.
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the invention and do not limit it. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that various other changes and modifications can be made; not all embodiments can be exhaustively enumerated, and obvious variations derived from the technical solution of the invention remain within its scope of protection.

Claims (8)

1. An obstacle 3D bounding box detection method based on a 4D millimeter-wave radar point cloud, characterized by comprising the following steps:
Step 1, acquiring 4D millimeter-wave radar point cloud data over Ethernet;
Step 2, preprocessing the point cloud data;
Step 3, clustering the point cloud data in three-dimensional space to identify obstacles from the point cloud;
Step 4, post-processing the clustered point cloud data;
Step 5, computing a three-dimensional bounding box from the clustered point cloud and obtaining the vertex coordinates of the obstacle's three-dimensional bounding box;
wherein the three-dimensional bounding box algorithm based on the clustered point cloud comprises the following steps:
1) Projecting the clustered three-dimensional point cloud onto the XY coordinate plane (dimensionality reduction);
2) Computing the boundary key points of the cluster with a convex hull algorithm: first, find the point with the smallest ordinate y among all points and denote it p0; then compute, for every other point, the cosine of the angle between its line to p0 and the x-axis, sort the points by this cosine in descending order, and denote the sorted points p1, p2, p3, and so on; push the lowest point p0 and the first sorted point p1 onto a stack, then, starting from p2, test whether the two points on top of the stack and the current point form a counterclockwise turn; if so, push the current point onto the stack, otherwise pop the top element; finally, the points remaining in the stack are the convex hull boundary key points;
3) Sorting the convex hull boundary key points clockwise around their geometric center, computing the angle between each boundary key point and the coordinate origin, and storing the sorted boundary key points in a container;
4) Using the angles computed in the previous step, determining the boundary key points that lie within the radar's directly illuminated surface of the obstacle point cloud and marking them as valid boundary key points;
5) For each direction defined by two adjacent valid boundary key points, computing the minimum-area rectangle in that direction that contains all boundary key points; outputting the rectangle with the smallest area as the obstacle's two-dimensional bounding box and recording the four vertex coordinates of the rectangle;
6) Assigning the minimum and maximum Z coordinates of the cluster to the four vertex coordinates computed in step 5), thereby obtaining the vertex coordinates of the three-dimensional bounding box and hence the obstacle's three-dimensional bounding box.
2. The obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud according to claim 1, characterized in that: in step 1, when the radar point cloud data is received, each frame of data acquired by the 4D millimeter wave radar is converted to physical values; the range, azimuth angle and pitch angle are converted to X, Y and Z coordinates according to the definition of the vehicle coordinate system and the trigonometric relations, and the coordinate values are then transformed into the vehicle coordinate system and stored in a global variable.
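The range/azimuth/pitch conversion of claim 2 follows the usual spherical-to-Cartesian trigonometric relations. A minimal sketch, in which the function name and the axis convention (X forward, Y left, Z up; azimuth measured from X in the XY plane, pitch from the XY plane) are assumptions, not taken from the patent:

```python
import math

def radar_point_to_xyz(r, azimuth_deg, pitch_deg):
    """Convert one 4D-radar measurement (range, azimuth, pitch) to
    Cartesian X/Y/Z coordinates via spherical trigonometry."""
    az = math.radians(azimuth_deg)
    el = math.radians(pitch_deg)
    x = r * math.cos(el) * math.cos(az)  # forward
    y = r * math.cos(el) * math.sin(az)  # lateral
    z = r * math.sin(el)                 # height
    return x, y, z
```

A further rigid transform (radar mounting pose) would map these sensor-frame coordinates into the vehicle coordinate system, as the claim describes.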
3. The obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud according to claim 2, characterized in that the clustering of the point cloud data in step 3 is performed in three-dimensional space and specifically comprises the following steps:
1) Initializing a container of point objects and storing the point cloud data in it, where each point object contains the following member variables: coordinates X/Y/Z, index ID, visited flag Visited, cluster label ClusterID, core point flag IsKey, and a neighborhood point list;
2) Setting the clustering parameters, namely the neighborhood radius R, the minimum number of neighborhood points MinPts, and the data Dimension;
3) Randomly selecting a point from the point cloud data, marking its Visited attribute as accessed, and counting the number of points PtsNum within its neighborhood radius R; if PtsNum is greater than the threshold MinPts, marking the point's IsKey attribute as a core point and assigning it a cluster label ClusterID; at the same time, traversing all points within the neighborhood radius R of this point, marking their Visited attributes as accessed and assigning them the same ClusterID; repeating the operations of steps 2) and 3) on those points until all points in the point cloud data have been visited;
4) For each successfully formed point cloud cluster, calculating the average Z coordinate of the cluster and storing it in the variable ClassZInfo.
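Steps 1)-3) of claim 3 describe DBSCAN-style density clustering. The sketch below reuses the claimed parameter names (R as `r`, MinPts as `min_pts`) but is a simplification of my own: neighbours are found by brute force, and noise points keep the label -1 instead of carrying explicit Visited/IsKey objects.

```python
import math
from collections import deque

def dbscan(points, r=2.0, min_pts=12):
    """Density clustering: a point with at least min_pts neighbours within
    radius r is a core point and grows a cluster; points unreachable from
    any core point remain labelled -1 (noise)."""
    n = len(points)

    def neighbours(i):
        # brute-force range query; a k-d tree would replace this in practice
        return [j for j in range(n) if math.dist(points[i], points[j]) <= r]

    labels = [-1] * n           # -1 = not yet assigned to any cluster
    visited = [False] * n       # the per-point Visited flag
    cid = 0                     # running ClusterID
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            continue            # not a core point (IsKey stays false)
        labels[i] = cid
        queue = deque(nbrs)
        while queue:            # expand the cluster through core points
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cid
            if not visited[j]:
                visited[j] = True
                jn = neighbours(j)
                if len(jn) >= min_pts:
                    queue.extend(jn)
        cid += 1
    return labels
```

Averaging the Z coordinate of the points sharing each label would then produce the per-cluster ClassZInfo value of step 4).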
4. The obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud according to any one of claims 1-3, characterized in that: the post-processing of the clustered point cloud data in step 4 comprises deleting, from the point cloud data, the noise points that were not assigned to any cluster and the clusters whose height value does not meet a threshold.
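The post-processing of claim 4 amounts to two filters: drop unclustered noise and drop whole clusters failing a height check. A minimal sketch, where the function signature and the 0.3 m threshold are illustrative assumptions (the patent does not state a value):

```python
def postprocess(points, labels, cluster_z, z_min=0.3):
    """Keep only points whose cluster exists and whose cluster average
    height (cluster_z, cf. ClassZInfo) meets the threshold; points with
    label -1 (no cluster) are discarded as noise."""
    keep = {cid for cid, z in cluster_z.items() if z >= z_min}
    return [(p, l) for p, l in zip(points, labels) if l in keep]
```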
5. The obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud according to any one of claims 1-3, characterized in that: the boundary key points of the clustered point clouds are calculated by the convex hull algorithm, the ConvexHull algorithm being used to compute the convex hull boundary key points of each clustered point cloud.
6. The obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud, characterized in that: the data Dimension parameter is set to 2.
7. The obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud according to claim 3, characterized in that: the neighborhood radius R is set to 2 meters, the minimum number of neighborhood points MinPts to 12, and the data Dimension to 3.
8. The obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud, characterized in that: the preprocessing of the point cloud data in step 2 comprises filtering out spurious points according to the signal-to-noise ratio and the height value.
CN202210391431.XA 2022-04-14 2022-04-14 Obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud Active CN114821526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210391431.XA CN114821526B (en) 2022-04-14 2022-04-14 Obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud

Publications (2)

Publication Number Publication Date
CN114821526A CN114821526A (en) 2022-07-29
CN114821526B true CN114821526B (en) 2024-11-15

Family

ID=82536726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210391431.XA Active CN114821526B (en) 2022-04-14 2022-04-14 Obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud

Country Status (1)

Country Link
CN (1) CN114821526B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115616578A (en) * 2022-12-05 2023-01-17 成都航空职业技术学院 Radar detection method and device for unmanned aerial vehicle
CN116524219A (en) * 2023-01-16 2023-08-01 西北工业大学 A Method of Obstacle Detection Based on LiDAR Point Cloud Clustering
CN115840227B (en) * 2023-02-27 2023-07-04 福思(杭州)智能科技有限公司 Road edge detection method and device
CN116381665B (en) * 2023-04-24 2023-11-14 中国人民解放军国防科技大学 Method and system for positioning trapped person based on four-dimensional biological radar

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537316A (en) * 2021-06-30 2021-10-22 南京理工大学 Vehicle detection method based on 4D millimeter wave radar point cloud
CN113536959A (en) * 2021-06-23 2021-10-22 复旦大学 Dynamic obstacle detection method based on stereoscopic vision

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6903270B2 (en) * 2017-06-29 2021-07-14 株式会社Nsテクノロジーズ Electronic component transfer device and electronic component inspection device
CN111337939B (en) * 2018-12-19 2024-07-19 上海蔚来汽车有限公司 Method and device for estimating outer frame of rectangular object
CN112949542A (en) * 2021-03-17 2021-06-11 哈尔滨理工大学 Wrist division line determining method based on convex hull detection

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536959A (en) * 2021-06-23 2021-10-22 复旦大学 Dynamic obstacle detection method based on stereoscopic vision
CN113537316A (en) * 2021-06-30 2021-10-22 南京理工大学 Vehicle detection method based on 4D millimeter wave radar point cloud

Also Published As

Publication number Publication date
CN114821526A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN112101092B (en) Autonomous driving environment perception method and system
CN114821526B (en) Obstacle 3D bounding box detection method based on 4D millimeter wave radar point cloud
CN109143207B (en) Laser radar internal reference precision verification method, device, equipment and medium
CN111797734B (en) Vehicle point cloud data processing method, device, equipment and storage medium
KR102195164B1 (en) System and method for multiple object detection using multi-LiDAR
CN111192295A (en) Target detection and tracking method, related device and computer readable storage medium
CN108828621A (en) Obstacle detection and road surface partitioning algorithm based on three-dimensional laser radar
CN108460416A (en) A kind of structured road feasible zone extracting method based on three-dimensional laser radar
CN106842231A (en) A kind of road edge identification and tracking
CN106199558A (en) Barrier method for quick
WO2021207954A1 (en) Target identification method and device
CN105404844A (en) Road boundary detection method based on multi-line laser radar
KR102681992B1 (en) Single stage 3-Dimension multi-object detecting apparatus and method for autonomous driving
CN114724110A (en) Target detection method and device
CN113378647B (en) Real-time track obstacle detection method based on three-dimensional point cloud
CN115273018A (en) Obstacle identification method and device and electronic equipment
CN113743171A (en) Target detection method and device
CN115205803A (en) Automatic driving environment sensing method, medium and vehicle
CN114764885A (en) Obstacle detection method and device, computer-readable storage medium and processor
US20230258813A1 (en) LiDAR Free Space Data Generator and LiDAR Signal Processing Method Using Multi-Modal Noise Filtering Scheme
CN115390050A (en) Calibration method, device and equipment of vehicle-mounted laser radar
CN114488026A (en) Underground parking garage passable space detection method based on 4D millimeter wave radar
CN114089376A (en) Single laser radar-based negative obstacle detection method
Bernardi et al. High integrity lane-level occupancy estimation of road obstacles through LiDAR and HD map data fusion
Liu et al. Target detection from 3D point-cloud using Gaussian function and CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant