
CN119043347B - Self-built map method based on 4D millimeter wave imaging radar and ground landmark semantics - Google Patents


Info

Publication number
CN119043347B
CN119043347B (application CN202411535243A)
Authority
CN
China
Prior art keywords
radar
semantic
time
vehicle
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411535243.5A
Other languages
Chinese (zh)
Other versions
CN119043347A (en)
Inventor
赵映重
李琛玮
陆新飞
张显宏
王海涛
薛旦
史颂华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Geometry Partner Intelligent Driving Co ltd
Original Assignee
Shanghai Geometry Partner Intelligent Driving Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Geometry Partner Intelligent Driving Co ltd filed Critical Shanghai Geometry Partner Intelligent Driving Co ltd
Priority to CN202411535243.5A priority Critical patent/CN119043347B/en
Publication of CN119043347A publication Critical patent/CN119043347A/en
Application granted granted Critical
Publication of CN119043347B publication Critical patent/CN119043347B/en

Classifications

    • G01C21/32 Structuring or formatting of map data
    • G01C21/165 Dead reckoning by integrating acceleration or speed, combined with non-inertial navigation instruments
    • G01C21/3841 Creation or updating of map data from two or more sources, e.g. probe vehicles
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/89 Radar or analogous systems for mapping or imaging
    • G06T17/05 Geographic models
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10016 Video; image sequence
    • G06T2207/10028 Range image; depth image; 3D point clouds


Abstract

The invention relates to a method for realizing a self-built map through a 4D millimeter wave imaging radar and ground landmark semantics for memorized-commute driving. The method comprises: acquiring a full radar point cloud frame and a timestamp for the received radar point clouds; screening and matching the acquired point cloud of ground landmark elements to obtain a ground landmark semantic frame point cloud; performing factor graph optimization using the ego-vehicle's GNSS, inertial measurement unit IMU and wheel speed odometer to estimate the ego-vehicle pose in real time; and performing radar semantic mapping with the obtained radar point cloud frames, ground landmark semantic frames and the estimated ego-vehicle pose queue, thereby generating a self-built map for the current vehicle. The invention also relates to a corresponding device, processor and storage medium. By adopting the method, the device, the processor and the storage medium, the stability and robustness of positioning based on the self-built map during commute driving are improved, bringing users a comfortable intelligent driving experience.

Description

Self-built map method through 4D millimeter wave imaging radar and ground landmark semantics
Technical Field
The invention relates to the field of intelligent driving, in particular to the field of memorized-commute driving, and specifically to a method, a device, a processor and a computer readable storage medium for realizing a self-built map through a 4D millimeter wave imaging radar and ground landmark semantics for memorized-commute driving.
Background
Memorized-commute driving is an intelligent driving function that provides autonomous driving on a pre-memorized (preset) route for the user, and is mainly suited to the high-frequency, fixed-route driving scenes of the daily commute to and from work. Within this function, the real-time pose of the moving vehicle in the environment, i.e. vehicle positioning, is a key and indispensable technology.
In addition, high-precision maps have high production costs, long production cycles and insufficient update freshness; untimely map updates cause positioning errors, leading to unfavorable driving experiences such as the driver having to manually take over control of the vehicle.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a method, a device, a processor and a computer readable storage medium for realizing a self-built map through a 4D millimeter wave imaging radar and ground landmark semantics for memorized-commute driving.
In order to achieve the above purpose, the invention provides a low-cost, easily mass-produced, all-weather self-built map method based on a 4D millimeter wave imaging radar and ground landmark semantics, which is used for generating a self-built map in the memorization stage of the memorized-commute process and serving as a prior map for positioning during autonomous driving. Compared with a high-precision map, the method avoids the disadvantages of poor freshness, complex production and long production cycles, and only needs data collection for updating. The 4D millimeter wave radar mainly covers the map construction of the above-ground environment around the ego vehicle, while the ground landmark semantics are used to construct traffic markings on the ground such as lane lines, arrows and stop lines; the finally generated self-built map fully covers both the ground and above-ground environment information around the ego vehicle, providing a lightweight map with comprehensive information for positioning.
The method for realizing a self-built map through the 4D millimeter wave imaging radar and ground landmark semantics is mainly characterized by comprising the following steps:
(1) Acquiring a full radar point cloud frame and a timestamp for the radar point clouds received by the 4D millimeter wave radars;
(2) Screening and matching semantic frames of the acquired point cloud of ground landmark elements to obtain the corresponding ground landmark semantic frame point cloud;
(3) Performing factor graph optimization using the ego-vehicle's GNSS, inertial measurement unit IMU and wheel speed odometer, thereby estimating the ego-vehicle pose in real time;
(4) Performing radar semantic mapping with the obtained radar point cloud frames, ground landmark semantic frames and the ego-vehicle's estimated pose queue, thereby generating a self-built map for the current vehicle.
Preferably, the step (1) specifically includes:
The radar point clouds respectively from the ego vehicle's front main radar, left front corner radar, right front corner radar, left rear corner radar, right rear corner radar and rear main radar are received through a radar frame construction module; the 6 radar timestamps are synchronized, and according to the extrinsic transformation between each 4D millimeter wave radar and the ego-vehicle coordinates, the 4D millimeter wave radar point clouds are converted into ego-vehicle coordinates, thereby obtaining the fused full radar point cloud frame and timestamp of the six 4D millimeter wave radars.
Preferably, the step (1) performs radar time synchronization processing in the following manner:
assume the currently acquired ego-vehicle point cloud information is P_i = {P_i^1, ..., P_i^6}, where P_i^j = {t_i^j, p_1, ..., p_n}, n is the number of radar points in P_i^j, and P_i^j denotes the point cloud of the j-th radar at time i with timestamp t_i^j, composed of n points p_m = (x_m, y_m, z_m);
at time i, within one radar period, the maximum radar timestamp t_i^k is found among all acquired radar timestamps {t_i^1, ..., t_i^6}, where k is the number of the radar with the maximum timestamp; the other radar timestamps are synchronized to t_i^k, and the time difference is used to compensate the position information of the other radars, specifically:
Δt_j = t_i^k - t_i^j
θ_j = ω_i · Δt_j
Δx_j = v_i · Δt_j · cos(θ_j)
Δy_j = v_i · Δt_j · sin(θ_j)
x'_m = x_m · cos(θ_j) - y_m · sin(θ_j) + Δx_j
y'_m = x_m · sin(θ_j) + y_m · cos(θ_j) + Δy_j
where Δt_j represents the time difference between each radar timestamp t_i^j and the maximum timestamp t_i^k; ω_i and v_i are respectively the angular velocity and the linear velocity of the vehicle chassis at time i; Δx_j and Δy_j are the motion compensation amounts of the radar points in the x and y directions during Δt_j; θ_j is the angle rotated during Δt_j; and x'_m and y'_m are the position coordinates in the x and y directions of each point p_m of the j-th radar after time synchronization alignment.
Preferably, the step (1) further includes performing radar external parameter change processing:
Assume the transformation between each radar and the ego-vehicle coordinates is T_j; the extrinsic transformation is performed as follows:
P'_i^j = T_j · P_i^j
where P'_i^j is the point cloud of the j-th radar at time i after extrinsic conversion;
after the above processing, the radar frame point cloud is F_i = {t_i^k, P'_i}, where t_i^k is the maximum radar timestamp, and
P'_i = P'_i^1 ∪ P'_i^2 ∪ ... ∪ P'_i^6
i.e. P'_i is the concatenation of the radar points of the 6 radars at time i after extrinsic conversion.
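The extrinsic transformation and concatenation can be sketched as follows (an illustrative Python sketch; the 4x4 homogeneous-matrix representation and the names `to_vehicle_frame`/`build_radar_frame` are our assumptions, not from the patent):

```python
import numpy as np

def to_vehicle_frame(T_j, points):
    """Apply a 4x4 radar-to-vehicle extrinsic T_j to an (n, 3) point array."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    return (pts_h @ T_j.T)[:, :3]

def build_radar_frame(extrinsics, clouds):
    """Concatenate per-radar clouds, expressed in vehicle coordinates,
    into one full radar point cloud frame."""
    return np.vstack([to_vehicle_frame(extrinsics[j], clouds[j])
                      for j in sorted(clouds)])
```

A radar whose extrinsic carries a 1 m forward offset contributes points shifted by 1 m along x in the fused frame.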
Preferably, the step (2) specifically includes:
semantic screening: for each point q_m in the input semantic frame point cloud S_i, the following judgment is performed:
s_m ≥ s_thr
where s_thr is the score threshold that determines whether a semantic point is selected, and s_m is the detection confidence of the m-th point in S_i; if s_m ≥ s_thr, the point q_m is selected, otherwise it is deleted from S_i;
semantic matching: let the current time be i and the last time be i-1, let the speed and angular speed obtained from the ego-vehicle chassis at the current time be v_i and ω_i respectively, and let the semantic point cloud at the current time be S_i and at the last time be S_{i-1}; the relative pose transformation T of the current time with respect to the previous time is:
Δt = t_i - t_{i-1}
θ = ω_i · Δt
Δx = v_i · Δt · cos(θ)
Δy = v_i · Δt · sin(θ)
Then
T = [[cos θ, -sin θ, Δx], [sin θ, cos θ, Δy], [0, 0, 1]]
The semantic point cloud at the last moment S_{i-1} is converted to the current moment, obtaining:
S'_{i-1} = T · S_{i-1}
where Δt represents the time difference between the current time and the last time, θ is the angle rotated during Δt, and Δx and Δy are the motion compensation amounts of the semantic frame in the x and y directions during Δt;
for each point q of the semantic point cloud S_i, a nearest-neighbor search is performed in S'_{i-1} with the threshold set to 0.05 m; if a neighbor of q is found in S'_{i-1}, q remains in S_i, otherwise it is removed from S_i, yielding the ground landmark semantic frame point cloud G_i = {t_i^s, S_i}, where t_i^s is the input timestamp of the semantic frame and S_i is the semantic point cloud position information after semantic screening and semantic matching.
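The semantic screening and frame-to-frame matching steps above can be sketched together (an illustrative Python sketch; the brute-force nearest-neighbor search, the default score threshold of 0.5 and the name `screen_and_match` are our assumptions — the patent only fixes the 0.05 m matching threshold):

```python
import math

def screen_and_match(curr, prev, dt, v, w, score_thr=0.5, nn_thr=0.05):
    """Keep current semantic points that pass the score threshold AND have a
    neighbor (within nn_thr metres) in the motion-compensated previous frame.

    curr: [(x, y, score)], prev: [(x, y)]; constant-velocity ego model.
    """
    theta = w * dt
    dx, dy = v * dt * math.cos(theta), v * dt * math.sin(theta)
    c, s = math.cos(theta), math.sin(theta)
    # bring the previous semantic frame into the current vehicle frame
    prev_t = [(c * x - s * y + dx, s * x + c * y + dy) for x, y in prev]
    kept = []
    for x, y, score in curr:
        if score < score_thr:
            continue                       # semantic screening
        if any(math.hypot(x - px, y - py) <= nn_thr for px, py in prev_t):
            kept.append((x, y))            # stable across consecutive frames
    return kept
```

A point with a low confidence score, or one that appears in only a single frame, is dropped; points re-observed within 5 cm of their predicted location survive.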
Preferably, the step (3) specifically includes the following steps:
GNSS absolute constraint: assume the observation of the GNSS at time i is z_i, and the vehicle state estimate at time i is the pose X_i = (R_i, p_i), where R_i is the rotation information of the ego vehicle at time i and p_i is the position of the ego vehicle at time i; then the following constraint condition is to be satisfied at time i:
X_i s.t. p_i = z_i
where s.t. stands for "subject to", representing being constrained;
IMU constraint: assume the current time is i, the acceleration input by the IMU is a_i and the angular velocity is ω_i; then X_i needs to satisfy the following constraints at time i:
p̈_i = R_i · a_i
Ṙ_i = R_i · [ω_i]×
where p̈_i is the 2nd derivative of the translation vector and Ṙ_i is the derivative of the rotation matrix;
wheel speed odometer position constraint: assume the motion of the current time i relative to the previous time i-1 is ΔT_{i-1,i}; then X_i is constrained at time i as follows:
X_i = X_{i-1} · ΔT_{i-1,i}
factor graph optimization: the error e_i of the factor graph is
e_i = X̂_i ⊖ X_i
where X̂_i is the pose observed by the vehicle;
the error equation is:
E = Σ_i e_i^T · Ω_i · e_i
with Ω_i the information matrix of the error; the LM method is used to solve the optimal solution satisfying the above constraints, obtaining the estimate of the current pose of the vehicle, and each pose with its corresponding time (t_i, X_i) is pushed into the ego-vehicle pose queue Q.
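The factor graph idea above — absolute GNSS factors plus relative odometry factors jointly minimized — can be illustrated with a deliberately simplified, position-only linear analogue on a 1-D trajectory. The patent's actual problem is nonlinear over full poses and solved with the Levenberg–Marquardt method; the closed-form least-squares solve and the name `fuse_positions` below are our assumptions for illustration only:

```python
import numpy as np

def fuse_positions(gnss, odom, w_gnss=1.0, w_odom=1.0):
    """Position-only factor-graph analogue on a 1-D trajectory.

    gnss: absolute observations z_i (one per time step).
    odom: relative motions d_i between consecutive steps.
    Minimizes sum w_g*(x_i - z_i)^2 + sum w_o*(x_{i+1} - x_i - d_i)^2.
    """
    n = len(gnss)
    A, b = [], []
    for i, z in enumerate(gnss):              # GNSS absolute factors
        row = np.zeros(n); row[i] = w_gnss
        A.append(row); b.append(w_gnss * z)
    for i, d in enumerate(odom):              # odometry relative factors
        row = np.zeros(n); row[i] = -w_odom; row[i + 1] = w_odom
        A.append(row); b.append(w_odom * d)
    # stack all factors and solve the linear least-squares problem
    return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
```

When GNSS and odometry agree, the fused trajectory reproduces both exactly; when they conflict, the weights trade off absolute accuracy against relative smoothness, which is the role the information matrix plays in the full factor graph.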
Preferably, the step (4) performs key frame time synchronization in the following manner:
From the ego-vehicle pose queue Q, the poses closest in time to the radar frame point cloud F_i and the semantic frame point cloud G_i are respectively found; after they are found, the older pose times are removed from Q, obtaining the radar keyframe K_i^r and the semantic keyframe K_i^s, where:
K_i^r = {X_i, P'_i}, K_i^s = {X_i, S_i}
and P'_i and S_i respectively represent the point clouds of the radar point cloud frame and the semantic point cloud frame under ego-vehicle coordinates at time i.
Preferably, the method performs radar semantic mapping in the following manner:
The radar keyframes K_i^r and semantic keyframes K_i^s obtained after time synchronization are stitched to obtain the self-built map:
M = ∪_{i=1}^{N_r} X_i · P'_i ∪ ∪_{i=1}^{N_s} X_i · S_i
where N_r is the number of frames after radar keyframe time-pose synchronization, and N_s is the number of frames after ground landmark semantic frame time-pose synchronization.
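The keyframe time synchronization and map stitching can be sketched as follows (an illustrative Python sketch; a pose is reduced to a 2-D translation offset and the names `nearest_pose`/`stitch_map` are our assumptions — the real method applies full 6-DoF poses):

```python
import bisect

def nearest_pose(pose_queue, stamp):
    """pose_queue: time-sorted [(t, pose)]; return the pose closest in time."""
    times = [t for t, _ in pose_queue]
    i = bisect.bisect_left(times, stamp)
    candidates = pose_queue[max(0, i - 1):i + 1]   # at most two neighbors
    return min(candidates, key=lambda tp: abs(tp[0] - stamp))[1]

def stitch_map(pose_queue, radar_frames, semantic_frames):
    """Attach the time-nearest pose to each frame and merge all points.

    Frames are [(t, [(x, y), ...])]; a pose here is a 2-D offset (tx, ty).
    """
    out = []
    for t, pts in radar_frames + semantic_frames:
        tx, ty = nearest_pose(pose_queue, t)
        out.extend((x + tx, y + ty) for x, y in pts)   # frame -> map frame
    return out
```

Each radar or semantic keyframe is placed into the map at the pose whose timestamp lies closest to the frame's own timestamp, and the transformed points of all keyframes are concatenated into the self-built map.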
The system for realizing a self-built map for memorized-commute driving through the 4D millimeter wave imaging radar and ground landmark semantics using the above method is mainly characterized in that the system comprises:
a radar frame construction module, used for performing radar time synchronization and radar extrinsic transformation processing on the 6 received millimeter wave radar point clouds, so as to obtain the fused full radar point cloud frame and timestamp of the 6 radars;
a semantic frame construction module, used for performing semantic screening and semantic matching processing on the input ground landmark semantics to obtain the 3-dimensional position information of the semantic elements under ego-vehicle coordinates;
a pose generation module, used for estimating the ego-vehicle pose information in real time through factor graph optimization of the ego-vehicle's GNSS, inertial measurement unit IMU and wheel speed odometer; and
a self-built map generation module, connected with the radar frame construction module, the semantic frame construction module and the pose generation module, used for performing keyframe time synchronization and radar semantic mapping processing, so as to generate the self-built map.
The device for realizing a self-built map through the 4D millimeter wave imaging radar and ground landmark semantics for memorized-commute driving is mainly characterized in that the device comprises:
a processor configured to execute computer-executable instructions;
a memory storing one or more computer-executable instructions which, when executed by the processor, implement the steps of the above method for realizing a self-built map for memorized-commute driving through the 4D millimeter wave imaging radar and ground landmark semantics.
The processor for realizing a self-built map through the 4D millimeter wave imaging radar and ground landmark semantics for memorized-commute driving is mainly characterized in that the processor is configured to execute computer-executable instructions which, when executed by the processor, implement the steps of the above method for realizing a self-built map for memorized-commute driving through the 4D millimeter wave imaging radar and ground landmark semantics.
The computer readable storage medium is mainly characterized in that a computer program is stored thereon, and the computer program can be executed by a processor to implement the steps of the above method for realizing a self-built map for memorized-commute driving through the 4D millimeter wave imaging radar and ground landmark semantics.
The method, device, processor and computer readable storage medium for realizing a self-built map for memorized-commute driving through the 4D millimeter wave imaging radar and ground landmark semantics overcome the problems that lidar is costly and hard to mass-produce, and that high-precision map production is complex, long-cycled and slow to update.
Drawings
FIG. 1 is a flow chart of the method for realizing a self-built map for memorized-commute driving through the 4D millimeter wave imaging radar and ground landmark semantics.
Detailed Description
In order to more clearly describe the technical contents of the present invention, a further description will be made below in connection with specific embodiments.
Before describing in detail embodiments that are in accordance with the present invention, it should be observed that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the technical abbreviations used in the technical scheme are explained:
Radar: 4D millimeter wave imaging radar;
Pose: position and attitude, including x, y, z position information and roll, pitch, yaw attitude information;
GNSS: Global Navigation Satellite System;
IMU: Inertial Measurement Unit;
Self-built map: a map generated by the driving vehicle itself.
Referring to fig. 1, the invention provides a method for realizing a self-built map through a 4D millimeter wave imaging radar and ground landmark semantics for memorized-commute driving, wherein the method comprises the following steps:
(1) Acquiring a full radar point cloud frame and a timestamp for the radar point clouds received by the 4D millimeter wave radars;
(2) Screening and matching semantic frames of the acquired point cloud of ground landmark elements to obtain the corresponding ground landmark semantic frame point cloud;
(3) Performing factor graph optimization using the ego-vehicle's GNSS, inertial measurement unit IMU and wheel speed odometer, thereby estimating the ego-vehicle pose in real time;
(4) Performing radar semantic mapping with the obtained radar point cloud frames, ground landmark semantic frames and the ego-vehicle's estimated pose queue, thereby generating a self-built map for the current vehicle.
As a preferred embodiment of the present invention, the step (1) specifically includes:
The radar point clouds respectively from the ego vehicle's front main radar, left front corner radar, right front corner radar, left rear corner radar, right rear corner radar and rear main radar are received through a radar frame construction module; the 6 radar timestamps are synchronized, and according to the extrinsic transformation between each 4D millimeter wave radar and the ego-vehicle coordinates, the 4D millimeter wave radar point clouds are converted into ego-vehicle coordinates, thereby obtaining the fused full radar point cloud frame and timestamp of the six 4D millimeter wave radars.
As a preferred embodiment of the present invention, the step (1) performs radar time synchronization processing in the following manner:
assume the currently acquired ego-vehicle point cloud information is P_i = {P_i^1, ..., P_i^6}, where P_i^j = {t_i^j, p_1, ..., p_n}, n is the number of radar points in P_i^j, and P_i^j denotes the point cloud of the j-th radar at time i with timestamp t_i^j, composed of n points p_m = (x_m, y_m, z_m);
at time i, within one radar period, the maximum radar timestamp t_i^k is found among all acquired radar timestamps {t_i^1, ..., t_i^6}, where k is the number of the radar with the maximum timestamp; the other radar timestamps are synchronized to t_i^k, and the time difference is used to compensate the position information of the other radars, specifically:
Δt_j = t_i^k - t_i^j
θ_j = ω_i · Δt_j
Δx_j = v_i · Δt_j · cos(θ_j)
Δy_j = v_i · Δt_j · sin(θ_j)
x'_m = x_m · cos(θ_j) - y_m · sin(θ_j) + Δx_j
y'_m = x_m · sin(θ_j) + y_m · cos(θ_j) + Δy_j
where Δt_j represents the time difference between each radar timestamp t_i^j and the maximum timestamp t_i^k; ω_i and v_i are respectively the angular velocity and the linear velocity of the vehicle chassis at time i; Δx_j and Δy_j are the motion compensation amounts of the radar points in the x and y directions during Δt_j; θ_j is the angle rotated during Δt_j; and x'_m and y'_m are the position coordinates in the x and y directions of each point p_m of the j-th radar after time synchronization alignment.
As a preferred embodiment of the present invention, the step (1) further includes performing radar external parameter changing processing:
Assume the transformation between each radar and the ego-vehicle coordinates is T_j; the extrinsic transformation is performed as follows:
P'_i^j = T_j · P_i^j
where P'_i^j is the point cloud of the j-th radar at time i after extrinsic conversion;
after the above processing, the radar frame point cloud is F_i = {t_i^k, P'_i}, where t_i^k is the maximum radar timestamp, and
P'_i = P'_i^1 ∪ P'_i^2 ∪ ... ∪ P'_i^6
i.e. P'_i is the concatenation of the radar points of the 6 radars at time i after extrinsic conversion.
As a preferred embodiment of the present invention, the step (2) specifically includes:
semantic screening: for each point q_m in the input semantic frame point cloud S_i, the following judgment is performed:
s_m ≥ s_thr
where s_thr is the score threshold that determines whether a semantic point is selected, and s_m is the detection confidence of the m-th point in S_i; if s_m ≥ s_thr, the point q_m is selected, otherwise it is deleted from S_i;
semantic matching: let the current time be i and the last time be i-1, let the speed and angular speed obtained from the ego-vehicle chassis at the current time be v_i and ω_i respectively, and let the semantic point cloud at the current time be S_i and at the last time be S_{i-1}; the relative pose transformation T of the current time with respect to the previous time is:
Δt = t_i - t_{i-1}
θ = ω_i · Δt
Δx = v_i · Δt · cos(θ)
Δy = v_i · Δt · sin(θ)
Then
T = [[cos θ, -sin θ, Δx], [sin θ, cos θ, Δy], [0, 0, 1]]
The semantic point cloud at the last moment S_{i-1} is converted to the current moment, obtaining:
S'_{i-1} = T · S_{i-1}
where Δt represents the time difference between the current time and the last time, θ is the angle rotated during Δt, and Δx and Δy are the motion compensation amounts of the semantic frame in the x and y directions during Δt;
for each point q of the semantic point cloud S_i, a nearest-neighbor search is performed in S'_{i-1} with the threshold set to 0.05 m; if a neighbor of q is found in S'_{i-1}, q remains in S_i, otherwise it is removed from S_i, yielding the ground landmark semantic frame point cloud G_i = {t_i^s, S_i}, where t_i^s is the input timestamp of the semantic frame and S_i is the semantic point cloud position information after semantic screening and semantic matching.
As a preferred embodiment of the present invention, the step (3) specifically includes the following processes:
GNSS absolute constraint: assume the observation of the GNSS at time i is z_i, and the vehicle state estimate at time i is the pose X_i = (R_i, p_i), where R_i is the rotation information of the ego vehicle at time i and p_i is the position of the ego vehicle at time i; then the following constraint condition is to be satisfied at time i:
X_i s.t. p_i = z_i
where s.t. stands for "subject to", representing being constrained;
IMU constraint: assume the current time is i, the acceleration input by the IMU is a_i and the angular velocity is ω_i; then X_i needs to satisfy the following constraints at time i:
p̈_i = R_i · a_i
Ṙ_i = R_i · [ω_i]×
where p̈_i is the 2nd derivative of the translation vector and Ṙ_i is the derivative of the rotation matrix;
wheel speed odometer position constraint: assume the motion of the current time i relative to the previous time i-1 is ΔT_{i-1,i}; then X_i is constrained at time i as follows:
X_i = X_{i-1} · ΔT_{i-1,i}
factor graph optimization: the error e_i of the factor graph is
e_i = X̂_i ⊖ X_i
where X̂_i is the pose observed by the vehicle;
the error equation is:
E = Σ_i e_i^T · Ω_i · e_i
with Ω_i the information matrix of the error; the LM method is used to solve the optimal solution satisfying the above constraints, obtaining the estimate of the current pose of the vehicle, and each pose with its corresponding time (t_i, X_i) is pushed into the ego-vehicle pose queue Q.
As a preferred embodiment of the present invention, the step (4) performs key frame time synchronization in the following manner:
From the ego-vehicle pose queue Q, the poses closest in time to the radar frame point cloud F_i and the semantic frame point cloud G_i are respectively found; after they are found, the older pose times are removed from Q, obtaining the radar keyframe K_i^r and the semantic keyframe K_i^s, where:
K_i^r = {X_i, P'_i}, K_i^s = {X_i, S_i}
and P'_i and S_i respectively represent the point clouds of the radar point cloud frame and the semantic point cloud frame under ego-vehicle coordinates at time i.
As a preferred embodiment of the invention, the method performs radar semantic mapping in the following manner:
The radar keyframes K_i^r and semantic keyframes K_i^s obtained after time synchronization are stitched to obtain the self-built map:
M = ∪_{i=1}^{N_r} X_i · P'_i ∪ ∪_{i=1}^{N_s} X_i · S_i
where N_r is the number of frames after radar keyframe time-pose synchronization, and N_s is the number of frames after ground landmark semantic frame time-pose synchronization.
The following further details the individual constituent modules of the present solution:
1. radar frame construction module
The radar frame construction module receives 6 radar point clouds, respectively from the ego vehicle's front main radar, left front corner radar, right front corner radar, left rear corner radar, right rear corner radar and rear main radar. The 6 radar timestamps are synchronized, and according to the extrinsic transformation between each radar and the ego-vehicle coordinates, the radar point clouds are converted into ego-vehicle coordinates, thereby obtaining the fused full radar point cloud frame and timestamp of the 6 radars.
The specific process flow is as follows. Assume the currently acquired point cloud of the j-th radar is P_i^j = {t_i^j, p_1, ..., p_n}, where n is the number of radar points in P_i^j, and P_i^j denotes the point cloud of the j-th radar at time i with timestamp t_i^j, composed of n points p_m = (x_m, y_m, z_m).
A) Radar time synchronization
At time i, within the 100 ms of one radar period, the maximum timestamp t_i^k is found among the radar timestamps {t_i^1, ..., t_i^6} of all radars arriving at the system, where k is the radar number corresponding to the maximum timestamp; taking the radar timestamp t_i^k that reaches the system latest as reference, the other radar timestamps are synchronized onto t_i^k, and the point positions of the other radars are compensated by the time difference; the compensation formulas are as follows:
Δt_j = t_i^k - t_i^j
θ_j = ω_i · Δt_j
Δx_j = v_i · Δt_j · cos(θ_j)
Δy_j = v_i · Δt_j · sin(θ_j)
x'_m = x_m · cos(θ_j) - y_m · sin(θ_j) + Δx_j
y'_m = x_m · sin(θ_j) + y_m · cos(θ_j) + Δy_j
where Δt_j represents the time difference between each radar timestamp t_i^j and the maximum timestamp t_i^k; ω_i and v_i are respectively the angular velocity and the linear velocity of the vehicle chassis at time i; Δx_j and Δy_j are the motion compensation amounts of the radar points in the x and y directions during Δt_j; θ_j is the angle rotated during Δt_j; and x'_m and y'_m are the position coordinates in the x and y directions of each point p_m of the j-th radar after time synchronization alignment.
B) Radar external parameter transformation
Since the radar points obtained in a) are expressed in each radar's own coordinate system, they are uniformly transformed into the vehicle coordinate frame whose origin is the center of the vehicle's rear axle. Assume the extrinsic transformation between the j-th radar and the vehicle coordinates is T_j; the points are transformed according to the following formula:
Q_i^j = T_j · P_i^j ;
where Q_i^j is the radar point cloud obtained by applying the extrinsic conversion T_j to the point cloud P_i^j of the j-th radar at time i.
Through the processing of these 2 steps, the radar frame point cloud at time i is obtained, stamped with the maximum timestamp found during time synchronization, as the splicing of all 6 radars:
F_i = { Q_i^1, Q_i^2, …, Q_i^6 } ;
that is, the radar frame F_i is the splicing result of the radar points Q_i^j of the 6 radars at time i after extrinsic conversion.
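The extrinsic conversion and splicing of the 6 radars can be sketched as follows (a minimal NumPy version in the plane; the homogeneous-matrix form of the extrinsic transformation and all names are assumptions for illustration):

```python
import numpy as np

def fuse_radars(point_sets, extrinsics):
    """Transform each radar's points into the ego-vehicle frame (rear-axle
    origin) and concatenate them into one full radar frame (sketch).

    point_sets: list of (N_j, 2) arrays, one per radar, in radar coordinates
    extrinsics: list of 3x3 homogeneous transforms T_j (radar -> vehicle)
    """
    fused = []
    for pts, T in zip(point_sets, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
        fused.append((homo @ T.T)[:, :2])                # apply T, drop the 1s
    return np.vstack(fused)                              # splice all radars
```

In the real system the transforms would be 4x4 (3-D points), but the splicing logic is the same.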
2. Semantic frame construction module
The semantic frame construction module takes as input the point cloud of ground-marker elements (lane lines, arrows, stop lines, and the like). The semantic point cloud comes from the image detection results of the vehicle's 8-megapixel forward-looking camera, not from a bird's-eye-view instance segmentation stitched from raw fisheye images. Concretely, the pixels of ground elements detected within a certain range of the image are sampled every 10 pixels; these sampled ground elements carry no semantic labels and are treated uniformly as a point cloud. Each sampled pixel is then projected into the vehicle coordinate frame using the camera intrinsics and extrinsics, yielding the 3-dimensional position of the semantic element under vehicle coordinates. Each semantic point thus has position coordinates along the x and y axes of the vehicle together with a detection confidence score, and the semantic frame at time i carries a timestamp and the point cloud position information.
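The projection of a sampled ground-element pixel into vehicle coordinates via the camera intrinsics and extrinsics can be sketched as follows, assuming a flat road plane (z = 0 in the vehicle frame); matrix names and the pinhole model are illustrative assumptions, not details given by the source:

```python
import numpy as np

def pixel_to_vehicle_ground(u, v, K, T_vc):
    """Back-project an image pixel of a ground marking onto the road plane
    (z = 0) in vehicle coordinates (sketch, flat-ground assumption).

    u, v: pixel coordinates
    K:    3x3 camera intrinsic matrix
    T_vc: 4x4 camera-to-vehicle extrinsic transform
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    R, t = T_vc[:3, :3], T_vc[:3, 3]
    ray_veh = R @ ray_cam                               # ray in vehicle frame
    origin = t                                          # camera center in vehicle frame
    s = -origin[2] / ray_veh[2]                         # scale so the ray hits z = 0
    return origin + s * ray_veh                         # 3-D point on the ground
```

The flat-ground intersection supplies the missing depth that a single camera cannot observe directly.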
Because the detection confidence scores of the ground-marker semantic points differ, points with lower scores are filtered out by a semantic screening module. To ensure that the ground-marker semantics exist stably across the preceding and following semantic frames, a frame-to-frame semantic matching module retains only the stably observed semantic points, which are output to the self-built map module as the point cloud of the current semantic frame.
A) Semantic screening: each point in the input semantic frame point cloud is judged as follows:
s_m ≥ s_thr ;
where s_thr is the scoring threshold that determines whether a semantic point is selected and s_m is the detection confidence of the m-th point. If the condition holds, the point is selected; otherwise it is deleted from the semantic frame.
B) Semantic matching: let the current time be t_i and the previous time be t_{i−1}, and let v and ω be the speed and angular velocity obtained from the vehicle chassis at the current moment. Given the semantic point cloud of the current moment and that of the previous moment, the relative pose transformation of the current moment with respect to the previous moment is computed as:
Δt = t_i − t_{i−1} ;
Δθ = ω · Δt ;
Δx = v · Δt · cos(Δθ) ;
Δy = v · Δt · sin(Δθ) ;
then the relative pose transformation is assembled from the rotation Δθ and the translation (Δx, Δy), and the semantic point cloud of the previous moment is converted into the current moment with it.
For each point of the current semantic frame, a nearest-neighbor search is performed in the converted previous-frame point cloud with the threshold set to 0.05 m. If a neighbor point within the threshold is found, the point remains in the current frame; otherwise it is deleted. Through the semantic screening and semantic matching modules, the ground-marker semantic frame point cloud is obtained; its timestamp is the timestamp input with the semantic frame, unchanged, and its position information is the semantic point cloud position information after semantic screening and semantic matching.
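The frame-to-frame matching step can be sketched as follows (a planar Python version with brute-force nearest-neighbour search; in practice a KD-tree would be used, and all names and the constant-velocity motion model are illustrative assumptions):

```python
import math

def match_semantic_frames(prev_pts, curr_pts, v, omega, dt, radius=0.05):
    """Keep only current-frame semantic points re-observed near a previous-
    frame point, after compensating ego motion (sketch).

    prev_pts, curr_pts: lists of (x, y) in vehicle coordinates
    v, omega: chassis speed (m/s) and yaw rate (rad/s)
    dt:       time between the two frames (s)
    radius:   nearest-neighbour acceptance threshold in metres
    """
    dth = omega * dt
    dx, dy = v * dt * math.cos(dth), v * dt * math.sin(dth)
    c, s = math.cos(dth), math.sin(dth)
    # express last frame's points in the current vehicle frame by undoing
    # the ego vehicle's forward motion (dx, dy) and rotation dth
    prev_in_curr = [(c * (x - dx) + s * (y - dy),
                     -s * (x - dx) + c * (y - dy)) for x, y in prev_pts]
    kept = []
    for px, py in curr_pts:
        if any(math.hypot(px - qx, py - qy) <= radius
               for qx, qy in prev_in_curr):
            kept.append((px, py))
    return kept
```

A lane-line point seen 1 m ahead in the last frame is expected about 0.9 m ahead after the car advances 0.1 m; only current points landing within 0.05 m of such predictions survive.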
3. Pose generation module
The vehicle pose is estimated in real time using the vehicle sensors: GNSS, IMU, and wheel-speed odometer. GNSS provides a global position constraint on the vehicle, the IMU provides constraints on the vehicle's acceleration and angular velocity, and the wheel-speed odometer provides a relative-position constraint. A factor graph is constructed from these 3 types of constraints and nonlinear optimization is carried out to solve the current vehicle pose, which consists of a real rotation matrix R and a real translation vector p.
A) GNSS absolute constraint: assume the observed value of the GNSS at time i is z_i and the estimated vehicle state at time i is the pose (R_i, p_i), where R_i is the rotation information and p_i the position of the vehicle at time i. The following constraint condition is to be satisfied at time i:
p_i = z_i ;
b) IMU constraint: assume the current moment is i, the acceleration input by the IMU is a_i, and the angular velocity is ω_i. The pose then needs to satisfy the following constraints at time i:
p̈_i = R_i · a_i ;
Ṙ_i = R_i · [ω_i]_× ;
where p̈_i is the 2nd derivative of the translation vector, Ṙ_i is the derivative of the rotation matrix, and [ω_i]_× is the skew-symmetric matrix of ω_i.
C) Wheel-speed odometer position constraint: assume the motion of the current instant i relative to the previous instant i−1 is Δp_{i,i−1}. The pose then needs to meet the following position constraint at time i:
p_i = p_{i−1} + R_{i−1} · Δp_{i,i−1} ;
d) Factor graph optimization, error of factor graph Is that
;
Wherein, The pose observed by the vehicle is mainly measured by GNSS. Error equation
;
Is an information matrix of errors, typically a 10-6 unit diagonal matrix.
The optimal solution satisfying the constraint conditions is solved using the LM (Levenberg-Marquardt) method. The estimate of the current vehicle pose is thus obtained, and each pose together with its corresponding time is entered into the ego-vehicle pose queue.
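The core idea of the factor-graph step, combining an absolute and a relative constraint in a weighted least-squares problem, can be illustrated with a deliberately reduced two-factor position example (the actual method stacks GNSS, IMU, and odometry factors over many poses and solves with LM; names and weights are assumptions):

```python
import numpy as np

def fuse_pose(gnss_pos, odom_pos, w_gnss=1.0, w_odom=1.0):
    """Fuse a GNSS absolute-position factor with a wheel-odometry prediction
    by minimising the weighted sum of squared residuals (two-factor sketch).

    gnss_pos, odom_pos: 2-vectors (x, y) observed / predicted for time i
    w_gnss, w_odom:     information weights (inverse variances)
    """
    z = np.asarray(gnss_pos, float)
    p = np.asarray(odom_pos, float)
    # d/dx [ w_g*||x - z||^2 + w_o*||x - p||^2 ] = 0  =>  weighted mean
    return (w_gnss * z + w_odom * p) / (w_gnss + w_odom)
```

With equal weights the estimate sits midway between the two factors; increasing the GNSS information weight pulls it toward the GNSS fix, which is exactly the role the information matrix Ω plays in the full error equation.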
4. Self-built map construction module
Radar-semantic mapping is carried out using the radar point cloud frames, the ground-marker semantic frames, and the ego-vehicle's estimated pose queue, thereby generating the self-built map. As shown in FIG. 2, this mainly comprises the steps of key frame time synchronization and radar-semantic map construction.
A) Key frame time synchronization: from the ego-vehicle's estimated pose queue, the poses closest in time to the radar frames and to the semantic frames are found, respectively. That is, for each radar frame time, the pose queue is traversed to find the pose whose time is nearest to it, and for each semantic frame time, the pose queue is likewise traversed to find the pose whose time is nearest. Once found, queue entries with older pose times are deleted. Time synchronization thus yields the radar key frames and the semantic key frames, each pairing a synchronized pose with, respectively, the radar point cloud frame or the semantic point cloud frame under the vehicle coordinates at time i.
B) Radar-semantic mapping: the radar key frames and semantic key frames obtained after time synchronization are spliced, each key frame being transformed by its synchronized pose, to obtain the self-built map:
M = ⋃_{i=1}^{N_r} (radar key frame i) ∪ ⋃_{i=1}^{N_s} (semantic key frame i) ;
where N_r is the number of radar key frames after time-pose synchronization and N_s is the number of ground-marker semantic frames after time-pose synchronization.
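The two steps of the self-built map module, nearest-in-time pose lookup and pose-transformed splicing, can be sketched as follows (a planar NumPy version; the 2-D pose representation and all names are illustrative assumptions):

```python
import numpy as np

def accumulate_keyframes(frames, pose_queue):
    """Synchronise each frame with its nearest-in-time ego pose, transform
    the frame into the map frame, and splice everything together (sketch).

    frames:     list of (t, pts) with pts an (N, 2) array in vehicle coords
    pose_queue: list of (t, R, p) with R a 2x2 rotation and p a 2-vector
    """
    chunks = []
    for t_f, pts in frames:
        # key-frame time synchronisation: pick the pose nearest in time
        _, R, p = min(pose_queue, key=lambda q: abs(q[0] - t_f))
        chunks.append(pts @ R.T + p)       # vehicle frame -> map frame
    return np.vstack(chunks) if chunks else np.empty((0, 2))
```

Running it over both the radar key frames and the semantic key frames and stacking the two results reproduces the splicing described above.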
The device for realizing a self-built map through the 4D millimeter wave imaging radar and ground-marker semantics for the memory-commuting vehicle comprises:
a processor configured to execute computer-executable instructions;
The memory stores one or more computer executable instructions which, when executed by the processor, implement the steps of the method for implementing self-built maps for memory commuter vehicles through the 4D millimeter wave imaging radar and the ground sign semantics.
The processor is configured to execute computer executable instructions, and when the computer executable instructions are executed by the processor, the method for realizing the self-built map by the memory commuter through the 4D millimeter wave imaging radar and the ground sign semantics is realized.
The computer readable storage medium has stored thereon a computer program executable by a processor to perform the steps of the above method for realizing a self-built map for memory-commuting vehicles through the 4D millimeter wave imaging radar and ground-marker semantics.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process. Further implementations are included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution device.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "examples," "specific examples," or "embodiments," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.
The method, device, processor, and computer readable storage medium thereof for realizing a self-built map for the memory-commuting vehicle through the 4D millimeter wave imaging radar and ground-marker semantics overcome the problems that lidar is costly and difficult to mass-produce, and that mapping with high-precision maps has a long production cycle and slow updates.
In this specification, the invention has been described with reference to specific embodiments thereof. It will be apparent that various modifications and variations can be made without departing from the spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (10)

1. A method for realizing a self-built map by using 4D millimeter wave imaging radar and ground sign semantics for memory-commuting driving, characterized by comprising the following steps:
(1) Acquiring a full radar point cloud frame and a time stamp aiming at the radar point cloud received by the 4D millimeter wave radar;
(2) Screening and matching the semantic frames of the acquired point clouds of the ground mark elements to obtain corresponding point clouds of the ground mark semantic frames;
(3) Performing factor graph optimization processing by using the vehicle sensors GNSS, inertial measurement unit IMU, and wheel speed odometer, so as to estimate the vehicle pose in real time;
(4) Carrying out radar semantic mapping by using the obtained radar point cloud frame, the ground sign semantic frame and the pose queue estimated by the vehicle, thereby generating a self-built map for the current vehicle;
the step (4) performs key frame time synchronization according to the following mode:
from the ego-vehicle pose queue, the poses nearest in time to the radar frame point clouds and to the semantic frame point clouds are respectively found, wherein each entry of the pose queue is the estimated vehicle state pose at time i, comprising the rotation information and the position of the vehicle at time i together with the time corresponding to the pose; the radar frame point cloud, stamped with the maximum radar timestamp, is the splicing result of the radar points obtained by extrinsic conversion of the 6 radars, namely the vehicle's front main radar, left-front corner radar, right-front corner radar, left-rear corner radar, right-rear corner radar, and rear main radar, at time i; and the semantic frame point cloud carries the timestamp input with the semantic frame and the semantic point cloud position information after semantic screening and semantic matching;
after finding the poses meeting the requirements, entries with older pose times are deleted from the pose queue, thereby obtaining the radar key frames and the semantic key frames, wherein the radar key frame and the semantic key frame respectively pair the synchronized pose with the point clouds of the radar point cloud frame and of the semantic point cloud frame under the vehicle coordinates at time i;
the method carries out radar-semantic mapping in the following manner: the radar key frames and semantic key frames obtained after time synchronization are spliced to obtain the self-built map, wherein the spliced map accumulates the radar key frames, whose count is the number of frames after radar key-frame time-pose synchronization, and the ground-marker semantic frames, whose count is the number of frames after semantic-frame time-pose synchronization.
2. The method for realizing self-built map by using 4D millimeter wave imaging radar and ground sign semantics for memory commute as claimed in claim 1, wherein the step (1) is specifically:
The radar point clouds respectively from the vehicle's front main radar, left-front corner radar, right-front corner radar, left-rear corner radar, right-rear corner radar, and rear main radar are received through the radar frame construction module; the 6 radar timestamps are synchronized and, according to the extrinsic transformation between each 4D millimeter wave radar and the vehicle coordinates, the 4D millimeter wave radar point clouds are converted into vehicle coordinates, thereby obtaining the fused full radar point cloud frame and timestamp of the 6 4D millimeter wave radars.
3. The method for realizing self-built map by using 4D millimeter wave imaging radar and ground sign semantics for memory commute as claimed in claim 2, wherein the step (1) performs radar time synchronization processing according to the following manner:
assume that the currently acquired vehicle point cloud information is the collection of the 6 radars' point clouds, wherein n is the number of radar points in a frame and the point cloud of the j-th radar at time i carries its own timestamp t_j and consists of n points;
at time i, within one radar period, the maximum radar timestamp t_max is found among all acquired radar timestamps, k being the number of the radar corresponding to the maximum timestamp; the other radar timestamps are synchronized to t_max, and the time differences are used to compensate the position information of the other radars, specifically:
Δt_j = t_max − t_j ;
Δθ_j = ω_i · Δt_j ;
Δx_j = v_i · Δt_j · cos(Δθ_j) ;
Δy_j = v_i · Δt_j · sin(Δθ_j) ;
x′ = x · cos(Δθ_j) − y · sin(Δθ_j) + Δx_j ;
y′ = x · sin(Δθ_j) + y · cos(Δθ_j) + Δy_j ;
wherein Δt_j represents the time difference between each radar timestamp t_j and the maximum timestamp t_max, ω_i and v_i are respectively the angular velocity and linear velocity of the vehicle chassis at time i, Δx_j and Δy_j are the motion-compensation amounts of the radar points in the x and y directions over the period Δt_j, Δθ_j is the angle rotated during that time, and x′ and y′ are the position coordinates in the x and y directions of each point of the j-th radar after time-synchronization alignment.
4. The method for realizing self-built map by using 4D millimeter wave imaging radar and ground sign semantics for memory commute as claimed in claim 3, wherein the step (1) further comprises performing radar extrinsic transformation processing:
assuming that the extrinsic transformation between each radar and the vehicle coordinates is T_j, the points are transformed as follows:
Q_i^j = T_j · P_i^j ;
wherein Q_i^j is the radar point cloud of the j-th radar after extrinsic conversion at time i and P_i^j is its time-synchronized point cloud;
after the processing, the radar frame point cloud, stamped with the maximum radar timestamp, is obtained as the splicing result of the radar points Q_i^j obtained after extrinsic conversion of the 6 radars at time i.
5. The method for realizing self-built map by using 4D millimeter wave imaging radar and ground sign semantics for memory commute as claimed in claim 4, wherein the step (2) specifically comprises:
semantic screening, namely each point in the input semantic frame point cloud is subjected to the following judgment processing:
s_m ≥ s_thr ;
wherein s_thr is the scoring threshold determining whether a semantic point is selected and s_m is the detection confidence of the m-th point; if the condition holds, the point is selected, otherwise it is deleted from the semantic frame;
semantic matching, namely let the current time be t_i and the previous time be t_{i−1}, and let v and ω be respectively the speed and angular velocity obtained from the vehicle chassis at the current moment; given the semantic point clouds of the current moment and of the previous moment, the relative pose transformation of the current moment with respect to the previous moment is computed as follows:
Δt = t_i − t_{i−1} ;
Δθ = ω · Δt ;
Δx = v · Δt · cos(Δθ) ;
Δy = v · Δt · sin(Δθ) ;
then the relative pose transformation is assembled from the rotation Δθ and the translation (Δx, Δy), and the semantic point cloud of the previous moment is converted to the current moment with it, wherein Δt represents the time difference between the current time and the previous time, Δθ is the angle rotated during that time, and Δx and Δy are the motion-compensation amounts of the semantic frame in the x and y directions over the time period;
for each point of the current semantic point cloud, a nearest-neighbor search is performed in the converted previous-frame point cloud with the threshold set to 0.05 m; if a neighbor point within the threshold is found, the point remains in the current frame, otherwise it is deleted, thereby obtaining the ground-marker semantic frame point cloud, whose timestamp is the timestamp input with the semantic frame and whose position information is the semantic point cloud position information after semantic screening and semantic matching.
6. The method for realizing self-built map by using 4D millimeter wave imaging radar and ground sign semantics for memory commute of claim 5, wherein the step (3) specifically comprises the following steps:
GNSS absolute constraint, namely assume the observed value of the GNSS at time i is z_i and the estimated vehicle state at time i is the pose (R_i, p_i), wherein R_i is the rotation information and p_i the position of the vehicle at time i; the following constraint condition is to be satisfied at time i:
p_i = z_i ;
wherein the pose estimate is subject to ("s.t.") the GNSS observation, i.e., constrained by it;
IMU constraint, namely assume the current moment is i, the acceleration input by the IMU is a_i, and the angular velocity is ω_i; the pose then needs to satisfy the following constraints at time i:
p̈_i = R_i · a_i ;
Ṙ_i = R_i · [ω_i]_× ;
wherein p̈_i is the 2nd derivative of the translation vector, Ṙ_i is the derivative of the rotation matrix, and [ω_i]_× is the skew-symmetric matrix of ω_i;
wheel-speed odometer position constraint, namely assume the motion of the current moment i relative to the previous moment i−1 is Δp_{i,i−1}; the pose is then constrained at time i as follows:
p_i = p_{i−1} + R_{i−1} · Δp_{i,i−1} ;
factor graph optimization, namely the error of the factor graph is
e_i = x_i − x̂_i ;
wherein x̂_i is the pose observed by the vehicle;
the error equation is:
E = Σ_{i=1}^{m} e_iᵀ Ω e_i ;
wherein m is the number of all factors and Ω is the information matrix of the error; the optimal solution satisfying the constraint conditions is solved using the LM method, obtaining the estimate of the current vehicle pose; each pose and its corresponding time are entered into the ego-vehicle pose queue.
7. A system for implementing a self-built map for memory-commuting vehicles by means of 4D millimeter wave imaging radar and ground sign semantics using the method of any one of claims 1 to 6, characterized in that the system comprises:
the radar frame construction module is used for carrying out radar time synchronization and radar extrinsic transformation processing on the received 6 millimeter wave radar point clouds so as to obtain the fused full radar point cloud frame and timestamp of the 6 radars;
the semantic frame construction module is used for carrying out semantic screening and semantic matching processing on the input ground mark semantics to obtain 3-dimensional position information of semantic elements under own vehicle coordinates;
the pose generation module is used for estimating the vehicle pose information in real time through factor graph optimization over the vehicle sensors GNSS, inertial measurement unit IMU, and wheel-speed odometer; and
The self-built map generation module is connected with the radar frame construction module, the semantic frame construction module and the pose generation module and is used for performing key frame time synchronization and radar semantic map construction processing so as to generate a self-built map.
8. Device for realizing self-built map through 4D millimeter wave imaging radar and ground sign semantics for memory commute, which is characterized in that the device comprises:
a processor configured to execute computer-executable instructions;
a memory storing one or more computer-executable instructions which, when executed by the processor, implement the steps of the method for implementing a self-building map for a memory commuter vehicle through a 4D millimeter wave imaging radar and ground sign semantics of any one of claims 1-6.
9. A processor for realizing self-built map through 4D millimeter wave imaging radar and ground sign semantics for memory commuting vehicles, characterized in that the processor is configured to execute computer executable instructions which, when executed by the processor, realize the steps of the method for realizing self-built map through 4D millimeter wave imaging radar and ground sign semantics for memory commuting vehicles according to any one of claims 1-6.
10. A computer readable storage medium having stored thereon a computer program executable by a processor to perform the steps of the method for realizing a self-built map for memory-commuting vehicles via 4D millimeter wave imaging radar and ground sign semantics as claimed in any one of claims 1 to 6.
CN202411535243.5A 2024-10-31 2024-10-31 Self-built map method based on 4D millimeter wave imaging radar and ground landmark semantics Active CN119043347B (en)

Publications (2)

Publication Number Publication Date
CN119043347A CN119043347A (en) 2024-11-29
CN119043347B true CN119043347B (en) 2025-02-07

