
CN117974926A - A positioning and mapping method and system based on mechanical rotating three-dimensional laser - Google Patents

A positioning and mapping method and system based on mechanical rotating three-dimensional laser Download PDF

Info

Publication number
CN117974926A
CN117974926A (application CN202410362519.8A)
Authority
CN
China
Prior art keywords
frame
point cloud
key frame
cloud data
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410362519.8A
Other languages
Chinese (zh)
Other versions
CN117974926B (en)
Inventor
刘旺
庄文密
周航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Agv Robot Co ltd
Original Assignee
Suzhou Agv Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Agv Robot Co ltd filed Critical Suzhou Agv Robot Co ltd
Priority to CN202410362519.8A priority Critical patent/CN117974926B/en
Publication of CN117974926A publication Critical patent/CN117974926A/en
Application granted granted Critical
Publication of CN117974926B publication Critical patent/CN117974926B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3841Data obtained from two or more sources, e.g. probe vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40Means for monitoring or calibrating
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Electromagnetism (AREA)
  • Computer Graphics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract


The present application relates to a positioning and mapping method and system based on a mechanically rotating three-dimensional laser, belonging to the field of positioning and mapping technology. The method includes: performing distortion correction processing on the laser point cloud data based on a motion distortion correction model; performing preprocessing operations on the distortion-corrected laser point cloud data, the preprocessing operations including at least ground point cloud segmentation, corner point segmentation, plane point segmentation, and real-time filtering of dynamic points; merging the preprocessed laser point cloud data to form key frames, performing loop detection against the previous key frames, and optimizing all key frame poses after a loop is detected; and constructing and outputting the overall point cloud map based on the key frame point clouds and the optimized key frame poses. The proposed method requires less computation and improves the accuracy of three-dimensional lidar positioning and mapping.

Description

Positioning and mapping method and system based on mechanical rotation type three-dimensional laser
Technical Field
The application relates to the technical field of positioning and mapping, in particular to a positioning and mapping method based on mechanical rotation type three-dimensional laser.
Background
Mobile robots are currently used in many settings, such as warehouse robots in industrial logistics, sweeping robots in the service industry, and disinfection robots in the medical industry. With the advent of intelligent lifestyles, the demands on mobile robots have diversified, and specific tasks are expected to be carried out reliably. Simultaneous localization and mapping (SLAM) technology is currently widely applied in the field of mobile robots and is considered a core link in realizing applications such as obstacle avoidance, path planning, and navigation.
SLAM is used for positioning and mapping; when the robot is in an unknown environment, the sensor senses environment information and constructs a map, and meanwhile, the pose of the robot is estimated. The laser radar can provide accurate and reliable ranging, is not easily affected by illumination, and is widely applied to SLAM ranging.
Because most laser radars have rotating structures, they gradually scan the surrounding environment during rotation and acquire laser point cloud data, which is discrete sampling data of the external environment in three-dimensional space. However, during the movement of a laser radar based on mechanical rotation scanning, the coordinate system changes continuously, so the point coordinates within the same frame of point cloud are not in the same coordinate system. This produces motion distortion, which easily reduces the robustness and accuracy of the SLAM system.
Disclosure of Invention
In order to mitigate the influence of laser radar motion distortion on the SLAM system and improve the robustness and accuracy of the SLAM system, the application provides a positioning and mapping method based on mechanical rotation type three-dimensional laser.
In a first aspect, the present application provides a positioning and mapping method based on mechanical rotation type three-dimensional laser, including:
Performing distortion correction processing on the laser point cloud data based on the motion distortion correction model;
Performing preprocessing operation on the laser point cloud data after distortion correction, wherein the preprocessing operation at least comprises ground point cloud segmentation, corner segmentation and plane point segmentation, and filtering dynamic points in real time;
Combining the preprocessed laser point cloud data to form key frames respectively, performing loop detection with the previous key frames, and performing optimization processing on all the key frame poses after loop detection;
And constructing and outputting an integral point cloud map based on the key frame point clouds and the optimized key frame poses.
By adopting the technical scheme, the obtained laser point cloud data is subjected to motion distortion removal processing to re-determine the actual position of each point in the point cloud data and optimize positioning accuracy; the point cloud data then undergoes preprocessing, key frame merging, loop detection and loop optimization, finally yielding an integral point cloud map, realizing positioning and mapping, and improving the accuracy and robustness of the positioning and mapping technology.
Optionally, the laser point cloud data is obtained by a preset laser radar scanning at a preset frequency f, wherein the laser point cloud data obtained by each laser radar scan is one frame of point cloud data;
the performing distortion correction processing on the laser point cloud data based on the motion distortion correction model includes:
When one frame of point cloud data is acquired, calculating the pose T_cur of the current frame of point cloud data and the pose T_prev of the previous frame of point cloud data based on a pre-constructed wheel type odometer interpolation model; wherein the previous frame point cloud data refers to: in the scanning time dimension of the corresponding laser radar, the one frame of point cloud data immediately before the current frame of point cloud data;
calculating the relative pose ΔT = T_prev⁻¹ · T_cur of the current frame point cloud data relative to the previous frame point cloud data;
Traversing each point of the point cloud data of the current frame, and calculating the horizontal direction angle θ_i corresponding to each point according to its Z-direction projection coordinates; defining the horizontal angle of the last point in the point cloud data of the current frame as θ_end; calculating the generation time t_i and the interpolation ratio s_i = θ_i / θ_end corresponding to each point; wherein the Z-direction projection coordinates of the i-th point (x_i, y_i, z_i) can be expressed as (x_i, y_i), and the horizontal direction angle corresponding to the i-th point is θ_i = atan2(y_i, x_i);
According to the generation time t_i and interpolation ratio s_i corresponding to each point in the point cloud data of the current frame, outputting the pose corresponding to each point in the point cloud data of the current frame through a pre-constructed motion distortion correction model, and calculating the coordinate data of each point of the current frame according to that pose to finish distortion correction.
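The per-point interpolation ratio described above can be sketched in Python as follows. The atan2-based horizontal angle matches the projection step in the claim; the assumption that the sweep starts at angle 0 and increases monotonically to the last point is ours, not stated in the original.

```python
import math

def interpolation_ratios(points):
    """Per-point interpolation ratio s_i for one frame of a rotating lidar.

    Each point (x, y, z) is projected onto the Z plane; its horizontal
    angle, normalized to [0, 2*pi), is divided by the angle theta_end of
    the last point in the frame.  Simplified sketch: assumes the sweep
    starts at angle 0 and angles increase monotonically through the frame.
    """
    angles = [math.atan2(y, x) % (2 * math.pi) for x, y, _z in points]
    theta_end = angles[-1]
    return [a / theta_end for a in angles]
```

With three points at 0°, 90° and 180° the ratios come out as 0, 0.5 and 1, i.e. the fraction of the sweep completed when each point was generated.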
By adopting the technical scheme, the application mainly uses a sensor-assisted method to compensate motion distortion, specifically a wheel odometer method: distortion correction is realized by solving the odometer pose corresponding to each frame of laser radar point cloud data, converting the obtained poses into the same coordinate system, and finally packaging the result as the point cloud data of the corresponding frame.
Optionally, the method further comprises:
acquiring odometer data obtained by wheel type odometer detection, and sequentially inserting the odometer data into a preset queue from the tail of the preset queue according to the sequence of detection time; wherein each piece of odometer data comprises a data time stamp and pose information;
When the interpolation timestamp t_interp is received, the preset wheel type odometer interpolation model deletes the odometer data in the preset queue one by one from the head of the queue; each time one piece of odometer data is deleted, the data timestamp t_a and pose information P_a of the deleted odometer data are temporarily stored, and whether a preset interpolation condition is met is judged; if yes, deletion stops and the interpolation operation is executed; wherein the preset interpolation condition is: only 1 piece of odometer data is left in the preset queue, or the data timestamp t_b corresponding to the odometer data at the head of the preset queue is greater than the interpolation timestamp t_interp;
Wherein the performing interpolation operation includes:
If the interpolation timestamp t_interp meets the first preset condition, determining and outputting the pose information P_a as the pose corresponding to the interpolation timestamp t_interp; wherein the first preset condition is: the time difference between the interpolation timestamp t_interp and t_a is smaller than a first preset difference;
If the interpolation timestamp t_interp meets the second preset condition, determining and outputting the pose information P_b of the odometer data at the head of the queue; wherein the second preset condition is: the time difference between the interpolation timestamp t_interp and t_b is smaller than a second preset difference;
If the interpolation timestamp t_interp satisfies neither the first preset condition nor the second preset condition and lies between t_a and t_b, determining and outputting P = P_a + v · (t_interp − t_a), where v = (P_b − P_a) / (t_b − t_a) is the pose speed between P_a and P_b;
If the interpolation timestamp t_interp is greater than t_b and the time interval between t_b and t_interp is smaller than a preset time threshold, determining and outputting the extrapolated pose P = P_b + v · (t_interp − t_b); if the time interval between t_b and t_interp is not smaller than the preset time threshold, determining and outputting P_b as the pose information corresponding to the interpolation timestamp t_interp.
The step of outputting, according to the generation time t_i and interpolation ratio s_i corresponding to each point in the point cloud data of the current frame, the pose corresponding to each point through a pre-constructed motion distortion correction model comprises:
according to the generation time t_i and interpolation ratio s_i corresponding to each point in the point cloud data of the current frame, determining the interpolation timestamp t_interp corresponding to each point; inputting the interpolation timestamp t_interp into the pre-constructed motion distortion correction model, and outputting the pose corresponding to each point in the point cloud data of the current frame.
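The case analysis above can be sketched minimally in Python, treating the pose as a tuple of scalar components interpolated linearly. The threshold values (eps_a, eps_b, extrap_limit) and the linear extrapolation past t_b are assumed for illustration, since the claim names only the conditions, not their numeric values.

```python
def interpolate_pose(t_interp, t_a, pose_a, t_b, pose_b,
                     eps_a=1e-3, eps_b=1e-3, extrap_limit=0.1):
    """Interpolate pose components at t_interp between two odometry
    samples (t_a, pose_a) and (t_b, pose_b), with t_a <= t_b.

    Sketch of the patent's case analysis; thresholds are assumed values.
    pose_a / pose_b are tuples of pose components.
    """
    if abs(t_interp - t_a) < eps_a:          # first preset condition
        return pose_a
    if abs(t_interp - t_b) < eps_b:          # second preset condition
        return pose_b
    # pose speed between the two samples
    v = [(b - a) / (t_b - t_a) for a, b in zip(pose_a, pose_b)]
    if t_a < t_interp < t_b:                 # interpolate inside the interval
        return tuple(a + vi * (t_interp - t_a) for a, vi in zip(pose_a, v))
    if t_interp > t_b and (t_interp - t_b) < extrap_limit:
        # short extrapolation past the newest sample (assumed behaviour)
        return tuple(b + vi * (t_interp - t_b) for b, vi in zip(pose_b, v))
    return pose_b                            # fall back to the newest pose
```

For a timestamp halfway between two samples the result is the componentwise midpoint, as expected of linear interpolation.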
Optionally, the preprocessing operation further includes a downsampling process; the step of respectively merging the preprocessed laser point cloud data to form key frames comprises the following steps:
s1041, judging whether the current key frame is empty or not after the preprocessing operation is finished on the laser point cloud data;
S1042, if yes, inserting the laser point cloud data which has currently finished the preprocessing operation into the key frame as first frame point cloud data, and taking the first frame point cloud data as the reference frame of the current key frame; calculating the point cloud center of gravity C_c = (1/N_c) Σ p_i of the corner point cloud in the first frame of point cloud data and the point cloud center of gravity C_p = (1/N_p) Σ q_j of the planar point cloud; wherein N_c is the number of corner points in the first frame of point cloud data, N_p is the number of plane points in the first frame of point cloud data, p_i is the coordinates of the i-th corner point in the corner point cloud corresponding to the first frame of point cloud data, and q_j is the coordinates of the j-th plane point in the plane point cloud corresponding to the first frame of point cloud data;
s1043, if not, calculating and judging whether the current frame point cloud data of which the preprocessing operation is currently finished meets a first judging condition or a second judging condition, and then executing the preset processing operation according to the judging result;
Wherein, the executing the preset processing operation according to the determination result includes:
If the first judging condition is met, transforming the pose corresponding to the point cloud data of the current frame into the reference frame coordinate system, performing point cloud superposition and downsampling on the point cloud data of the current frame and the reference frame to form a new reference frame, and updating the point cloud centers of gravity C_c and C_p of the new reference frame;
If the second judging condition is met, packaging the keyframes at the moment, storing the keyframes in a preset keyframe database, clearing the keyframes, inserting the point cloud data of the current frame into the cleared keyframes, and replacing the current reference frame with the point cloud data of the current frame; the key frame database is used for storing key frames formed by packing each time;
wherein the first judging condition is: the sum of the distance between the corner point cloud center of gravity of the nth frame point cloud data and that of the reference frame, and the distance between the planar point cloud center of gravity of the nth frame point cloud data and that of the reference frame, is less than a first distance threshold; the second judging condition is: the aforementioned sum of distance values is not smaller than the first distance threshold, or the distance between the pose coordinates of the nth frame of point cloud data and the pose coordinates of the reference frame is larger than a second distance threshold, or the rotation angle of the nth frame of laser point cloud data is larger than a specified angle threshold.
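The merge-or-pack decision can be illustrated with the centroid computation of S1042 and the centroid clause of the first/second judging conditions. The threshold d1 is an assumed placeholder, and the pose-distance and rotation-angle clauses of the second condition are omitted for brevity in this sketch.

```python
import math

def centroid(points):
    """Point cloud center of gravity: componentwise mean of the points."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def keyframe_decision(corner_pts, plane_pts, ref_corner_c, ref_plane_c,
                      d1=0.5):
    """Return 'merge' when the summed centroid distances (corner + plane)
    of the current frame vs. the reference frame fall below the first
    distance threshold d1 (assumed value), else 'pack'."""
    dc = math.dist(centroid(corner_pts), ref_corner_c)
    dp = math.dist(centroid(plane_pts), ref_plane_c)
    return 'merge' if dc + dp < d1 else 'pack'
```

A frame whose feature centroids sit on top of the reference frame's centroids merges; a frame far away triggers packing of the key frame.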
Optionally, the merging the preprocessed laser point cloud data to form key frames respectively further includes:
After the preprocessing operation is finished on the laser point cloud data, inserting the point cloud data of the current frame after the preprocessing operation into a secondary key frame, and executing the step contents of S1041, S1042 and S1043 on the primary key frame;
When the main key frame meets the second judging condition and performs the packing operation in the preset processing operation, namely when the main key frame is packed and stored in the preset key frame database: if this is the first packing, recording the pose coordinate M of the current main key frame at packing time, and determining the relative distance D between the pose coordinate M of the current main key frame and the pose coordinate of the reference frame; if this is not the first packing, determining the relative distance between the current main key frame pose and the main key frame pose coordinate M recorded at the previous packing, updating M with the current main key frame pose, and updating D with the currently determined relative distance;
When the relative distance between the pose coordinates of the auxiliary key frame and the coordinate M reaches D, the auxiliary key frame is packed and stored in the key frame database;
each time a relative distance D is determined: if the packing operation in the preset processing operation is executed before the relative distance between the pose coordinates of the auxiliary key frame and M reaches D, then at that moment the auxiliary key frame and the main key frame are packed together and stored in the key frame database;
Clearing the primary keyframe each time the primary keyframe is packed; the secondary key frame is emptied each time it is packed.
Optionally, the loop detection is performed on the previous key frame, and after loop detection, the optimization processing is performed on all the key frame poses, including:
calculating the feature vector of the current newly added key frame when the key frame is newly added in the key frame database, and storing the feature vector of the current newly added key frame in a preset loop detection module;
determining a key frame corresponding to a key frame number most similar to the feature vector of the current newly added key frame based on a pre-established k-d tree, determining the key frame as a potential key frame, and if the hamming distance between the feature vector of the potential key frame and the current newly added feature vector is smaller than a third distance threshold, considering the potential key frame as a candidate key frame corresponding to the current newly added key frame;
if the candidate key frame exists, generating a large key frame, wherein the large key frame is the key frame formed by a plurality of frame point cloud data adjacent to the candidate key frame in detection time;
calculating the relative pose between the current newly added key frame and the large key frame, and transforming the coordinates of the current newly added key frame so as to enable the coordinate origin of the coordinates of the current newly added key frame and the large key frame to be consistent; traversing each point of the current newly added key frame, calculating Euclidean distance between each point and the nearest point, and if the Euclidean distance is smaller than a fourth distance threshold, considering that matching is valid;
when traversing is completed, averaging Euclidean distances corresponding to all points which are considered to be matched effectively, if the averaged value is smaller than a fourth distance threshold value, determining that a loop is detected, determining that the current newly added key frame is a loop frame, and determining that all candidate key frames in a candidate key frame group corresponding to the loop frame are adjacent frames;
And after loop-back is detected, connecting the current newly added key frame and the candidate key frames corresponding to the current newly added key frame in a graph optimization mode, performing nonlinear pose optimization, and updating the pose of each key frame in the key frame database by using the optimized pose.
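The loop-verification step (counting matches below the fourth distance threshold, then averaging them) can be sketched as follows, assuming the newly added key frame has already been transformed into the large key frame's coordinate system. The brute-force nearest-neighbour search stands in for the k-d tree of an actual implementation, and the threshold d4 is an assumed value.

```python
import math

def loop_verified(new_frame, large_frame, d4=0.3):
    """Confirm a loop candidate: for each point of the (already aligned)
    newly added key frame, find its nearest point in the large key frame;
    a match is valid when that distance is below threshold d4 (assumed).
    A loop is confirmed when the mean distance over valid matches is
    also below d4.  Brute-force search stands in for a k-d tree."""
    valid = []
    for p in new_frame:
        nearest = min(math.dist(p, q) for q in large_frame)
        if nearest < d4:
            valid.append(nearest)
    return bool(valid) and sum(valid) / len(valid) < d4
```

Two identical clouds verify trivially (all distances zero); clouds with no close correspondences produce no valid matches and are rejected.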
Optionally, if the hamming distance between the feature vector of the potential key frame and the current newly added feature vector is smaller than a third distance threshold, the potential key frame is considered to be a candidate key frame corresponding to the current newly added key frame, including:
If the Hamming distance between the feature vector of the potential key frame and that of the current newly added key frame is smaller than the third distance threshold, the difference between the key frame number of the potential key frame and the key frame number of the current newly added key frame is not smaller than a specified difference, and the relative position distance between the candidate key frame and the current key frame is not larger than a specified position threshold, then the potential key frame is considered to be a candidate key frame corresponding to the current newly added key frame. The key frame numbers represent the synthesis time of the corresponding key frames, and the difference between the key frame numbers of any two key frames represents how close the key frames are in synthesis time: the smaller the difference, the closer the key frames.
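The three-way candidate test in this claim reduces to a conjunction of inequalities; all threshold values below (d3, min_gap, max_pos) are assumed placeholders, not values from the patent.

```python
def is_candidate(hamming, frame_gap, pos_dist,
                 d3=20, min_gap=30, max_pos=5.0):
    """Candidate-key-frame test from the claim: feature-vector Hamming
    distance below the third distance threshold, key-frame-number gap at
    least the specified difference (so the frames are not temporally
    adjacent), and relative position within the specified position
    threshold.  All threshold values are assumed."""
    return hamming < d3 and frame_gap >= min_gap and pos_dist <= max_pos
```

The frame-number-gap clause rejects loops against frames synthesized just moments earlier, which would otherwise always look similar.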
In a second aspect, the present application provides a positioning and mapping system based on mechanical rotation type three-dimensional laser, comprising:
The motion distortion correction module is used for performing distortion correction processing on the laser point cloud data based on the motion distortion correction model;
The point cloud data preprocessing module is used for executing preprocessing operation on the laser point cloud data after distortion correction, wherein the preprocessing operation at least comprises ground point cloud segmentation, corner segmentation and plane point segmentation, and dynamic points are filtered in real time;
The key frame merging maintenance module is used for merging the preprocessed laser point cloud data respectively to form key frames, carrying out loop detection on the key frames and the previous key frames, and carrying out optimization treatment on all the key frame pose after loop detection;
and the point cloud map generation module is used for constructing and outputting an integral point cloud map based on the key frame point cloud and the optimized key frame pose.
In a third aspect, the present application provides a positioning and mapping device based on a mechanically rotated three-dimensional laser, comprising a memory and a processor, the memory having stored thereon a computer program capable of being loaded by the processor and performing the method according to any of the first aspects.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program capable of being loaded by a processor and performing any one of the methods of the first aspect.
In summary, the application has the following beneficial technical effects:
The application utilizes the mechanical rotation scanning function of the three-dimensional laser radar, assisted by the pose and other information detected by the wheel type odometer, to correct the motion distortion generated during laser radar scanning; this effectively compensates the motion distortion, improves the motion estimation precision of the laser radar, and generates a globally consistent map.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a positioning and mapping method based on a mechanical rotary three-dimensional laser according to an embodiment of the present application.
Fig. 2 is a block diagram of a positioning and mapping structure based on a mechanical rotary three-dimensional laser according to an embodiment of the present application.
Reference numerals illustrate: 1. a motion distortion correction module; 2. the point cloud data preprocessing module; 3. a key frame merging maintenance module; 4. and the point cloud map generation module.
Detailed Description
The application is described in further detail below with reference to fig. 1-2.
The embodiment of the application discloses a positioning and mapping method based on mechanical rotary three-dimensional laser (hereinafter referred to as positioning and mapping method for short), which is applied to the motion positioning and map construction of a mobile robot, and specifically utilizes the mechanical rotary scanning function of a laser radar carried on the mobile robot to construct and present map information of the environment where the mobile robot is located and the position information of the mobile robot. Specifically, the main implementation body of the positioning and mapping method disclosed by the application is a positioning and mapping system (hereinafter referred to as SLAM system) of a mechanical rotary three-dimensional laser, and the SLAM system is in communication connection with a laser radar and a wheel type odometer, so that laser point cloud data detected by the laser radar and pose information detected by the wheel type odometer can be obtained.
Accordingly, the detailed steps of the positioning and mapping method performed by the SLAM system will be described in detail below with reference to fig. 1.
S101, performing distortion correction processing on the laser point cloud data based on the motion distortion correction model.
Specifically, S101 includes the following sub-steps:
S1011, when one frame of point cloud data is acquired, calculating the pose T_cur of the current frame point cloud data and the pose T_prev of the previous frame point cloud data based on a pre-constructed wheel type odometer interpolation model; the previous frame point cloud data refers to: in the scanning time dimension of the corresponding laser radar, the one frame of point cloud data immediately before the current frame of point cloud data;
S1012, calculating the relative pose ΔT = T_prev⁻¹ · T_cur of the point cloud data of the current frame relative to the point cloud data of the previous frame;
S1013, traversing each point of the point cloud data of the current frame, and calculating the horizontal direction angle θ_i corresponding to each point according to its Z-direction projection coordinates; the horizontal angle of the last point in the point cloud data of the current frame is defined as θ_end; calculating the generation time t_i and the interpolation ratio s_i = θ_i / θ_end corresponding to each point; wherein the Z-direction projection coordinates of the i-th point (x_i, y_i, z_i) can be expressed as (x_i, y_i), and the horizontal direction angle corresponding to the i-th point is θ_i = atan2(y_i, x_i);
S1014, according to the generation time t_i and interpolation ratio s_i corresponding to each point in the point cloud data of the current frame, outputting the pose corresponding to each point in the point cloud data of the current frame through a pre-constructed motion distortion correction model, and calculating the coordinate data of each point of the current frame according to that pose to finish distortion correction.
In the implementation, laser point cloud data is acquired first; the laser point cloud data is scanning data obtained by the laser radar from the surrounding environment at a preset frequency f during mechanical rotation, existing in the form of point clouds. Each time the laser radar completes one rotation scanning operation, one frame of point cloud data is obtained according to the scanning time; each frame of point cloud data comprises a plurality of points, and each point is the position corresponding to each emitted laser beam during the rotation of the laser radar. The specific definition of the point cloud data and the specific content contained therein are prior art and are not described here.
In the embodiment of the application the laser radar is horizontally installed; the pose T_cur of the point cloud data of the current frame is defined as the pose of the current frame point cloud at the coordinate origin, and similarly the pose T_prev of the point cloud data of the previous frame is the pose of the previous frame point cloud at the coordinate origin. These two pose values are calculated by a pre-constructed motion distortion correction model, in particular a wheel odometer interpolator (the principle of which is explained in detail below).
The wheel odometer interpolator is used to calculate the wheel odometer pose O_prev at the previous frame and the wheel odometer pose O_cur at the current frame, from which the relative pose ΔP between the two frames is obtained. Each point of the point cloud of the current frame is then traversed, and the horizontal direction angle α_i of each point is calculated from its projection coordinates in the Z direction, where the horizontal angle of the last point is denoted α_end; from α_i and α_end, the ratio s_i = α_i / α_end of the point within its frame can be calculated. For example, suppose point i belongs to the first frame: the method assumes that when the laser radar scans for the first time, the time of the first point of the scanned point cloud is known, so the time t_i at which the laser radar scans the i-th point can be calculated by combining the first point's time with s_i; since the time of the first point of the subsequent second frame, and of every m-th frame, can in turn be derived from s_i and the times of the previous frame, the time of each point of each frame can be calculated. The SLAM system then converts the rotation part of the relative pose ΔP into Euler angles and interpolates them according to the time ratio s_i, interpolates the displacement by the same ratio, and keeps the Z-direction coordinate unchanged; the actual coordinates of each point can thus be calculated, realizing the correction.
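The per-point correction step described above can be sketched as follows (an illustrative Python sketch, with the relative pose simplified to a planar (dx, dy, dyaw) tuple; the function name, its arguments and the use of `atan2` for the horizontal angle are assumptions for illustration, not the application's exact implementation):

```python
import math

def undistort_frame(points, rel_pose, t_first, scan_period):
    """Correct motion distortion in one lidar frame.

    points      : list of (x, y, z) in the lidar frame
    rel_pose    : (dx, dy, dyaw) relative pose between the previous and
                  current frame, from the wheel-odometer interpolator
    t_first     : timestamp of the first point of the frame
    scan_period : duration of one full mechanical rotation
    """
    dx, dy, dyaw = rel_pose
    # Horizontal angle of the last point defines the full sweep alpha_end.
    x_e, y_e, _ = points[-1]
    alpha_end = math.atan2(y_e, x_e) % (2.0 * math.pi)
    corrected, times = [], []
    for x, y, z in points:
        # Horizontal direction angle from the Z-direction (XY-plane) projection.
        alpha = math.atan2(y, x) % (2.0 * math.pi)
        s = alpha / alpha_end if alpha_end > 0 else 0.0  # interpolation ratio s_i
        times.append(t_first + s * scan_period)          # per-point generation time t_i
        # Interpolate yaw and translation by the same ratio; Z stays unchanged.
        yaw = s * dyaw
        tx, ty = s * dx, s * dy
        cx = math.cos(yaw) * x - math.sin(yaw) * y + tx
        cy = math.sin(yaw) * x + math.cos(yaw) * y + ty
        corrected.append((cx, cy, z))
    return corrected, times
```

With zero relative motion the points are returned unchanged, and the per-point times grow in proportion to the swept angle.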
Alternatively, the aforementioned motion distortion correction model works as follows:
Acquiring odometer data obtained by wheel type odometer detection, and sequentially inserting the odometer data into a preset queue from the tail of the preset queue according to the sequence of detection time; wherein each odometer data includes a data timestamp and pose information;
When an interpolation timestamp t_interp is received, the preset wheel odometer interpolation model deletes the odometer data in the preset queue one by one from the head of the queue; each time one odometer entry is deleted, the data timestamp t_a and pose information P_a of the deleted entry are temporarily stored, and it is judged whether a preset interpolation condition is met; if yes, the deletion stops and the interpolation operation is executed. The preset interpolation condition is: only one odometer entry remains in the preset queue, or the data timestamp t_b of the odometer entry at the head of the preset queue is greater than the interpolation timestamp t_interp.
Wherein, the step of performing interpolation operation includes:
If the interpolation timestamp t_interp satisfies a first preset condition, determining and outputting the pose P_a; the first preset condition is: the time difference between t_interp and the temporarily stored timestamp t_a is smaller than a first preset difference;
If the interpolation timestamp t_interp satisfies a second preset condition, determining and outputting the pose P_b of the odometer entry at the head of the queue; the second preset condition is: the time difference between t_interp and the head timestamp t_b is smaller than a second preset difference;
If the interpolation timestamp t_interp satisfies neither the first nor the second preset condition and lies between t_a and t_b, determining and outputting P_a + v·(t_interp − t_a), where v is the pose velocity between P_a and P_b, i.e., v = (P_b − P_a)/(t_b − t_a);
If the interpolation timestamp t_interp is greater than t_b, and the time interval between t_interp and t_b is smaller than a preset time threshold, determining and outputting the pose extrapolated by the pose velocity v; if the time interval between t_interp and t_b is not smaller than the preset time threshold, the interpolation is deemed to have failed and no reliable pose corresponding to t_interp is determined.
Accordingly, the step in S1014 of "outputting the pose corresponding to each point in the point cloud data of the current frame through a pre-constructed motion distortion correction model, according to the generation time t_i and interpolation ratio s_i corresponding to each point in the point cloud data of the current frame" includes:
determining the interpolation timestamp t_interp corresponding to each point according to its generation time t_i and interpolation ratio s_i, inputting each interpolation timestamp into the pre-constructed motion distortion correction model, and outputting the pose corresponding to each point in the point cloud data of the current frame.
In practice, the wheel odometer synchronously detects the odometer data of the mobile robot while the laser radar scans; the odometer data comprises at least the detection time (i.e., the data timestamp) and the corresponding pose information. The SLAM system receives the odometer data in real time and inserts it into a preset queue, with the insertion rule: insert sequentially at the tail of the queue in order of detection time. When an interpolation timestamp t_interp is received, the motion distortion correction model calculates the pose at t_interp as follows: delete the odometer data in the preset queue one by one from the head, in the order in which they were inserted; each time one odometer entry is deleted, temporarily store the deleted entry and judge whether the interpolation condition is met; if yes, suspend the deletion and execute the interpolation operation to calculate the pose corresponding to t_interp.
Wherein, if the interpolation timestamp t_interp is greater than t_b and the time interval between t_interp and t_b is smaller than the preset time threshold (e.g., 0.2 seconds), the extrapolated pose is determined and output; if the time interval between t_interp and t_b is not smaller than the preset time threshold (e.g., 0.2 seconds), the interpolation is considered to have failed and no reliable pose corresponding to t_interp is output.
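The queue-based interpolator described in the preceding paragraphs can be sketched as follows (illustrative Python; poses are simplified to (x, y, yaw) tuples with component-wise linear interpolation, and the epsilon and extrapolation thresholds are placeholder values, not values fixed by the application):

```python
from collections import deque

class WheelOdomInterpolator:
    """Sketch of the queue-based wheel-odometer interpolator.

    Each queue entry is (timestamp, pose); eps_a/eps_b stand in for the
    first/second preset differences, max_extrap for the preset time
    threshold allowing short extrapolation past the newest entry.
    """
    def __init__(self, eps_a=1e-3, eps_b=1e-3, max_extrap=0.2):
        self.queue = deque()
        self.eps_a, self.eps_b, self.max_extrap = eps_a, eps_b, max_extrap

    def insert(self, stamp, pose):
        self.queue.append((stamp, pose))  # inserted at the tail, in time order

    def interpolate(self, t):
        if not self.queue:
            return None
        last = None
        # Delete from the head until one entry is left or the head is newer than t.
        while len(self.queue) > 1 and self.queue[0][0] <= t:
            last = self.queue.popleft()
        if last is None:                  # t precedes all stored data
            last = self.queue[0]
        t_a, p_a = last                   # last deleted (bracketing from below)
        t_b, p_b = self.queue[0]          # head of queue (bracketing from above)
        if abs(t - t_a) < self.eps_a:     # first preset condition
            return p_a
        if abs(t - t_b) < self.eps_b:     # second preset condition
            return p_b
        if t_b == t_a:                    # only one distinct entry available
            return p_b if abs(t - t_b) < self.max_extrap else None
        v = tuple((b - a) / (t_b - t_a) for a, b in zip(p_a, p_b))  # pose velocity
        if t_a < t < t_b:                 # interpolate between the two entries
            return tuple(a + vi * (t - t_a) for a, vi in zip(p_a, v))
        if t > t_b and (t - t_b) < self.max_extrap:  # short extrapolation
            return tuple(b + vi * (t - t_b) for b, vi in zip(p_b, v))
        return None                       # interpolation failed
```

A request halfway between two entries returns the midpoint pose; a request far beyond the newest entry fails, mirroring the 0.2-second rule above.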
S102, preprocessing operation is carried out on the laser point cloud data after distortion correction, wherein the preprocessing operation at least comprises ground point cloud segmentation, corner segmentation and plane point segmentation, and dynamic points are filtered in real time.
In implementation, the following schemes are exemplarily provided to realize the operations of ground point cloud segmentation, corner segmentation, plane point segmentation and dynamic point filtering; it should be noted, however, that the manner of implementing the preprocessing operation is not limited to the manner disclosed in the embodiment of the present application, and other technologies capable of implementing the preprocessing operation all fall within the protection scope of the present application.
For the ground point cloud segmentation processing: the ground point cloud is initially screened through the longitudinal curvature of the point columns sharing the same horizontal direction angle of the mechanically rotating 3-dimensional laser radar, downsampled into a sparse point cloud, and a more accurate ground-plane point cloud is then extracted through a random sample consensus (RANSAC) algorithm.
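A minimal sketch of the RANSAC plane-extraction step (the preliminary curvature screening and downsampling are omitted; the iteration count and distance threshold are illustrative assumptions):

```python
import random

def ransac_ground_plane(points, iters=100, dist_thresh=0.05, seed=0):
    """Fit a plane a*x + b*y + c*z + d = 0 to the candidate ground points
    by random sample consensus and return the inlier points."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        # Plane normal = (p2 - p1) x (p3 - p1).
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        n = [c / norm for c in n]
        d = -sum(n[i] * p1[i] for i in range(3))
        inliers = [p for p in points
                   if abs(n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d) < dist_thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```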
For corner segmentation and plane point segmentation: the curvature is calculated for the point cloud of each laser beam of the 3-dimensional laser radar; points whose curvature is larger than a preset empirical value are corner points, and points whose curvature is not larger than the preset empirical value are plane points.
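The per-beam curvature classification can be sketched as follows (a LOAM-style curvature sum over 2k neighbours on the same beam; the neighbourhood size k and the empirical threshold are illustrative assumptions):

```python
def split_corner_plane(scan_line, k=2, curv_thresh=0.1):
    """Classify the points of one laser beam into corner/plane points.

    scan_line : ordered list of (x, y, z) along one beam; the k neighbours
    on each side of a point are used; curv_thresh is the preset empirical
    value separating corner points from plane points.
    """
    corners, planes = [], []
    for i in range(k, len(scan_line) - k):
        p = scan_line[i]
        # Sum of coordinate differences to the 2k neighbours.
        diff = [0.0, 0.0, 0.0]
        for j in range(i - k, i + k + 1):
            if j == i:
                continue
            q = scan_line[j]
            for a in range(3):
                diff[a] += q[a] - p[a]
        curv = sum(d * d for d in diff)  # squared norm of the difference sum
        (corners if curv > curv_thresh else planes).append(p)
    return corners, planes
```

A point jutting out of an otherwise straight scan line accumulates a large difference sum and is classified as a corner, while its flat neighbours remain plane points.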
For filtering dynamic points: in a non-narrow indoor environment (such as a park or a spacious workshop), as long as the laser radar can irradiate the ground plane, point clouds appearing above such places are dynamic points. The specific judging mode is: maintain a whole-ground grid using the ground-plane point cloud extracted above, project the input laser point cloud data onto the whole-ground grid, and consider a point dynamic if the grid cell it projects into is occupied by the ground, otherwise consider it static.
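The ground-grid dynamic-point check can be sketched as follows (illustrative Python; the cell size, the per-cell ground-height bookkeeping and the minimum height above ground are assumed parameters added for the sketch):

```python
def filter_dynamic_points(points, ground_points, cell=0.5, min_height=0.3):
    """Split points into (static, dynamic) using a ground-occupancy grid.

    XY cells covered by the extracted ground-plane cloud are marked
    occupied (storing the lowest ground height seen in the cell); an input
    point projecting into an occupied cell and lying above the ground
    there is treated as dynamic.
    """
    ground_cells = {}
    for x, y, z in ground_points:
        key = (int(x // cell), int(y // cell))
        ground_cells[key] = min(z, ground_cells.get(key, z))
    static, dynamic = [], []
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        if key in ground_cells and p[2] > ground_cells[key] + min_height:
            dynamic.append(p)   # above a cell where the lidar sees ground
        else:
            static.append(p)
    return static, dynamic
```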
After the corner points and the plane points are obtained by segmentation and the dynamic points are removed, it should be noted that the segmented ground points also need to be added, as plane points, to the plane point cloud from which the dynamic points have been removed.
In addition, the application constructs a laser odometer for calculating pose. Specifically, for the first frame point cloud of the laser radar, the pose is set as the origin of the map coordinate system (the identity pose); a cube is then drawn centered at the position of the first frame, and the point clouds inside the cube are retained to form a local point cloud map. The local point cloud map comprises a corner local point cloud map and a plane-point local point cloud map, which are maintained separately; note that since the number of corner points is small and their contribution to positioning is larger than that of the plane points, the voxel downsampling resolution for corner points is generally smaller than that for plane points. Then, the dynamic-point-removed corner points of each frame are matched with the corner local point cloud map, and the dynamic-point-removed plane points are matched with the plane-point local point cloud map; the pose result of the laser odometer is jointly calculated from both matches and the local point cloud map is updated, so as to obtain the local point cloud map of each frame and the pose information of each point in it.
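The voxel downsampling used for both local maps can be sketched as follows (illustrative; one centroid is kept per occupied voxel):

```python
def voxel_downsample(points, res):
    """Replace all points falling in one voxel of edge length `res`
    by their centroid."""
    voxels = {}
    for p in points:
        key = tuple(int(c // res) for c in p)   # integer voxel index per axis
        voxels.setdefault(key, []).append(p)
    # Centroid of each voxel's points.
    return [tuple(sum(c) / len(v) for c in zip(*v)) for v in voxels.values()]
```

In line with the text, the corner map would be downsampled at a smaller resolution than the plane map, e.g. `voxel_downsample(corner_points, 0.2)` versus `voxel_downsample(plane_points, 0.4)` (both values illustrative).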
S103, respectively merging the preprocessed laser point cloud data to form key frames, carrying out loop detection on the key frames and the previous key frames, and carrying out optimization processing on all the key frame poses after loop detection.
The specific implementation content of the step of forming the key frame in S103 is as follows:
s1041, judging whether the current key frame is empty or not after the preprocessing operation is finished on the laser point cloud data;
S1042, if yes, inserting the laser point cloud data that has currently finished the preprocessing operation into the key frame as the first frame point cloud data, and taking this first frame point cloud data as the reference frame of the current key frame; calculating the point cloud center of gravity C_corner of the corner point cloud and the point cloud center of gravity C_plane of the plane point cloud in the first frame point cloud data, where C_corner = (1/N_c) Σ_{i=1}^{N_c} p_i and C_plane = (1/N_p) Σ_{j=1}^{N_p} q_j; N_c is the number of corner points in the first frame point cloud data, N_p is the number of plane points in the first frame point cloud data, p_i is the coordinate of the i-th corner point in the corner point cloud corresponding to the first frame point cloud data, and q_j is the coordinate of the j-th plane point in the plane point cloud corresponding to the first frame point cloud data;
s1043, if not, calculating and judging whether the current frame point cloud data of which the preprocessing operation is currently finished meets a first judging condition or a second judging condition, and then executing the preset processing operation according to the judging result;
Wherein, according to the judging result, executing the preset processing operation, comprising:
If the first judging condition is met, transforming the pose corresponding to the point cloud data of the current frame into the reference frame coordinate system, superimposing and downsampling the point clouds of the current frame and the reference frame to form a new reference frame, and updating C_corner and C_plane of the new reference frame;
If the second judging condition is met, packing the key frame at the moment and storing the key frame into a preset key frame database, clearing the key frame, inserting the point cloud data of the current frame into the cleared key frame, and replacing the current reference frame with the point cloud data of the current frame; the key frame database is used for storing key frames formed by packing each time;
wherein, the first judging condition is: the sum of the distance from the corner centroid C_corner of the n-th frame point cloud data to the corner centroid of the reference frame and the distance from the plane centroid C_plane of the n-th frame point cloud data to the plane centroid of the reference frame is less than a first distance threshold; the second judging condition is: the aforementioned sum of centroid distances is not smaller than the first distance threshold, or the distance between the pose coordinates of the n-th frame point cloud data and the pose coordinates of the reference frame is larger than a second distance threshold, or the rotation angle of the n-th frame laser point cloud data is larger than a specified angle threshold.
In practice, the above operations are used to merge the point cloud data that has completed the preprocessing operation into key frames. Specifically, when the program has just started, the key frame is empty. Since point cloud data is discrete, frame-by-frame data generated by the laser radar at the designated frequency, motion distortion removal and preprocessing are performed once for each frame the SLAM system receives. Every time the preprocessing of one frame of point cloud data is finished, it is first judged whether the key frame is empty; if so, the point cloud data that has just completed preprocessing is inserted into the key frame, i.e., it becomes the first frame point cloud data inserted into the key frame and serves as the reference frame, whose main purpose is to be compared against later frame data to judge whether the first or the second judging condition is met. Correspondingly, the SLAM system calculates the corner centroid C_corner and the plane centroid C_plane of this frame.
Since the key frame is now not empty, when the second frame of preprocessed point cloud data is received, the SLAM system performs step S1043 on it: calculate its C_corner and C_plane and judge whether the first or the second judging condition is met. If the first judging condition is met, the SLAM system transforms the pose of this frame's point cloud data into the reference frame coordinate system and superimposes and downsamples this frame's point cloud with that of the current reference frame; at this time, C_corner and C_plane of the reference frame need to be recalculated and updated from the superimposed reference frame. When the third frame of point cloud data is received, steps S1041, S1042 and S1043 are executed again.
If the second frame point cloud data meets the second judging condition, the key frame is packed into the preset key frame database, i.e., the key frame formed by merging all point cloud data inserted into the current key frame is stored into the preset key frame database; the current key frame is then emptied, the second frame point cloud data is inserted into the emptied key frame, and the reference frame is simultaneously replaced by the second frame point cloud data, so that the reference frame is updated and its C_corner and C_plane are updated to those of the second frame point cloud data. Each time the key frame changes from empty to non-empty, the reference frame is the first frame point cloud data inserted into the key frame. After the above operations are completed, steps S1041-S1043 are executed again whenever new preprocessed frame point cloud data is received, i.e., steps S1041-S1043 are executed once for every received preprocessed frame.
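The S1041-S1043 merging logic can be sketched as a small state machine (illustrative Python; the thresholds are placeholders, and the reference-frame centroid update is simplified to the current frame's centroids rather than those of the superimposed, downsampled cloud):

```python
import math

def centroid(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class KeyframeAssembler:
    """Sketch of S1041-S1043: merge preprocessed frames into key frames.

    Each pushed frame carries corner/plane clouds, a pose position and a
    rotation angle; d1/d2/max_angle stand in for the first/second distance
    thresholds and the specified angle threshold (values illustrative).
    """
    def __init__(self, d1=1.0, d2=5.0, max_angle=math.radians(30)):
        self.d1, self.d2, self.max_angle = d1, d2, max_angle
        self.keyframe = []   # frames merged into the current key frame
        self.ref = None      # reference frame: (corner centroid, plane centroid, pos)
        self.database = []   # packed key frames

    def push(self, corners, planes, pos, angle):
        c_c, c_p = centroid(corners), centroid(planes)
        if not self.keyframe:                       # S1042: key frame empty
            self.keyframe = [(corners, planes)]
            self.ref = (c_c, c_p, pos)
            return
        drift = dist(c_c, self.ref[0]) + dist(c_p, self.ref[1])
        if (drift >= self.d1 or dist(pos, self.ref[2]) > self.d2
                or angle > self.max_angle):
            # Second judging condition: pack, clear, re-seed with this frame.
            self.database.append(self.keyframe)
            self.keyframe = [(corners, planes)]
            self.ref = (c_c, c_p, pos)
        else:
            # First judging condition: superimpose into the key frame.
            self.keyframe.append((corners, planes))
            self.ref = (c_c, c_p, pos)  # simplified reference update
```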
Preferably, the application also provides a specific maintenance mode for simultaneously maintaining two parallel key frames, and the corresponding steps are as follows:
s1044, inserting the point cloud data of the current frame after the preprocessing operation into the auxiliary key frame each time the preprocessing operation is finished on the laser point cloud data, and executing the step contents of S1041, S1042 and S1043 on the main key frame;
s1045, judging whether the main key frame meets the first judging condition and the second judging condition in real time, and executing preset processing operation according to the judging result;
S1046, when the main key frame satisfies the second judging condition and performs the packing operation in the preset processing operation, i.e., when the main key frame is packed and stored into the preset key frame database: if this is the first packing, recording the pose coordinate M of the current main key frame at the moment of packing, and determining the relative distance D between the pose coordinate M and the pose coordinate of the reference frame; if this is not the first packing, determining the relative distance between the current main key frame pose and the pose coordinate M recorded at the previous packing, updating M with the current main key frame pose, and updating D with the currently determined relative distance;
S1047, when the relative distance between the pose coordinate of the auxiliary key frame and the coordinate M reaches D, packing the auxiliary key frame and storing it into the key frame database;
s1048, after the relative distance D is obtained at each determination, if the main key frame executes the packing operation again before the relative distance between the pose coordinate of the auxiliary key frame and M reaches D, the auxiliary key frame is packed together with the main key frame and stored into the key frame database;
S1049, clearing the main key frame each time the main key frame is packed; the secondary key frame is emptied each time it is packed.
In implementation, the difference between this scheme and the previous one is that an auxiliary key frame is added, so that one key frame becomes two. The main key frame here can be regarded as the key frame mentioned in the above steps S1041-S1043: for the main key frame, steps S1041-S1043 are performed to judge whether it satisfies the packing condition. For the auxiliary key frame, S1044 is always performed, inserting the laser point cloud data that has completed the preprocessing operation into the auxiliary key frame.
specifically, at the start of the program, the primary key frame and the secondary key frame are both in a null state, and it is known in conjunction with S1041 and S1044 that, for the frame point cloud data that completes the preprocessing operation, the frame point cloud data will be inserted into the primary key frame and the secondary key frame at the same time. For the primary key frame, the same steps as in S1041-S1043 above are performed whenever the SLAM system acquires the frame point cloud data completing the preprocessing operation, so as to determine whether the primary key frame meets the packing condition (i.e. the second determination condition), and the packing operation is performed when the primary key frame meets the packing condition, in which each acquired frame point cloud data completing the preprocessing operation is continuously inserted into the secondary key frame.
For S1046: if, after the program first starts, the primary key frame satisfies the packing condition for the first time and is packed into the key frame database, the SLAM system records the pose coordinate of the current frame at the moment the current primary key frame is packed (the current frame being the point cloud data that has currently completed preprocessing and satisfies the second judging condition mentioned in S1043), marks it as M, and records the relative distance D between M and the coordinate of the current reference frame; both M and D can be calculated from the laser odometer described above.
When the primary key frame satisfies the packing condition for a subsequent (second, third, ...) time and is packed into the key frame database, the SLAM system records the pose coordinate of the current frame at the moment of packing and the relative distance between it and the pose coordinate recorded at the previous packing (i.e., M); the newly recorded pose coordinate updates M, and the relative distance determined this time updates D.
While the primary key frame goes through packing, the secondary key frame always keeps receiving the preprocessed frame point cloud data. S1047 and S1048 describe the conditions under which the secondary key frame is packed: for the secondary key frame into which frame point cloud data is continuously inserted, whenever its distance from M reaches D after a packing of the primary key frame, the SLAM system packs the secondary key frame and saves it into the key frame database;
however, if the distance between the secondary key frame and M has still not reached D by the time the primary key frame is packed again, the secondary key frame and the primary key frame are packed and stored into the key frame database at the same time, i.e., the secondary key frame is force-packed once. At this packing of the primary key frame, the SLAM system records the pose coordinate at packing time and the relative distance between it and the pose coordinate at the previous packing, updates M with the former and D with the latter, and repeats S1047 and S1048 with the updated M and D, judging in real time whether the distance between the secondary key frame and M reaches D, until the program ends.
From the above, the condition for packing the primary key frame is: the second judging condition is satisfied when steps S1041-S1043 are performed. The conditions for packing the secondary key frame are two: one is that, after a packing of the primary key frame, the distance between the secondary key frame and M reaches D; the other is that the primary key frame is packed twice in succession (i.e., from one packing through to the next) while the distance between the secondary key frame and M never reaches D within that period, in which case the secondary key frame is force-packed. It should be noted that each time the primary key frame is packed and S1046 is executed, the primary key frame is emptied, the preprocessed laser point cloud data is then reinserted according to S1041-S1043, and the reference frame is redetermined; each time the secondary key frame is packed, it is emptied and the preprocessed laser point cloud data is inserted according to step S1044.
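The packing bookkeeping for the two parallel key frames (S1046-S1048) can be sketched as follows (illustrative Python tracking only M, D and whether the secondary key frame has been packed since the last primary packing; the assumption that the secondary key frame packs at most once per primary-packing interval is a simplification for the sketch):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class DualKeyframeTracker:
    """Sketch of when the secondary key frame is packed.

    pack_primary(pos) is called at each primary-key-frame packing with the
    pose recorded at that moment and returns True when the secondary key
    frame must be force-packed together with the primary one (S1048);
    on_frame(pos) is called for every preprocessed frame and returns True
    when the secondary key frame should be packed on its own (S1047).
    """
    def __init__(self, ref_pos):
        self.M = None                 # pose recorded at the last primary packing
        self.D = None                 # distance threshold for the secondary frame
        self.ref_pos = ref_pos        # reference-frame pose (for the first D)
        self.sec_packed_since = True

    def pack_primary(self, pos):
        forced = False
        if self.M is None:                      # first packing: D = |M - ref|
            self.D = dist(pos, self.ref_pos)
        else:
            if not self.sec_packed_since:       # S1048: force-pack together
                forced = True
            self.D = dist(pos, self.M)          # update D with the new interval
        self.M = pos
        self.sec_packed_since = False
        return forced

    def on_frame(self, pos):
        # S1047: secondary key frame packs once it has moved D away from M.
        if (self.M is not None and not self.sec_packed_since
                and dist(pos, self.M) >= self.D):
            self.sec_packed_since = True
            return True
        return False
```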
The specific implementation steps of the loop detection in S103 are as follows:
S1141, calculating the feature vector of the current newly added key frame every time the key frame is newly added in the key frame database, and storing the feature vector of the current newly added key frame in a preset loop detection module;
S1142, determining, based on the pre-established k-d tree, the key frame whose feature vector is most similar to that of the currently added key frame, and determining it as a potential key frame. If the Hamming distance between the feature vector of the potential key frame and the newly added feature vector is smaller than a third distance threshold, the difference between the key frame number of the potential key frame and that of the currently added key frame is not smaller than a specified difference value, and the relative position distance between the potential key frame and the current key frame is not larger than a specified position threshold, the potential key frame is considered a candidate key frame corresponding to the currently added key frame. The key frame numbers represent the synthesis time of the corresponding key frames; the difference between the key frame numbers of any two key frames represents how close their synthesis times are, a smaller difference meaning more adjacent key frames;
S1143, if the candidate key frame exists, generating a large key frame, where the large key frame consists of the candidate key frame together with the key frames formed from several frames of point cloud data adjacent to the candidate key frame in detection time;
S1144, calculating the relative pose between the currently added key frame and the large key frame, and transforming the coordinates of the currently added key frame so that its coordinate origin is consistent with that of the large key frame; traversing each point of the currently added key frame, calculating the Euclidean distance between each point and its nearest point, and considering the match valid if the Euclidean distance is smaller than a fourth distance threshold;
S1145, when traversing is completed, averaging Euclidean distances corresponding to all points identified as being valid for matching, if the averaged value is smaller than a fourth distance threshold value, determining that loop is detected, determining that the current newly added key frame is a loop frame, and determining that all candidate key frames in a candidate key frame group corresponding to the loop frame are adjacent frames;
s1146, after loop is detected, connecting the current newly added key frame and the candidate key frames corresponding to the current newly added key frame in a graph optimization mode, performing nonlinear pose optimization, and updating the pose of each key frame in the key frame database by using the optimized pose.
In an implementation, the key frame database is used for storing key frames packaged each time, and the key frames in the key frame database at least comprise key frame numbers for distinguishing other key frames, the key frame numbers can be used for representing the synthesis time of the corresponding key frames, or the time when the key frames are inserted into the key frame database, the difference value of the key frame numbers corresponding to any key frames can be used for representing the proximity degree of the synthesis time of the key frames, and the smaller the difference value is, the more adjacent the key frames are.
When a key frame is newly added to the key frame database, the loop detection steps S1141-S1146 are executed. First, the newly added key frame is stored in the key frame database, and at the same time its feature vector is calculated from the distribution features of the point cloud and stored in the preset loop detection module; all feature vectors are inserted into the pre-established k-d tree according to their vector content. By querying the k-d tree, the feature vector most similar to the current one (i.e., with the highest element coincidence), its key frame number, and the corresponding key frame (i.e., the potential key frame mentioned in S1142) can be quickly found from the current key frame database;
Then it is judged whether the potential key frame is a candidate key frame corresponding to the newly added key frame. After this screening, if a candidate key frame exists, a large key frame is first determined: the large key frame is a key frame consisting of the candidate key frame and several frames of point cloud data adjacent to it in detection time (such as the two frames before and the two frames after the candidate key frame). The relative pose between the newly added key frame and the large key frame is then calculated with a point-to-plane ICP matching algorithm. After the coordinates of the newly added key frame are transformed into the large key frame, each point of the newly added key frame is traversed and the Euclidean distance to its nearest point is calculated through a KNN algorithm; if the Euclidean distance is smaller than the fourth distance threshold (e.g., 1 meter), the match is considered valid. The distances of all validly matched points are averaged; if the average distance is smaller than the fourth distance threshold, the matching degree is considered high, i.e., a true loop, and a loop is considered to have occurred; otherwise it is a false loop, and no loop is considered to have occurred.
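The candidate screening of S1142 can be sketched as follows (illustrative Python; a linear scan over stored feature vectors stands in for the k-d tree query, and the three thresholds are placeholder values):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def hamming(a, b):
    """Hamming distance between two equal-length binary feature vectors."""
    return sum(x != y for x, y in zip(a, b))

def find_loop_candidate(new_id, new_vec, new_pos, db,
                        d3=10, min_id_gap=50, max_pos=5.0):
    """db maps key_frame_id -> (feature_vector, position). Returns the
    candidate key-frame id, or None if the screening fails."""
    best, best_d = None, None
    for kid, (vec, _pos) in db.items():   # linear stand-in for the k-d tree
        d = hamming(new_vec, vec)
        if best_d is None or d < best_d:
            best, best_d = kid, d
    if best is None:
        return None
    _vec, pos = db[best]
    if (best_d < d3                           # Hamming distance below third threshold
            and abs(new_id - best) >= min_id_gap  # far apart in synthesis time
            and dist(pos, new_pos) <= max_pos):   # spatially close
        return best
    return None
```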
And after loop-back is considered, connecting the current key frame and the candidate key frame in a graph optimization mode, and performing nonlinear pose optimization. And finally, updating the pose of each key frame in the key frame database by using the pose result of the graph optimization.
S104, constructing and outputting an integral point cloud map based on the point cloud of the key frame and the optimized key frame.
In implementation, the embodiment of the present application exemplarily gives the following method for constructing an overall point cloud map, but the method for implementing the function is not limited to the example of the present application. Specifically, the SLAM system traverses the whole key frame database, transforms the position and the posture of the point cloud of the key frame to a global coordinate system through the position and the posture of the key frame, and then superimposes and downsamples each point cloud according to the key frame number to finally obtain the whole point cloud map. In other embodiments, all the point clouds may be superimposed first, the whole point cloud may be divided into small block areas by using an octree, and then downsampling may be performed on every other divided small block area, so that the calculation efficiency may be further improved.
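The map assembly described above can be sketched as follows (illustrative Python with key-frame poses simplified to planar (x, y, yaw); the octree block-wise downsampling variant is omitted):

```python
import math

def build_global_map(keyframe_db, res=0.5):
    """Transform each key frame's cloud into the global frame by its
    (optimised) pose, superimpose all clouds, then voxel-downsample.

    keyframe_db : list of (cloud, (px, py, yaw)) pairs, where cloud is a
    list of (x, y, z) points in the key frame's local coordinates.
    """
    world = []
    for cloud, (px, py, yaw) in keyframe_db:
        c, s = math.cos(yaw), math.sin(yaw)
        for x, y, z in cloud:
            # 2D rigid transform into the global coordinate system.
            world.append((c * x - s * y + px, s * x + c * y + py, z))
    # Voxel downsample: keep one centroid per occupied voxel.
    voxels = {}
    for p in world:
        key = tuple(int(v // res) for v in p)
        voxels.setdefault(key, []).append(p)
    return [tuple(sum(c) / len(v) for c in zip(*v)) for v in voxels.values()]
```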
The embodiment of the application also discloses a positioning and mapping system based on the mechanical rotary three-dimensional laser. Referring to fig. 2, comprising:
A motion distortion correction module 1 for performing distortion correction processing on the laser point cloud data based on the motion distortion correction model;
The point cloud data preprocessing module 2 is used for executing preprocessing operation on the laser point cloud data after distortion correction, wherein the preprocessing operation at least comprises ground point cloud segmentation, corner segmentation and plane point segmentation, and dynamic points are filtered in real time;
The key frame merging maintenance module 3 is used for merging the preprocessed laser point cloud data respectively to form key frames, carrying out loop detection on the key frames and the previous key frames, and carrying out optimization treatment on all the key frame pose after loop detection;
And the point cloud map generation module 4 is used for constructing and outputting an integral point cloud map based on the key frame point cloud and the optimized key frame pose.
Optionally, the motion distortion correction module 1 is configured to: each time one frame of point cloud data is acquired, calculate, based on the pre-constructed wheel odometer interpolation model, the pose P_cur of the current frame point cloud data and the pose P_prev of the previous frame point cloud data, where the previous frame point cloud data refers to the frame of point cloud data immediately before the current frame in the scanning-time dimension of the laser radar; calculate the relative pose ΔP of the current frame point cloud data with respect to the previous frame point cloud data; traverse each point of the current frame point cloud data and calculate the horizontal direction angle α_i corresponding to each point from its Z-direction projection coordinates, the horizontal angle of the last point in the current frame being denoted α_end; calculate the generation time t_i and interpolation ratio s_i corresponding to each point; and, according to the generation time t_i and interpolation ratio s_i of each point in the current frame point cloud data, output the pose corresponding to each point through the pre-constructed motion distortion correction model and calculate the coordinate data of each point of the current frame according to the pose to complete distortion correction.
Optionally, the system further comprises an odometer interpolation module 5, which is used for acquiring the odometer data detected by the wheel-type odometer and inserting the odometer data into a preset queue from the queue tail in order of detection time; when an interpolation timestamp is received by the preset wheel-type odometer interpolation model, deleting the odometer data in the preset queue one by one from the queue head; and, each time one piece of odometer data is deleted, temporarily storing the data timestamp and pose information of the deleted odometer data and judging whether a preset interpolation condition is satisfied, and if so, stopping the deletion and performing the interpolation operation;
The odometer interpolation module 5 is also used for: if the interpolation timestamp satisfies a first preset condition, determining and outputting the pose information corresponding to the interpolation timestamp, the first preset condition being that the time difference between the interpolation timestamp and the earlier temporarily stored data timestamp is smaller than a first preset difference; if the interpolation timestamp satisfies a second preset condition, determining and outputting the corresponding pose information, the second preset condition being that the time difference between the interpolation timestamp and the later temporarily stored data timestamp is smaller than a second preset difference; if the interpolation timestamp satisfies neither the first preset condition nor the second preset condition and lies between the two temporarily stored data timestamps, determining and outputting the pose interpolated between the two temporarily stored poses; if the interpolation timestamp is greater than both temporarily stored data timestamps and the time interval between the two timestamps is smaller than a preset time threshold, determining and outputting the extrapolated pose; and if the time interval is not smaller than the preset time threshold, determining and outputting the pose information corresponding to the interpolation timestamp;
The motion distortion correction module 1 is further configured to output, according to the generation time and interpolation ratio corresponding to each point in the current frame point cloud data, the pose corresponding to each point through the pre-constructed motion distortion correction model, and to calculate the coordinate data of each point of the current frame according to the pose to complete distortion correction.
Optionally, the key frame merging maintenance module 3 is configured to judge, each time the preprocessing operation is completed on the laser point cloud data, whether the current key frame is empty; if so, to insert the laser point cloud data on which the preprocessing operation has currently been completed into the key frame as first frame point cloud data and take the first frame point cloud data as the reference frame of the current key frame; and to calculate the point cloud center of gravity of the corner point cloud and the point cloud center of gravity of the planar point cloud in the first frame point cloud data;
And the module is configured to: if the first judging condition is met, transform the pose corresponding to the current frame point cloud data into the reference frame coordinate system, superpose and downsample the current frame point cloud data with the reference frame to form a new reference frame, and update the reference frame accordingly; if the second judging condition is met, pack the key frame at that moment and store it in a preset key frame database, clear the key frame, insert the current frame point cloud data into the cleared key frame, and replace the current reference frame with the current frame point cloud data; the key frame database is used for storing the key frames formed by each packing.
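A minimal sketch of this merge-or-pack decision, assuming (as the later judging-condition text suggests) that the first condition compares the drift of the corner and plane point cloud centers of gravity against a threshold. The threshold values `d1`, `d2`, and `max_rot` are illustrative assumptions, not values from the patent.

```python
import math

def centroid(points):
    """Mean of a list of (x, y, z) points (the point cloud center of gravity)."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def keyframe_decision(corner_c, plane_c, ref_corner_c, ref_plane_c,
                      pose_dist, rot_angle,
                      d1=0.5, d2=2.0, max_rot=math.radians(30)):
    """Return 'merge' when the current frame should be superposed into the
    reference frame, or 'pack' when the key frame should be packed and reset."""
    drift = math.dist(corner_c, ref_corner_c) + math.dist(plane_c, ref_plane_c)
    if drift < d1 and pose_dist <= d2 and rot_angle <= max_rot:
        return "merge"   # centroids still close, pose and rotation within limits
    return "pack"        # drifted, moved, or rotated too far: start a new key frame
```

A frame whose centroids have barely moved relative to the reference frame is merged; once the summed centroid drift, the pose distance, or the rotation angle exceeds its threshold, the accumulated key frame is packed into the database.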
Optionally, the key frame merging maintenance module 3 is further configured to: each time the preprocessing operation is completed on the laser point cloud data, insert the current frame point cloud data on which the preprocessing operation has been completed into a secondary key frame, and execute the step contents of S1041, S1042 and S1043 on the primary key frame; when the primary key frame satisfies the second judging condition and the packing operation in the preset processing operation is performed, that is, when the primary key frame is packed and stored in the preset key frame database: if it is the first packing, record the pose coordinate M of the current primary key frame at packing and determine the relative distance between the pose coordinate M of the current primary key frame and the pose coordinate of the reference frame; if it is not the first packing, determine the current primary key frame pose and the relative distance between it and the primary key frame pose coordinate M recorded at the previous packing, update M with the current primary key frame pose, and update the recorded relative distance with the currently determined relative distance; when the relative distance between the pose coordinate of the secondary key frame and the coordinate M reaches the recorded relative distance, pack the secondary key frame and store it in the key frame database; if, after a relative distance is determined, the packing operation in the preset processing operation is executed before the relative distance between the pose coordinate of the secondary key frame and M reaches the recorded relative distance, pack the secondary key frame together with the primary key frame and store them in the key frame database; and clear the primary key frame each time it is packed, and clear the secondary key frame each time it is packed.
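One possible reading of this staggered primary/secondary packing scheme, sketched as a small state machine. The planar pose representation, the method names, and the exact trigger ordering are assumptions made for illustration; the sketch only shows the bookkeeping of M, the recorded relative distance d, and the two packing paths.

```python
import math

class DualKeyframePacker:
    """Sketch: the primary key frame packs on its own trigger; M and d are
    recorded at each primary packing. The secondary key frame packs once its
    pose is d away from M; if the primary packs again first, both are packed
    together. Poses are (x, y) tuples for brevity."""
    def __init__(self):
        self.M, self.d = None, None
        self.database = []
        self.secondary_pending = False   # no secondary cycle open yet

    def on_primary_pack(self, primary_pose, ref_pose):
        # Distance to the reference on the first packing, to the previous M after.
        new_d = math.dist(primary_pose, ref_pose if self.M is None else self.M)
        if self.secondary_pending:       # secondary never reached d on its own
            self.database.append("primary+secondary")
        else:
            self.database.append("primary")
        self.M, self.d = primary_pose, new_d
        self.secondary_pending = True    # a new secondary cycle begins

    def on_secondary_pose(self, secondary_pose):
        if self.secondary_pending and math.dist(secondary_pose, self.M) >= self.d:
            self.database.append("secondary")
            self.secondary_pending = False
```

The effect is that the secondary key frame stream is packed roughly half a packing interval out of phase with the primary stream, so consecutive packed key frames overlap.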
Optionally, the key frame merging maintenance module 3 is further configured to calculate, each time a key frame is newly added to the key frame database, the feature vector of the currently newly added key frame and store it in a preset loop detection module; and, based on a pre-established k-d tree, to determine the key frame whose feature vector is most similar to that of the currently newly added key frame as a potential key frame, and, if the Hamming distance between the feature vector of the potential key frame and the newly added feature vector is smaller than a third distance threshold, to regard the potential key frame as a candidate key frame corresponding to the currently newly added key frame;
The key frame merging maintenance module 3 is further configured to generate, if the candidate key frame exists, a large key frame formed by the candidate key frame and a plurality of frames of point cloud data adjacent to the candidate key frame in detection time; to calculate the relative pose between the currently newly added key frame and the large key frame and transform the coordinates of the currently newly added key frame so that the coordinate origins of the currently newly added key frame and the large key frame coincide; and to traverse each point of the currently newly added key frame, calculate the Euclidean distance between each point and its nearest point, and regard the match as valid if the Euclidean distance is smaller than a fourth distance threshold;
The key frame merging maintenance module 3 is further configured to average, when the traversal is completed, the Euclidean distances of all points whose matches are regarded as valid; if the obtained average value is smaller than the fourth distance threshold, a loop is considered detected, the currently newly added key frame is regarded as a loop frame, and all candidate key frames in the candidate key frame group corresponding to the loop frame are regarded as adjacent frames; and, when a loop is detected, to connect the currently newly added key frame and its corresponding candidate key frames by means of graph optimization, perform nonlinear pose optimization, and update the pose of each key frame in the key frame database with the optimized poses.
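The geometric verification step can be sketched as follows, assuming both clouds have already been transformed into a common coordinate frame. The brute-force nearest-neighbour search stands in for the k-d tree used in practice, and the threshold value `d4` is an assumed figure.

```python
import math

def verify_loop(new_points, large_points, d4=0.3):
    """For each point of the aligned newly added key frame, find its nearest
    neighbour in the 'large key frame'; a match is valid when the distance is
    below d4, and a loop is accepted when the mean distance over the valid
    matches is also below d4."""
    valid = []
    for p in new_points:
        nearest = min(math.dist(p, q) for q in large_points)
        if nearest < d4:
            valid.append(nearest)
    if not valid:
        return False          # no valid matches at all: reject the candidate
    return sum(valid) / len(valid) < d4
```

Only when the averaged residual of the valid matches stays under the threshold is the candidate accepted as a loop frame and handed to the graph optimizer.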
Optionally, the key frame merging maintenance module 3 is further configured to regard the potential key frame as the candidate key frame corresponding to the currently newly added key frame if the Hamming distance between the feature vector of the potential key frame and the newly added feature vector is smaller than the third distance threshold, the difference between the key frame number of the potential key frame and that of the currently newly added key frame is not smaller than a specified difference, and the relative position distance between the candidate key frame and the current key frame is not greater than a specified position threshold.
The embodiment of the application also discloses a positioning and mapping device based on the mechanical rotation type three-dimensional laser, comprising a memory and a processor, wherein the memory stores a computer program that can be loaded by the processor to execute the above positioning and mapping method based on the mechanical rotation type three-dimensional laser.
The embodiment of the application also discloses a computer-readable storage medium storing a computer program that can be loaded by a processor to execute the above positioning and mapping method based on the mechanical rotation type three-dimensional laser, the computer-readable storage medium including: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the scope of application. It will be apparent that the described embodiments are merely some, but not all, embodiments of the application. Based on these embodiments, all other embodiments that may be obtained by one of ordinary skill in the art without inventive effort are within the scope of the application.

Claims (10)

1. A positioning and mapping method based on a mechanical rotation type three-dimensional laser, characterized by comprising the following steps:
Performing distortion correction processing on the laser point cloud data based on the motion distortion correction model;
Performing preprocessing operation on the laser point cloud data after distortion correction, wherein the preprocessing operation at least comprises ground point cloud segmentation, corner segmentation and plane point segmentation, and filtering dynamic points in real time;
Combining the preprocessed laser point cloud data to form key frames respectively, performing loop detection with the previous key frames, and performing optimization processing on all the key frame poses after loop detection;
And constructing and outputting an integral point cloud map based on the key frame point cloud and the optimized key frame pose.
2. The positioning and mapping method based on the mechanical rotation type three-dimensional laser according to claim 1, characterized in that the laser point cloud data is obtained by scanning with a preset laser radar at a preset frequency, the laser point cloud data obtained by each laser radar scan being one frame of point cloud data;
the performing distortion correction processing on the laser point cloud data based on the motion distortion correction model includes:
When one frame of point cloud data is acquired, calculating, based on a pre-constructed wheel-type odometer interpolation model, the pose of the current frame point cloud data and the pose of the previous frame point cloud data; wherein the previous frame point cloud data refers to the frame of point cloud data immediately preceding the current frame point cloud data in the scanning time dimension of the corresponding laser radar;
calculating the relative pose of the current frame point cloud data relative to the previous frame point cloud data
Traversing each point of the current frame point cloud data, and calculating the horizontal direction angle corresponding to each point according to the Z-direction projection coordinates of the point, the horizontal angle of the last point in the current frame point cloud data being defined in the same way; and calculating the generation time and interpolation ratio corresponding to each point; wherein the horizontal direction angle corresponding to each point is determined from the Z-direction projection coordinates at which the point is located;
According to the generation time and interpolation ratio corresponding to each point in the current frame point cloud data, outputting the pose corresponding to each point through a pre-constructed motion distortion correction model, and calculating the coordinate data of each point of the current frame according to the pose to complete distortion correction.
3. The positioning and mapping method based on mechanical rotation type three-dimensional laser according to claim 2, characterized in that the method further comprises:
acquiring odometer data obtained by wheel type odometer detection, and sequentially inserting the odometer data into a preset queue from the tail of the preset queue according to the sequence of detection time; wherein each piece of odometer data comprises a data time stamp and pose information;
When an interpolation timestamp is received by the preset wheel-type odometer interpolation model, deleting the odometer data in the preset queue one by one from the queue head; and, each time one piece of odometer data is deleted, temporarily storing the data timestamp and pose information of the deleted odometer data and judging whether a preset interpolation condition is satisfied, and if so, stopping the deletion and performing the interpolation operation; wherein the preset interpolation condition is: only one piece of odometer data remains in the preset queue, or the data timestamp corresponding to the odometer data at the head of the preset queue is greater than the interpolation timestamp;
Wherein the performing interpolation operation includes:
If the interpolation timestamp satisfies a first preset condition, determining and outputting the pose information corresponding to the interpolation timestamp; wherein the first preset condition is: the time difference between the interpolation timestamp and the earlier temporarily stored data timestamp is smaller than a first preset difference;
If the interpolation timestamp satisfies a second preset condition, determining and outputting the corresponding pose information; wherein the second preset condition is: the time difference between the interpolation timestamp and the later temporarily stored data timestamp is smaller than a second preset difference;
If the interpolation timestamp satisfies neither the first preset condition nor the second preset condition and lies between the two temporarily stored data timestamps, determining and outputting the pose interpolated between the two temporarily stored poses, wherein the interpolation uses the pose speed between them;
If the interpolation timestamp is greater than both temporarily stored data timestamps and the time interval between the two timestamps is smaller than a preset time threshold, determining and outputting the extrapolated pose; and if the time interval is not smaller than the preset time threshold, determining and outputting the pose information corresponding to the interpolation timestamp;
The step of outputting, according to the generation time and interpolation ratio corresponding to each point in the current frame point cloud data, the pose corresponding to each point through a pre-constructed motion distortion correction model comprises:
determining, according to the generation time and interpolation ratio corresponding to each point in the current frame point cloud data, the interpolation timestamp corresponding to each point, inputting the interpolation timestamp into the pre-constructed motion distortion correction model, and outputting the pose corresponding to each point in the current frame point cloud data.
4. The positioning and mapping method based on mechanical rotation type three-dimensional laser according to claim 2, wherein the preprocessing operation further comprises a down-sampling process; the step of respectively merging the preprocessed laser point cloud data to form key frames comprises the following steps:
S1041, after the preprocessing operation is completed on the laser point cloud data, judging whether the current key frame is empty;
S1042, if so, inserting the laser point cloud data on which the preprocessing operation has currently been completed into the key frame as first frame point cloud data, and taking the first frame point cloud data as the reference frame of the current key frame; calculating the point cloud center of gravity of the corner point cloud and the point cloud center of gravity of the planar point cloud in the first frame point cloud data; wherein each center of gravity is obtained by averaging the coordinates of the corner points or of the plane points, respectively, in the first frame point cloud data;
S1043, if not, calculating and judging whether the current frame point cloud data on which the preprocessing operation has currently been completed satisfies a first judging condition or a second judging condition, and executing a preset processing operation according to the judgment result;
Wherein, the executing the preset processing operation according to the determination result includes:
If the first judging condition is met, transforming the pose corresponding to the current frame point cloud data into the reference frame coordinate system, superposing and downsampling the current frame point cloud data with the reference frame to form a new reference frame, and updating the reference frame accordingly;
If the second judging condition is met, packaging the keyframes at the moment, storing the keyframes in a preset keyframe database, clearing the keyframes, inserting the point cloud data of the current frame into the cleared keyframes, and replacing the current reference frame with the point cloud data of the current frame; the key frame database is used for storing key frames formed by packing each time;
Wherein the first judging condition is: the sum of the distance of the corner point cloud center of gravity of the nth frame point cloud data relative to the corner point cloud center of gravity of the reference frame and the distance of the planar point cloud center of gravity of the nth frame point cloud data relative to the planar point cloud center of gravity of the reference frame is smaller than a first distance threshold; the second judging condition is: the sum of the two distance values is not smaller than the first distance threshold, or the distance between the pose coordinate of the nth frame point cloud data and the pose coordinate of the reference frame is greater than a second distance threshold, or the rotation angle of the nth frame laser point cloud data is greater than a specified angle threshold.
5. The positioning and mapping method based on mechanical rotation type three-dimensional laser according to claim 4, wherein the merging the preprocessed laser point cloud data to form key frames respectively further comprises:
Each time the preprocessing operation is completed on the laser point cloud data, inserting the current frame point cloud data on which the preprocessing operation has been completed into a secondary key frame, and executing the step contents of S1041, S1042 and S1043 on a primary key frame;
When the primary key frame satisfies the second judging condition and the packing operation in the preset processing operation is performed, that is, when the primary key frame is packed and stored in the preset key frame database: if it is the first packing, recording the pose coordinate M of the current primary key frame at packing, and determining the relative distance between the pose coordinate M of the current primary key frame and the pose coordinate of the reference frame; if it is not the first packing, determining the current primary key frame pose and the relative distance between it and the primary key frame pose coordinate M recorded at the previous packing, updating M with the current primary key frame pose, and updating the recorded relative distance with the currently determined relative distance;
When the relative distance between the pose coordinate of the secondary key frame and the coordinate M reaches the recorded relative distance, packing the secondary key frame and storing it in the key frame database;
If, after a relative distance is determined, the packing operation in the preset processing operation is executed before the relative distance between the pose coordinate of the secondary key frame and M reaches the recorded relative distance, packing the secondary key frame together with the primary key frame and storing them in the key frame database;
Clearing the primary keyframe each time the primary keyframe is packed; the secondary key frame is emptied each time it is packed.
6. The positioning and mapping method based on the mechanical rotation type three-dimensional laser according to claim 4, characterized in that the performing loop detection with the previous key frames, and performing optimization processing on all key frame poses after loop detection, comprises:
calculating the feature vector of the current newly added key frame when the key frame is newly added in the key frame database, and storing the feature vector of the current newly added key frame in a preset loop detection module;
determining, based on a pre-established k-d tree, the key frame whose feature vector is most similar to the feature vector of the currently newly added key frame as a potential key frame, and, if the Hamming distance between the feature vector of the potential key frame and the newly added feature vector is smaller than a third distance threshold, regarding the potential key frame as a candidate key frame corresponding to the currently newly added key frame;
if the candidate key frame exists, generating a large key frame, wherein the large key frame is the key frame formed by a plurality of frame point cloud data adjacent to the candidate key frame in detection time;
Calculating the relative pose between the current newly added key frame and the large key frame, and transforming the coordinates of the current newly added key frame so that the coordinate origin of the current newly added key frame is consistent with that of the large key frame; traversing each point of the current newly added key frame, calculating Euclidean distance between each point and the nearest point, and if the Euclidean distance is smaller than a fourth distance threshold, considering that matching is valid;
when traversing is completed, averaging Euclidean distances corresponding to all points which are considered to be matched effectively, if the averaged value is smaller than a fourth distance threshold value, determining that a loop is detected, determining that the current newly added key frame is a loop frame, and determining that all candidate key frames in a candidate key frame group corresponding to the loop frame are adjacent frames;
And after loop-back is detected, connecting the current newly added key frame and the candidate key frames corresponding to the current newly added key frame in a graph optimization mode, performing nonlinear pose optimization, and updating the pose of each key frame in the key frame database by using the optimized pose.
7. The method for locating and mapping based on mechanically rotating three-dimensional laser according to claim 6, wherein if the hamming distance between the feature vector of the potential keyframe and the current newly added feature vector is smaller than a third distance threshold, the potential keyframe is considered as a candidate keyframe corresponding to the current newly added keyframe, comprising:
If the Hamming distance between the feature vector of the potential key frame and the currently newly added feature vector is smaller than the third distance threshold, the difference between the key frame number of the potential key frame and the key frame number of the currently newly added key frame is not smaller than a specified difference, and the relative position distance between the candidate key frame and the current key frame is not greater than a specified position threshold, the potential key frame is regarded as the candidate key frame corresponding to the currently newly added key frame; wherein the key frame numbers represent the synthesis times of the corresponding key frames, the difference between the key frame numbers of any two key frames represents how close their synthesis times are, and the smaller the difference, the closer the key frames.
8. A positioning and mapping system based on a mechanical rotation type three-dimensional laser, characterized by comprising:
A motion distortion correction module (1) for performing distortion correction processing on the laser point cloud data based on the motion distortion correction model;
the point cloud data preprocessing module (2) is used for executing preprocessing operation on the laser point cloud data after distortion correction, wherein the preprocessing operation at least comprises ground point cloud segmentation, corner segmentation and plane point segmentation, and dynamic points are filtered in real time;
the key frame merging maintenance module (3) is used for merging the preprocessed laser point cloud data respectively to form key frames, carrying out loop detection on the key frames and the previous key frames, and carrying out optimization treatment on all the key frame poses after loop detection;
and the point cloud map generation module (4) is used for constructing and outputting an integral point cloud map based on the key frame point cloud and the optimized key frame pose.
9. A positioning and mapping device based on a mechanical rotation type three-dimensional laser, characterized by comprising a memory and a processor, wherein the memory stores a computer program that can be loaded by the processor to execute the positioning and mapping method based on the mechanical rotation type three-dimensional laser according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores a computer program that can be loaded by a processor to execute the positioning and mapping method based on the mechanical rotation type three-dimensional laser according to any one of claims 1 to 7.
CN202410362519.8A 2024-03-28 2024-03-28 Positioning and mapping method and system based on mechanical rotation type three-dimensional laser Active CN117974926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410362519.8A CN117974926B (en) 2024-03-28 2024-03-28 Positioning and mapping method and system based on mechanical rotation type three-dimensional laser

Publications (2)

Publication Number Publication Date
CN117974926A true CN117974926A (en) 2024-05-03
CN117974926B CN117974926B (en) 2024-06-18

Family

ID=90858306

Country Status (1)

Country Link
CN (1) CN117974926B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118758343A (en) * 2024-09-05 2024-10-11 新石器慧通(北京)科技有限公司 Loop detection method and device
CN119355684A (en) * 2024-12-26 2025-01-24 中煤科工开采研究院有限公司 Point cloud matching method, device and electronic equipment for rotating laser radar
CN119826808A (en) * 2024-12-03 2025-04-15 元辰科技发展(无锡)有限公司 Multi-sensor fusion real-time positioning and mapping system and method for automatic driving

Citations (7)

Publication number Priority date Publication date Assignee Title
CN113674399A (en) * 2021-08-16 2021-11-19 杭州图灵视频科技有限公司 A method and system for constructing indoor 3D point cloud map of mobile robot
CN114526745A (en) * 2022-02-18 2022-05-24 太原市威格传世汽车科技有限责任公司 Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
WO2022257801A1 (en) * 2021-06-09 2022-12-15 山东大学 Slam-based mobile robot mine scene reconstruction method and system
CN115479598A (en) * 2022-08-23 2022-12-16 长春工业大学 Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN115773747A (en) * 2022-12-07 2023-03-10 网络通信与安全紫金山实验室 A high-precision map generation method, device, equipment and storage medium
CN115963508A (en) * 2022-05-13 2023-04-14 合肥合工安驰智能科技有限公司 Tightly-coupled SLAM method and system for laser radar and IMU
CN116337072A (en) * 2023-03-24 2023-06-27 华侨大学 A construction machine drawing, method, device, and readable storage medium


Non-Patent Citations (4)

Title
JUNLIN HUANG 等: "VWR-SLAM: Tightly Coupled SLAM System Based on Visible Light Positioning Landmark, Wheel Odometer, and RGB-D Camera", IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 29 December 2022 (2022-12-29) *
姜文飞: "Research on an Intelligent Vehicle SLAM Algorithm Fusing Laser and Vision in Underground Garage Scenes", China Master's Theses Full-text Database, Engineering Science and Technology II, 15 February 2023 (2023-02-15) *
柯艳国 等: "A Real-Time 3D Laser SLAM Algorithm Based on Graph Optimization", 机电一体化 (Mechatronics), no. 1, 15 April 2020 (2020-04-15) *
陈强 等: "A LiDAR Motion Distortion Compensation Algorithm Based on NDT Registration and Wheel Odometry", 高技术通信 (High Technology Letters), vol. 34, no. 1, 31 January 2024 (2024-01-31) *


Also Published As

Publication number Publication date
CN117974926B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
CN117974926B (en) Positioning and mapping method and system based on mechanical rotation type three-dimensional laser
JP5430456B2 (en) Geometric feature extraction device, geometric feature extraction method, program, three-dimensional measurement device, object recognition device
Shapiro et al. Motion from point matches using affine epipolar geometry
CN112219087A (en) Pose prediction method, map construction method, movable platform and storage medium
CN113570629B (en) Semantic segmentation method and system for removing dynamic objects
JP4650751B2 (en) Method and apparatus for aligning 3D shape data
JP6761388B2 (en) Estimator and program
CN113218408B (en) Multi-sensor fusion 2D SLAM method and system suitable for multiple terrains
CN118518093B (en) Laser SLAM method, device and medium based on multi-frame space occupancy
CN111578959A (en) Unknown environment autonomous positioning method based on improved Hector SLAM algorithm
CN114037800A (en) Construction system, method and device of octree map and electronic equipment
CN111948658A (en) Deep water area positioning method for identifying and matching underwater landform images
US20040165776A1 (en) Means of matching 2D motion vector fields in a render, match, and refine iterative 3D scene model refinement system so as to attain directed hierarchical convergence and insensitivity to color, lighting, and textures
CN117518195A (en) Robot long-period SLAM method and system based on graph sparsity maintenance
CN117470219A (en) Ground separation method and device and feeding operation system
CN115965929A (en) Point cloud-based obstacle detection method and device and domain controller
CN119273760B (en) Repositioning method, equipment and medium based on 3D laser radar and priori map
CN120085317A (en) A natural scene positioning method and system for AGV based on 3D laser radar
CN117968682B (en) Dynamic point cloud removing method based on multi-line laser radar and inertial measurement unit
US20060020562A1 (en) Apparatus and method for estimating optical flow
CN112068547A (en) Robot positioning method and device based on AMCL and robot
CN119338849A (en) A dynamic SLAM system and method based on image segmentation
CN117968667A (en) SLAM point cloud map construction method and system for loop detection of inspection robots
CN117392268A (en) A laser scanning mapping method and system based on adaptive combination of CPD and ICP algorithms
JP2024092716A (en) Mobile system and orthoimage map generation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant