
CN113218408A - Multi-sensor fusion 2Dslam method and system suitable for multiple terrains - Google Patents


Info

Publication number
CN113218408A
Authority
CN
China
Prior art keywords
particle
point cloud
2dslam
map
sensor fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110327244.0A
Other languages
Chinese (zh)
Other versions
CN113218408B (en
Inventor
江如海
王玉龙
袁胜
刘跃
丁骥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Zhongke Zhichi Technology Co ltd
Original Assignee
Hefei Zhongke Zhichi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Zhongke Zhichi Technology Co ltd filed Critical Hefei Zhongke Zhichi Technology Co ltd
Priority to CN202110327244.0A priority Critical patent/CN113218408B/en
Publication of CN113218408A publication Critical patent/CN113218408A/en
Application granted granted Critical
Publication of CN113218408B publication Critical patent/CN113218408B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • G01C21/32Structuring or formatting of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a multi-sensor fusion 2Dslam method suitable for multiple terrains, which comprises the steps of: reading laser radar and encoder data, resolving the encoder data and processing the radar data; segmenting the point cloud into ground and non-ground points based on scanning-line features and calculating gradient information; projecting the three-dimensional non-ground point cloud into two dimensions according to the gradient information; and fusing the encoder and the two-dimensional laser radar point cloud by particle filtering. A 2Dslam system suitable for multi-sensor fusion of multiple terrains is also disclosed. The method is suitable for flat, bumpy and sloped terrain, goes beyond the application scope of traditional 2Dslam, further expands its application scenarios, and improves the positioning and mapping accuracy of the algorithm by fusing multiple sensors.

Description

Multi-sensor fusion 2Dslam method and system suitable for multiple terrains
Technical Field
The invention relates to the technical field of positioning navigation and map construction, in particular to a multi-sensor fusion 2Dslam method and system suitable for multiple terrains.
Background
SLAM (Simultaneous Localization And Mapping) mainly solves the problem of positioning, navigation and map construction while a mobile device runs in an unknown environment. Positioning and mapping first require data acquisition, and most of the prior art adopts an encoder and a laser radar sensor for this purpose. However, the conventional slam method, which uses an encoder to resolve position and a single-line lidar to detect obstacles, is only suitable for flat terrain; mapping and positioning complex terrains such as bumpy or sloped ground with conventional methods yields poor accuracy. Simultaneous mapping and positioning of complex terrain using an encoder and a lidar is therefore a problem well worth studying.
Disclosure of Invention
The invention aims to solve the technical problem of providing a multi-sensor fusion 2Dslam method and system suitable for multi-terrain, which are suitable for flat, bumpy and slope terrains and break through the traditional 2Dslam application occasions.
In order to solve the technical problems, the invention adopts a technical scheme that: there is provided a 2Dslam method for multi-sensor fusion for multiple terrains, comprising the steps of:
s1: acquiring an encoder pulse value, and resolving a motion model parameter according to the pulse value; reading a laser radar point cloud, carrying out grid mean value down-sampling on the point cloud data, and removing point cloud distortion according to a motion model;
s2, dividing the laser point cloud into ground points and non-ground points based on the scanning-line features, calculating the gradient of the ground points, and projecting the three-dimensional non-ground points into two-dimensional points according to the gradient;
s3: estimating to obtain the particle pose of the current moment according to the particle pose of the previous moment and the encoder data motion model;
s4, performing scan matching with a gradient descent method on the processed two-dimensional laser radar point cloud against the particle map at the previous moment, optimizing the particle pose, and updating the weight;
and S5, updating the map of each particle by referring to the current laser radar scanning data according to the obtained attitude of the particle, and taking the map of the particle with the highest weight as the current map.
In a preferred embodiment of the present invention, the step S1 includes the following steps:
s1.1: acquiring the encoder pulse values, and solving the platform's own motion speed V and angular speed ω;
s1.2: reading all three-dimensional points within the laser radar detection range and storing them in container A; dividing the point cloud in container A into cubic grids of side length d, taking the mean of all point coordinates (x, y, z) and timestamps within each grid, and storing all grid means in container B;
s1.3: converting the point coordinates back to the scan start time t according to the uniform-motion model with speed V and angular speed ω and the point-cloud timestamps in container B, and storing the corrected point cloud in container C.
Furthermore, the value of d ranges from 0.001 m to 0.005 m.
In a preferred embodiment of the present invention, the step S2 includes the following steps:
s2.1: according to container C, let points on two adjacent scanning lines with the same ray angle be (x1, y1, z1) and (x2, y2, z2); the inclination angle is
θ = arctan( |z2 − z1| / √((x2 − x1)² + (y2 − y1)²) );
points whose inclination angle is less than 10 degrees are considered ground points, thereby dividing the cloud into ground points and non-ground points;
s2.2: calculating the spacing between adjacent segments of the ground point-cloud scanning lines, L = √((x2 − x1)² + (y2 − y1)²), and the gradient α = arctan(H / L), where H = |z1 − z2| is the absolute height difference; then projecting each non-ground three-dimensional point (x, y, z) into two-dimensional coordinates (x′, y′) according to the gradient α, and storing the two-dimensional point cloud in container D.
In a preferred embodiment of the present invention, the step S4 includes the following steps:
s4.1: according to the laser point cloud in container D and the particle map m_{t−1}^{(i)} at time (t−1), performing scan matching with a gradient descent method and optimizing the particle pose x_t^{(i)}, obtaining the optimal estimated pose x̂_t^{(i)};
S4.2: according to the weight of particles at time (t-1)
Figure BDA0002995115510000026
Particle map
Figure BDA00029951155100000213
And the laser point cloud Z of the container D at the time ttPosition and posture of the particle
Figure BDA0002995115510000027
According to
Figure BDA00029951155100000214
Updating to obtain the weight of the particles at the time t
Figure BDA0002995115510000028
In a preferred embodiment of the present invention, the step S5 includes the following steps:
s5.1: calculating the particle dispersion
N_eff = 1 / Σ_i (w_t^{(i)})²;
if N_eff is less than the threshold T (taken as 2n/3, where n is the number of particles), resampling the particles; otherwise, executing step S5.2;
s5.2: according to the laser point cloud in container D at time t, the particle pose x̂_t^{(i)} and the particle map m_{t−1}^{(i)} at time t−1, updating to obtain the particle map m_t^{(i)} at time t.
S5.3: judging whether the data is completely read, if not, returning to the step S1; and if the data processing is finished, traversing all the particles, and outputting the particle map with the maximum weight as the current map.
In order to solve the technical problem, the invention adopts another technical scheme that: a multi-sensor fusion 2Dslam system suitable for multi-terrain is provided, which mainly comprises:
the encoder data module is used for acquiring an encoder pulse value and resolving a motion model parameter according to the pulse value;
the laser radar data module is used for reading laser radar point cloud, adopting grid mean value to carry out down-sampling processing on the point cloud data and removing point cloud distortion according to the motion model;
the laser point cloud segmentation module is used for segmenting the laser point cloud into ground points and non-ground points, calculating the gradient of the ground points and projecting the three-dimensional non-ground points into two-dimensional points according to the gradient;
the particle pose acquisition module is used for estimating and obtaining the particle pose of the current moment according to the particle pose of the previous moment and the encoder data motion model;
the particle pose optimization module is used for performing scan matching with a gradient descent method on the two-dimensional laser radar point cloud output by the laser radar data module against the particle map at the previous moment, optimizing the particle pose, and updating the weight;
and the particle map output module is used for updating the map of each particle by referring to the current laser radar scanning data according to the particle pose obtained by the particle pose optimization module, and taking the map of the particle with the highest weight as the current map.
In order to solve the above technical problem, the present invention further provides a multi-sensor fusion 2Dslam device suitable for multiple terrains, comprising:
a memory, a processor, and a 2Dslam method program stored on the memory and executable on the processor suitable for multi-sensor fusion of multiple terrain;
the multi-terrain multi-sensor fusion-suitable 2Dslam method program, when executed by the processor, implements the steps of the multi-terrain multi-sensor fusion-suitable 2Dslam method of any one of the above.
In order to solve the technical problem, the invention further provides a computer medium, wherein a 2Dslam method program suitable for multi-sensor fusion of multiple terrains is stored on the computer medium;
the multi-terrain multi-sensor fusion-suitable 2Dslam method program, when executed by the processor, implements the steps of the multi-terrain multi-sensor fusion-suitable 2Dslam method of any one of the above.
The invention has the beneficial effects that: the method comprises the steps of resolving encoder data and processing radar data by reading laser radar and the encoder data; scanning line feature segmentation point clouds to obtain ground point clouds and non-ground point clouds, and calculating gradient information; projecting the three-dimensional non-ground point cloud to two dimensions according to the gradient information; and (3) filtering and fusing the encoder and the two-dimensional point cloud particles of the laser radar. The method is suitable for flat, bumpy and slope terrains, breaks through the traditional 2Dslam application occasions, further expands the application scene, and improves the positioning and mapping accuracy of the algorithm by fusing multiple sensors.
Drawings
FIG. 1 is a flow chart of a 2Dslam method of the present invention for multi-sensor fusion of multiple terrain;
FIG. 2 is a block diagram of the architecture of the 2Dslam system for multi-sensor fusion for multi-terrain.
Detailed Description
The following detailed description of preferred embodiments of the present invention, taken in conjunction with the accompanying drawings, will make the advantages and features of the invention easier for those skilled in the art to understand, and thereby define the scope of the invention more clearly.
Referring to fig. 1, an embodiment of the present invention includes:
a 2Dslam method for multi-sensor fusion for multiple terrain, comprising the steps of:
s1: acquiring an encoder pulse value, and resolving a motion model parameter according to the pulse value; reading a laser radar point cloud, carrying out grid mean value down-sampling on the point cloud data, and removing point cloud distortion according to a motion model; the method comprises the following specific steps:
s1.1: acquiring the pulse values of the left and right encoders, and solving the platform's own motion speed V and angular speed ω;
s1.2: reading all three-dimensional points within the laser radar detection range and storing them in container A; dividing the point cloud in container A into cubic grids of side length d, taking the mean of all point coordinates (x, y, z) and timestamps within each grid, and storing all grid means in container B; preferably, d ranges from 0.001 m to 0.005 m;
s1.3: converting the point coordinates back to the scan start time t according to the uniform-motion model with speed V and angular speed ω and the point-cloud timestamps in container B, and storing the corrected point cloud in container C.
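The downsampling and distortion-removal steps s1.1–s1.3 can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation: it assumes the point cloud is an (N, 4) array of [x, y, z, timestamp] rows (standing in for containers A/B/C) and a simplified planar constant-velocity motion model; the function names are illustrative.

```python
import numpy as np

def grid_mean_downsample(points, d):
    """Downsample an (N, 4) array of [x, y, z, t] rows by averaging all
    points that fall into the same cubic grid cell of side length d."""
    cells = np.floor(points[:, :3] / d).astype(np.int64)
    # Group rows by cell index, then average coordinates and timestamps.
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    n_cells = int(inverse.max()) + 1
    sums = np.zeros((n_cells, points.shape[1]))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

def remove_distortion(points, v, omega, t0):
    """Undo motion distortion under a constant (v, omega) model: transform
    each point back into the sensor frame at the scan start time t0."""
    out = points.copy()
    dt = points[:, 3] - t0
    theta = omega * dt                      # heading change since t0
    dx = v * dt * np.cos(theta / 2.0)       # approximate arc translation
    dy = v * dt * np.sin(theta / 2.0)
    c, s = np.cos(theta), np.sin(theta)
    x, y = points[:, 0], points[:, 1]
    out[:, 0] = c * x - s * y + dx
    out[:, 1] = s * x + c * y + dy
    return out
```

For example, two points 2–3 mm apart collapse into one averaged point at d = 0.005 m, while a point 1 m away stays in its own cell.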
S2, dividing the laser point cloud into ground points and non-ground points based on the scanning-line features, calculating the gradient of the ground points, and projecting the three-dimensional non-ground points into two-dimensional points according to the gradient; the specific steps are as follows:
s2.1: according to container C, let points on two adjacent scanning lines with the same ray angle be (x1, y1, z1) and (x2, y2, z2); the inclination angle is
θ = arctan( |z2 − z1| / √((x2 − x1)² + (y2 − y1)²) );
points whose inclination angle is less than 10 degrees are considered ground points, thereby dividing the cloud into ground points and non-ground points;
s2.2: calculating the spacing between adjacent segments of the ground point-cloud scanning lines, L = √((x2 − x1)² + (y2 − y1)²), and the gradient α = arctan(H / L), where H = |z1 − z2| is the absolute height difference; then projecting each non-ground three-dimensional point (x, y, z) into two-dimensional coordinates (x′, y′) according to the gradient α, and storing the two-dimensional point cloud in container D.
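A small sketch of the scan-line segmentation in s2.1–s2.2, under stated assumptions: the 10-degree tilt threshold follows the text, but the exact two-dimensional projection formula appears only as an image in the original document, so `project_to_2d` uses one plausible slope-compensated form; all names are illustrative.

```python
import math

def is_ground_pair(p1, p2, max_tilt_deg=10.0):
    """Tilt angle between two points on adjacent scan lines with the same
    ray angle; pairs below the threshold are treated as ground points."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    horiz = math.hypot(x2 - x1, y2 - y1)
    tilt = math.degrees(math.atan2(abs(z2 - z1), horiz))
    return tilt < max_tilt_deg

def project_to_2d(point, alpha):
    """Flatten a non-ground 3D point onto the 2D map plane given the local
    ground gradient alpha (radians). This rotation-by-slope form is an
    assumption, since the patent's projection formula is not recoverable."""
    x, y, z = point
    return (x * math.cos(alpha) + z * math.sin(alpha), y)
```

On flat ground (alpha = 0) the projection simply drops the z coordinate, which matches conventional 2Dslam behaviour.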
S3: estimating the particle pose at the current moment from the particle pose at the previous moment and the encoder-data motion model; that is, from the particle pose x_{t−1}^{(i)} at time t−1 and the constant-velocity motion model with speed V and angular velocity ω, the particle pose x̄_t^{(i)} at time t is estimated.
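The constant-velocity prediction of step S3 can be sketched like this; a hedged sketch assuming a planar pose (x, y, θ) and per-particle Gaussian noise (the noise model and its parameters are assumptions, not stated in the patent):

```python
import math
import random

def predict_pose(pose, v, omega, dt, sigma_xy=0.01, sigma_theta=0.005):
    """Propagate one particle pose (x, y, theta) through the constant
    velocity/angular-velocity model, then add Gaussian noise so the
    particles spread out."""
    x, y, theta = pose
    if abs(omega) < 1e-9:                        # straight-line motion
        x += v * dt * math.cos(theta)
        y += v * dt * math.sin(theta)
    else:                                        # motion along an arc
        x += v / omega * (math.sin(theta + omega * dt) - math.sin(theta))
        y += v / omega * (math.cos(theta) - math.cos(theta + omega * dt))
    theta += omega * dt
    return (x + random.gauss(0, sigma_xy),
            y + random.gauss(0, sigma_xy),
            theta + random.gauss(0, sigma_theta))
```

With the noise terms set to zero this reduces to the deterministic odometry prediction used for x̄_t.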
S4, performing scan matching with a gradient descent method on the processed two-dimensional laser radar point cloud against the particle map at the previous moment, optimizing the particle pose, and updating the weight; the specific steps are as follows:
s4.1: according to the laser point cloud in container D and the particle map m_{t−1}^{(i)} at time (t−1), performing scan matching with a gradient descent method and optimizing the particle pose x_t^{(i)} according to the matching result, obtaining the optimal estimated pose x̂_t^{(i)};
S4.2: according to the weight of particles at time (t-1)
Figure BDA0002995115510000053
Particle map
Figure BDA00029951155100000510
And the laser point cloud Z of the container D at the time ttPosition and posture of the particle
Figure BDA0002995115510000054
According to
Figure BDA00029951155100000511
Updating to obtain the weight of the particles at the time t
Figure BDA0002995115510000055
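The weight update of s4.2 has the standard Rao-Blackwellized particle-filter form w_t = w_{t−1} · p(z_t | m, x̂); the sketch below assumes that form, with a toy likelihood-field stand-in for the observation model (the patent's exact likelihood expression appears only as an image in the original):

```python
import math

def scan_likelihood(scan_xy, grid, origin, res):
    """Toy likelihood-field stand-in: score each 2D scan point by the
    occupancy value of the map cell it falls in (grid: dict cell -> prob)."""
    score = 0.0
    for x, y in scan_xy:
        cell = (int((x - origin[0]) / res), int((y - origin[1]) / res))
        score += grid.get(cell, 0.1)         # unknown cells get low mass
    return math.exp(score / max(len(scan_xy), 1))

def update_weights(weights, likelihoods):
    """w_t = w_{t-1} * p(z_t | m, x_hat), then normalise to sum to 1."""
    new = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(new)
    return [w / total for w in new]
```

A particle whose predicted scan lands on well-matched map cells thus gains weight relative to the others after normalisation.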
And S5, updating the map of each particle by referring to the current laser radar scanning data according to the obtained attitude of the particle, and taking the map of the particle with the highest weight as the current map. The method comprises the following specific steps:
s5.1: calculating the particle dispersion
N_eff = 1 / Σ_i (w_t^{(i)})²;
if N_eff is less than the threshold T (generally taken as 2n/3, where n is the number of particles), resampling the particles; otherwise, executing step S5.2;
s5.2: according to the laser point cloud in container D at time t, the particle pose x̂_t^{(i)} and the particle map m_{t−1}^{(i)} at time t−1, updating to obtain the particle map m_t^{(i)} at time t.
S5.3: judging whether the data is completely read, if not, returning to the step S1; and if the data processing is finished, traversing all the particles, and outputting the particle map with the maximum weight as the current map.
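The dispersion test and resampling of s5.1 can be sketched as below; N_eff = 1/Σw² is the standard effective-sample-size definition, and systematic resampling is one common choice, since the patent does not specify which resampling scheme it uses.

```python
import random

def n_eff(weights):
    """Effective sample size N_eff = 1 / sum(w_i^2) for normalised weights."""
    return 1.0 / sum(w * w for w in weights)

def systematic_resample(weights):
    """Systematic resampling: one random offset, then n evenly spaced draws
    against the cumulative weight distribution. Returns particle indices."""
    n = len(weights)
    u = random.random() / n
    cumsum, c = [], 0.0
    for w in weights:
        c += w
        cumsum.append(c)
    indices, j = [], 0
    for i in range(n):
        p = u + i / n
        while j < len(cumsum) - 1 and cumsum[j] < p:
            j += 1
        indices.append(j)
    return indices
```

With uniform weights N_eff equals the particle count n; it drops toward 1 as the weights concentrate on a single particle, which is when resampling is triggered.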
In the embodiment of the present invention, as shown in fig. 2, a 2Dslam system suitable for multi-sensor fusion of multiple terrains is further provided, which mainly includes:
the encoder data module is used for acquiring an encoder pulse value and resolving a motion model parameter according to the pulse value;
the laser radar data module is used for reading laser radar point cloud, adopting grid mean value to carry out down-sampling processing on the point cloud data and removing point cloud distortion according to the motion model;
the laser point cloud segmentation module is used for segmenting the laser point cloud into ground points and non-ground points, calculating the gradient of the ground points and projecting the three-dimensional non-ground points into two-dimensional points according to the gradient;
the particle pose acquisition module is used for estimating and obtaining the particle pose of the current moment according to the particle pose of the previous moment and the encoder data motion model;
the particle pose optimization module is used for performing scan matching with a gradient descent method on the two-dimensional laser radar point cloud output by the laser radar data module against the particle map at the previous moment, optimizing the particle pose, and updating the weight;
and the particle map output module is used for updating the map of each particle by referring to the current laser radar scanning data according to the attitude of the particle obtained by the particle pose optimization module, and taking the map of the particle with the highest weight as the current map.
The 2Dslam system applicable to multi-terrain multi-sensor fusion can execute the 2Dslam method applicable to multi-terrain multi-sensor fusion provided by the invention, can execute any combination of the implementation steps of the method embodiments, and has the corresponding functions and beneficial effects of the method.
The embodiment of the invention also provides a multi-sensor fusion 2Dslam device suitable for multiple terrains, which comprises a memory, a processor and a multi-sensor fusion 2Dslam method program which is stored on the memory and can be run on the processor and is suitable for multiple terrains;
the multi-terrain multi-sensor fusion-suitable 2Dslam method program, when executed by the processor, implements the steps of the multi-terrain multi-sensor fusion-suitable 2Dslam method of any one of the above.
The embodiment of the invention also provides a computer medium, wherein the computer medium is stored with a 2Dslam method program suitable for multi-sensor fusion of multiple terrains;
the multi-terrain multi-sensor fusion-suitable 2Dslam method program, when executed by the processor, implements the steps of the multi-terrain multi-sensor fusion-suitable 2Dslam method of any one of the above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (9)

1. A 2Dslam method for multi-sensor fusion for multiple terrain, comprising the steps of:
s1: acquiring an encoder pulse value, and resolving a motion model parameter according to the pulse value; reading a laser radar point cloud, carrying out grid mean value down-sampling on the point cloud data, and removing point cloud distortion according to a motion model;
s2, dividing the laser point cloud into ground points and non-ground points based on the scanning-line features, calculating the gradient of the ground points, and projecting the three-dimensional non-ground points into two-dimensional points according to the gradient;
s3: estimating to obtain the particle pose of the current moment according to the particle pose of the previous moment and the encoder data motion model;
s4, performing scan matching with a gradient descent method on the processed two-dimensional laser radar point cloud against the particle map at the previous moment, optimizing the particle pose, and updating the weight;
and S5, updating the map of each particle by referring to the current laser radar scanning data according to the obtained attitude of the particle, and taking the map of the particle with the highest weight as the current map.
2. The multi-sensor fusion-enabled 2Dslam method for multi-terrain, according to claim 1, wherein the specific steps of step S1 include:
s1.1: acquiring the encoder pulse values, and solving the platform's own motion speed V and angular speed ω;
s1.2: reading all three-dimensional points within the laser radar detection range and storing them in container A; dividing the point cloud in container A into cubic grids of side length d, taking the mean of all point coordinates (x, y, z) and timestamps within each grid, and storing all grid means in container B;
s1.3: converting the point coordinates back to the scan start time t according to the uniform-motion model with speed V and angular speed ω and the point-cloud timestamps in container B, and storing the corrected point cloud in container C.
3. The multi-sensor fusion 2Dslam method for multi-terrain applications as defined in claim 2, wherein d ranges from 0.001 m to 0.005 m.
4. The multi-sensor fusion-enabled 2Dslam method according to claim 2, wherein the specific steps of step S2 include:
s2.1: according to container C, let points on two adjacent scanning lines with the same ray angle be (x1, y1, z1) and (x2, y2, z2); the inclination angle is
θ = arctan( |z2 − z1| / √((x2 − x1)² + (y2 − y1)²) );
points whose inclination angle is less than 10 degrees are considered ground points, thereby dividing the cloud into ground points and non-ground points;
s2.2: calculating the spacing between adjacent segments of the ground point-cloud scanning lines, L = √((x2 − x1)² + (y2 − y1)²), and the gradient α = arctan(H / L), where H = |z1 − z2| is the absolute height difference; then projecting each non-ground three-dimensional point (x, y, z) into two-dimensional coordinates (x′, y′) according to the gradient α, and storing the two-dimensional point cloud in container D.
5. The multi-sensor fusion-enabled 2Dslam method according to claim 4, wherein the specific steps of step S4 include:
s4.1: according to the laser point cloud in container D and the particle map m_{t−1}^{(i)} at time (t−1), performing scan matching with a gradient descent method and optimizing the particle pose x_t^{(i)}, obtaining the optimal estimated pose x̂_t^{(i)};
S4.2: according to the weight of particles at time (t-1)
Figure FDA0002995115500000025
Particle map
Figure FDA0002995115500000026
And the laser point cloud Z of the container D at the time ttPosition and posture of the particle
Figure FDA0002995115500000027
According to
Figure FDA0002995115500000028
Updating to obtain the weight of the particles at the time t
Figure FDA0002995115500000029
6. The multi-sensor fusion-enabled 2Dslam method according to claim 4, wherein the specific steps of step S5 include:
s5.1: calculating the particle dispersion
N_eff = 1 / Σ_i (w_t^{(i)})²;
if N_eff is less than the threshold T (taken as 2n/3, where n is the number of particles), resampling the particles; otherwise, executing step S5.2;
s5.2: according to the laser point cloud in container D at time t, the particle pose x̂_t^{(i)} and the particle map m_{t−1}^{(i)} at time t−1, updating to obtain the particle map m_t^{(i)} at time t.
S5.3: judging whether the data is completely read, if not, returning to the step S1; and if the data processing is finished, traversing all the particles, and outputting the particle map with the maximum weight as the current map.
7. A multi-sensor fused 2Dslam system suitable for use in multiple terrains, comprising:
the encoder data module is used for acquiring an encoder pulse value and resolving a motion model parameter according to the pulse value;
the laser radar data module is used for reading laser radar point cloud, adopting grid mean value to carry out down-sampling processing on the point cloud data and removing point cloud distortion according to the motion model;
the laser point cloud segmentation module is used for segmenting the laser point cloud into ground points and non-ground points, calculating the gradient of the ground points and projecting the three-dimensional non-ground points into two-dimensional points according to the gradient;
the particle pose acquisition module is used for estimating and obtaining the particle pose of the current moment according to the particle pose of the previous moment and the encoder data motion model;
the particle pose optimization module is used for performing scan matching with a gradient descent method on the two-dimensional laser radar point cloud output by the laser radar data module against the particle map at the previous moment, optimizing the particle pose, and updating the weight;
and the particle map output module is used for updating the map of each particle by referring to the current laser radar scanning data according to the attitude of the particle obtained by the particle pose optimization module, and taking the map of the particle with the highest weight as the current map.
8. A multi-sensor fused 2Dslam device adapted for use with multiple terrains, comprising:
a memory, a processor, and a 2Dslam method program stored on the memory and executable on the processor suitable for multi-sensor fusion of multiple terrain;
the 2Dslam method program for multi-terrain multi-sensor fusion, when executed by the processor, implementing the steps of the 2Dslam method for multi-terrain multi-sensor fusion of any of claims 1-6.
9. A computer medium having stored thereon a 2Dslam method program suitable for multi-sensor fusion of multiple terrain;
the 2Dslam method program for multi-terrain multi-sensor fusion, when executed by the processor, implementing the steps of the 2Dslam method for multi-terrain multi-sensor fusion of any of claims 1-6.
CN202110327244.0A 2021-03-26 2021-03-26 2D slam method and system for multi-sensor fusion suitable for multi-terrain Active CN113218408B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110327244.0A CN113218408B (en) 2021-03-26 2021-03-26 2D slam method and system for multi-sensor fusion suitable for multi-terrain

Publications (2)

Publication Number Publication Date
CN113218408A true CN113218408A (en) 2021-08-06
CN113218408B CN113218408B (en) 2024-06-11

Family

ID=77084222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110327244.0A Active CN113218408B (en) 2021-03-26 2021-03-26 2D slam method and system for multi-sensor fusion suitable for multi-terrain

Country Status (1)

Country Link
CN (1) CN113218408B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140350839A1 (en) * 2013-05-23 2014-11-27 Irobot Corporation Simultaneous Localization And Mapping For A Mobile Robot
CN109932713A (en) * 2019-03-04 2019-06-25 北京旷视科技有限公司 Positioning method, apparatus, computer equipment, readable storage medium and robot
CN110095788A (en) * 2019-05-29 2019-08-06 电子科技大学 A kind of RBPF-SLAM improved method based on grey wolf optimization algorithm
US20200393566A1 (en) * 2019-06-14 2020-12-17 DeepMap Inc. Segmenting ground points from non-ground points to assist with localization of autonomous vehicles
CN111427370A (en) * 2020-06-09 2020-07-17 北京建筑大学 A Gmapping Mapping Method for Mobile Robots Based on Sparse Pose Adjustment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YANG, JS: "Localization/Mapping Motion Control System for a Mobile Robot", SEVENTH INTERNATIONAL SYMPOSIUM ON PRECISION ENGINEERING MEASUREMENTS AND INSTRUMENTATION, vol. 8321, pages 1 - 9, XP060025024, DOI: 10.1117/12.916655 *
WEI SHUANGFENG: "A survey of simultaneous localization and mapping methods based on lidar", APPLICATION RESEARCH OF COMPUTERS, vol. 37, no. 2, pages 327 - 332 *
WANG YIREN: "Optimized design of an RBPF-SLAM system based on a lidar sensor", TRANSDUCER AND MICROSYSTEM TECHNOLOGIES, vol. 36, no. 9, pages 77 - 80 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113607166A (en) * 2021-10-08 2021-11-05 广东省科学院智能制造研究所 Indoor and outdoor positioning method and device for autonomous mobile robot based on multi-sensor fusion
CN113607166B (en) * 2021-10-08 2022-01-07 广东省科学院智能制造研究所 Indoor and outdoor positioning method and device for autonomous mobile robot based on multi-sensor fusion
US12019453B2 (en) 2021-10-08 2024-06-25 Institute Of Intelligent Manufacturing, Gdas Multi-sensor-fusion-based autonomous mobile robot indoor and outdoor positioning method and robot
CN114332259A (en) * 2021-12-29 2022-04-12 福州大学 A point cloud encoding and decoding method based on vehicle lidar
CN114329978A (en) * 2021-12-29 2022-04-12 杭州鲁尔物联科技有限公司 Fusion method and device of slope units, electronic equipment and storage medium
CN114763998A (en) * 2022-03-30 2022-07-19 西安交通大学 Unknown environment parallel navigation method and system based on micro radar array
CN114763998B (en) * 2022-03-30 2023-08-22 西安交通大学 Unknown environment parallel navigation method and system based on micro radar array
CN115841635A (en) * 2022-12-13 2023-03-24 广州赛特智能科技有限公司 Method, device and equipment for dividing ground points and storage medium

Also Published As

Publication number Publication date
CN113218408B (en) 2024-06-11

Similar Documents

Publication Publication Date Title
CN113218408A (en) Multi-sensor fusion 2Dslam method and system suitable for multiple terrains
EP3620823B1 (en) Method and device for detecting precision of internal parameter of laser radar
Weon et al. Object Recognition based interpolation with 3d lidar and vision for autonomous driving of an intelligent vehicle
CN110674705B (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN114241448A (en) Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle
CN114565726B (en) A simultaneous localization and mapping method in unstructured dynamic environments
CN114859938B (en) Robot, dynamic obstacle state estimation method, device and computer equipment
CN112965076B (en) Multi-radar positioning system and method for robot
CN113487631A (en) Adjustable large-angle detection sensing and control method based on LEGO-LOAM
CN118230231B (en) Pose construction method and device of unmanned vehicle, electronic equipment and storage medium
Lee et al. Temporally consistent road surface profile estimation using stereo vision
CN113112491A (en) Cliff detection method and device, robot and storage medium
CN112907625A (en) Target following method and system applied to four-footed bionic robot
CN114037800A (en) Construction system, method and device of octree map and electronic equipment
CN116844124A (en) Three-dimensional target detection frame annotation method, device, electronic equipment and storage medium
CN115909262A (en) Parking space detection method, device, electronic device and storage medium
CN114419118B (en) Three-dimensional point cloud registration method, mobile device and storage medium
CN118670399A (en) Positioning method and system for automatic driving parameter adjustment in well and mining environment
CN117036447A (en) Indoor scene dense three-dimensional reconstruction method and device based on multi-sensor fusion
CN115965929A (en) Point cloud-based obstacle detection method and device and domain controller
CN115876207A (en) An unmanned driving positioning method, device and storage medium based on lidar point cloud feature matching
CN119579619A (en) A ground point cloud segmentation method for unmanned vehicles in unstructured environments
CN115685236A (en) Robot, robot skid processing method, device and readable storage medium
CN111783611A (en) Unmanned vehicle positioning method, device, unmanned vehicle and storage medium
CN118363008A (en) Robot positioning scene degradation processing method, rapid positioning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant