
CN112630745B - Laser radar-based environment mapping method and device

Info

Publication number: CN112630745B
Application number: CN202011551838.1A
Authority: CN (China)
Prior art keywords: pose, equipment, frame, discrete, current
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN112630745A
Inventors: 魏伟, 龙建睿, 邢志伟, 李骥, 赵信宇
Assignee: Shenzhen Dadao Zhichuang Technology Co ltd
Application filed by Shenzhen Dadao Zhichuang Technology Co ltd
Priority to CN202011551838.1A
Publication of CN112630745A
Application granted
Publication of CN112630745B

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a laser radar-based environment mapping method and device, a self-mobile device, and a storage medium, belonging to the technical field of intelligent navigation. The method comprises the following steps: acquiring an ith environment frame by using a laser radar, and determining feature points in the ith environment frame; within the estimated pose range of the current device, selecting pose values one by one as the device pose to be detected; determining the discrete grid corresponding to each feature point according to a preset grid mapping mode and the device pose to be detected; calculating the sum of the preset values of the discrete grids corresponding to all the feature points, and storing the sum of the preset values in correspondence with the device pose to be detected, wherein a preset value reflects the probability that no feature point exists in a discrete grid; selecting, from all pose values, the target pose value whose corresponding sum of preset values is minimal, and setting the target pose value as the current actual pose of the device; and constructing a current local environment map according to the current actual pose of the device and the feature points in the ith environment frame.

Description

Laser radar-based environment mapping method and device
Technical Field
The application relates to the technical field of intelligent navigation, in particular to an environment mapping method and device based on a laser radar, self-mobile equipment and a storage medium.
Background
With the continued development and improvement of machine intelligence technology, self-mobile devices have gradually become common intelligent equipment in production and daily life. Current self-mobile devices can be applied in a variety of scenarios such as homes, shopping malls, factories, and outdoors; in different scenarios they can automatically plan a travel route according to actual needs and move autonomously based on the planned route.
In particular, a user may input a destination point on the self-mobile device, or the self-mobile device may autonomously determine the destination point based on a preset configuration. The self-mobile device can then first determine the map coordinates corresponding to the destination point through a built-in environment map of the current scene, and plan all routes to those map coordinates according to the passable paths recorded in the environment map. Further, the self-mobile device may determine the actual optimal route according to a user instruction or a preset routing rule, map the optimal route into the actual scene, and then guide the user or travel to the destination point automatically.
In carrying out the present application, the inventors have found that the above-described technique has at least the following problems:
To move autonomously, a self-mobile device generally needs the ability to analyze its surroundings and make decisions on its own, based on knowledge of the environment, in order to complete the self-movement task. Thus, the self-mobile device must first be able to identify the scene environment, establish a corresponding scene map, and then locate its own position and plan a moving path. For outdoor positioning and navigation, GPS can be considered; for indoor environments, however, GPS cannot cope with the high complexity of the indoor scene. Therefore, there is a need for an effective identification method suitable for a variety of different scene environments.
Disclosure of Invention
In order to effectively and accurately identify different scene environments, the embodiment of the application provides an environment mapping method and device based on a laser radar, self-mobile equipment and a storage medium. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for mapping an environment based on a lidar, where the method includes:
acquiring an ith environmental frame by using a laser radar, and determining characteristic points in the ith environmental frame;
In the estimated pose range of the current equipment, the pose values are selected one by one to serve as the pose to be detected of the equipment;
Determining a discrete grid corresponding to each characteristic point according to a preset grid mapping mode and the pose to be detected of the equipment;
Calculating the sum of preset values of the discrete grids corresponding to all the characteristic points, and storing the sum of the preset values in correspondence with the pose to be detected of the equipment, wherein the preset value reflects the probability that no characteristic point exists in each discrete grid;
selecting a target pose value with the minimum sum of corresponding preset values from all pose values, and setting the target pose value as the current actual pose of the equipment;
and constructing a current local environment map according to the current actual pose of the equipment and the characteristic points in the ith environment frame.
Based on the technical scheme, the laser radar is utilized to continuously acquire the environment frames, the local environment map corresponding to each environment frame is continuously constructed, and finally, the local environment maps of multiple frames are spliced and combined, so that the global environment map with higher accuracy and perfect scene details can be obtained in different scenes.
Optionally, the method further comprises:
and determining the estimated pose range of the current equipment according to the equipment pose change rate, the scanning frequency of the laser radar and the actual pose of the equipment corresponding to the i-1 th environmental frame.
Based on the technical scheme, on the basis of the actual pose of the previous frame of equipment, the pose change process possibly occurring in the equipment is considered, so that a more reasonable and accurate estimated pose range can be obtained.
Optionally, the determining, according to a preset grid mapping manner and the to-be-detected pose of the device, the discrete grid corresponding to each feature point includes:
Determining the relative map pose of each characteristic point according to the equipment pose to be detected and the relative equipment pose of each characteristic point;
And determining the discrete grid corresponding to each feature point according to a preset grid mapping mode and the relative map pose.
Based on the technical scheme, the relative equipment pose of the characteristic points is converted into the relative map pose by using the equipment pose, so that the discrete grid corresponding to the characteristic points can be obtained.
Optionally, the method further comprises:
Determining all target discrete grids corresponding to the feature points of the i-1 th environmental frame, and setting the preset value of all target discrete grids to be 0;
And setting the minimum Euclidean distance value of each other discrete grid to any target discrete grid as the preset value of the other discrete grids.
Based on the technical scheme, the preset value of each discrete grid is updated according to the mapping relation between the characteristic points in the previous frame and the discrete grids, and Euclidean distance is adopted as the preset value of the discrete grid, so that assignment updating of the discrete grids can be completed rapidly and efficiently.
Optionally, the method further comprises:
If the discrete grid a corresponding to the target feature point in the i-1 th environmental frame is different from the discrete grid b corresponding to the target feature point in the i-2 th environmental frame, setting the discrete grid corresponding to the target feature point as the discrete grid b.
Based on the technical scheme, when the discrete grids corresponding to a feature point differ between the two frames, the discrete grid from the earlier frame is taken as the standard, which effectively reduces the continuous accumulation of errors during mapping and thus large deviations in the final environment map.
Optionally, before the constructing the current local environment map according to the current actual pose of the device and the feature point in the ith environment frame, the method further includes:
acquiring a pose change record of equipment in a current frame through a pose sensor preset on the equipment;
and adjusting the current actual pose of the equipment according to the pose change record and the actual pose of the equipment corresponding to the i-1 th environmental frame.
Based on the technical scheme, the actual pose of the equipment is adjusted by utilizing the pose change record of the equipment, so that the accuracy of the adjusted actual pose of the equipment can be improved, and the influence caused by algorithm errors is reduced.
Optionally, before the setting the target pose value to the current actual pose of the device, the method further includes:
If the sum of the preset values is larger than a specified early warning threshold value, stopping constructing a local environment map corresponding to the current frame, and adjusting the estimated pose range of the current device according to the pose change record of the device in the ith environment frame and the actual pose of the device corresponding to the i-1 th environment frame.
Based on the technical scheme, when the mapping relation between the feature points and the discrete grid has larger deviation, the predicted pose range is redefined by utilizing the pose change record and the actual pose of the equipment of the previous frame, so that the influence of errors on the construction of the graph can be reduced.
In a second aspect, an embodiment of the present application further provides an environment mapping apparatus based on a lidar, where the apparatus includes:
The environment scanning module is used for acquiring an ith environment frame by using the laser radar and determining characteristic points in the ith environment frame;
The pose selecting module is used for selecting pose values one by one as the pose to be detected of the equipment in the estimated pose range of the current equipment;
the grid mapping module is used for determining a discrete grid corresponding to each feature point according to a preset grid mapping mode and the equipment to-be-detected pose;
the pose detection module is used for calculating the sum of preset values of the discrete grids corresponding to all the characteristic points and storing the sum of the preset values in correspondence with the pose to be detected of the equipment, wherein the preset value reflects the probability that no characteristic point exists in each discrete grid;
the pose determining module is used for selecting a target pose value with the minimum sum of corresponding preset values from all pose values, and setting the target pose value as the current actual pose of the equipment;
and the map construction module is used for constructing a current local environment map according to the current actual pose of the equipment and the characteristic points in the ith environment frame.
Optionally, the pose selecting module is further configured to:
and determining the estimated pose range of the current equipment according to the equipment pose change rate, the scanning frequency of the laser radar and the actual pose of the equipment corresponding to the i-1 th environmental frame.
Optionally, the grid mapping module is specifically configured to:
Determining the relative map pose of each characteristic point according to the equipment pose to be detected and the relative equipment pose of each characteristic point;
And determining the discrete grid corresponding to each feature point according to a preset grid mapping mode and the relative map pose.
Optionally, the grid mapping module is further configured to:
Determining all target discrete grids corresponding to the feature points of the i-1 th environmental frame, and setting the preset value of all target discrete grids to be 0;
And setting the minimum Euclidean distance value of each other discrete grid to any target discrete grid as the preset value of the other discrete grids.
Optionally, the grid mapping module is further configured to:
If the discrete grid a corresponding to the target feature point in the i-1 th environmental frame is different from the discrete grid b corresponding to the target feature point in the i-2 th environmental frame, setting the discrete grid corresponding to the target feature point as the discrete grid b.
Optionally, the pose determining module is further configured to:
acquiring a pose change record of equipment in a current frame through a pose sensor preset on the equipment;
and adjusting the current actual pose of the equipment according to the pose change record and the actual pose of the equipment corresponding to the i-1 th environmental frame.
Optionally, the map construction module is further configured to:
If the sum of the preset values is larger than a specified early warning threshold value, stopping constructing a local environment map corresponding to the current frame, and adjusting the estimated pose range of the current device according to the pose change record of the device in the ith environment frame and the actual pose of the device corresponding to the i-1 th environment frame.
In a third aspect, there is provided a self-mobile device comprising a processor and a memory, the memory storing at least one instruction, at least one program, code set or instruction set, the at least one instruction, the at least one program, the code set or instruction set being loaded and executed by the processor to implement the lidar-based environment mapping method of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes or a set of instructions, the at least one instruction, the at least one program, the set of codes or the set of instructions being loaded and executed by a processor to implement the lidar-based environment mapping method according to the first aspect.
In summary, the application has the following beneficial effects:
by adopting the laser radar-based environment mapping method disclosed by the application, the self-mobile device can continuously scan surrounding environment frames through the laser radar, and then deduce the actual pose of the device corresponding to each frame by utilizing the mapping condition of the characteristic points and the discrete grids in the environment frames, thereby determining the actual pose of each characteristic point in the current frame on the basis of the actual pose of the device so as to complete the construction of a local environment map. In this way, the laser radar is utilized to continuously acquire the environment frames, and continuously construct the local environment map corresponding to each environment frame, and finally, the local environment maps of the multiple frames are spliced and combined, so that the global environment map with higher accuracy and perfect scene details can be obtained in different scenes.
Drawings
FIG. 1 is a flow chart of an environment mapping method based on a laser radar in an embodiment of the application;
FIG. 2 is a schematic diagram of a predicted pose range of a self-mobile device according to an embodiment of the present application;
FIG. 3 is a schematic view of a discrete grid labeled with preset values in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an environment mapping device based on a lidar in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings 1 to 4 and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The embodiment of the application provides a laser radar-based environment mapping method, which can be applied to a self-mobile device. Hereinafter, the self-mobile device may be any intelligent device with mapping, positioning, navigation and moving functions, such as an intelligent guide robot or an intelligent transport robot. The mapping function may be realized by detecting the environment of the scene where the self-mobile device is located through an environment detection apparatus arranged on the device and creating an environment map of the current scene based on the detection results; the positioning function may be realized by detecting nearby environment features with the environment detection apparatus and determining the pose of the self-mobile device in the environment map according to those features; the navigation function may be realized by planning, in the environment map, a travel route between the self-mobile device and a destination point according to the device pose and the coordinates of the destination point; and the moving function may be realized by generating movement parameters according to the travel route and driving the traveling mechanism according to those parameters. For the mapping function, this embodiment provides an environment mapping method that uses a laser radar for detection and is suitable for different application scenarios.
The process flow shown in fig. 1 will be described in detail with reference to the specific embodiments, and the following may be included:
101, acquiring an ith environmental frame by using a laser radar, and determining characteristic points in the ith environmental frame.
In implementation, when the self-mobile device starts to build a map, the local position can be set as the origin of coordinates, and the surrounding environment of the device in the current scene is continuously scanned by using the laser radar so as to obtain a plurality of continuous environment frames, and an initial local environment map is built by using the plurality of environment frames. And then, the self-mobile device can move in the current scene and continuously utilize the surrounding environment of the laser radar scanning device in the moving process, so that the local environment maps with different positions and angles can be continuously generated by utilizing the environment frames obtained by scanning. Specifically, taking the process of creating a local environment map by using the ith environment frame as an example, the self-mobile device can firstly scan the surrounding environment by using the laser radar to obtain the ith environment frame containing the point cloud data of the objects around the current device, and then can determine all the feature points contained in the ith environment frame according to the preset feature point screening rule. The feature point filtering rules herein may be set by a technician according to actual scene requirements, which is not limited in this embodiment.
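The feature-point screening rule is left open above; as one illustrative possibility only, the following sketch (the function name, voxel size, and the voxel-downsampling rule itself are assumptions, not taken from the patent) keeps a single representative point per voxel of the raw scan:

```python
import numpy as np

def select_feature_points(scan_points, voxel_size=0.1):
    """Pick one representative point per voxel of the raw lidar scan.

    scan_points: (N, 3) array of lidar returns in the sensor frame.
    Voxel downsampling is only a placeholder for the unspecified
    feature-point screening rule.
    """
    keys = np.floor(scan_points / voxel_size).astype(np.int64)
    # Keep the first point encountered in each voxel.
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return scan_points[np.sort(first_idx)]
```

Any other screening rule, for example edge or planar feature extraction, could be substituted here without changing the rest of the pipeline.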
102, In the estimated pose range of the current equipment, the pose values are selected one by one to serve as the pose to be detected of the equipment.
In implementation, in the process of creating a local environment map for the ith environment frame, the self-mobile device needs to determine the current pose of the self-mobile device, and then position surrounding objects in the scene environment based on the current pose of the self-mobile device. In this way, when the self-mobile device acquires the ith environmental frame, one pose value can be selected one by one from the estimated pose range of the current device as the current pose of the self-mobile device, and the current pose can also be called as the pose to be detected of the device. Here, the estimated pose range of the device may be a range of values of poses that may be possessed by all devices derived from the mobile device according to the historical pose information of the device. It is worth mentioning that the pose ζ of the self-mobile device may have values in 6 dimensions, and specifically may include 3 coordinate values "X, Y, Z" on the X axis, the Y axis, and the Z axis in the three-dimensional space coordinate system, and 3 values "rx, ry, and rz" corresponding to three euler angles of roll angle, pitch angle, and heading angle.
Referring to fig. 2, the size of the estimated pose range may be defined by a maximum displacement along each of the X axis, Y axis and Z axis of the three-dimensional space coordinate system, together with a maximum offset for each of the three Euler angles (roll angle, pitch angle and heading angle). The step sizes within the estimated pose range may be set to a displacement step Δs = 0.05 m and an angle step Δθ = 0.2°, i.e. the coordinate difference between two candidate pose values along any axis should be no less than Δs, and the angle difference on any Euler angle should be no less than Δθ. With the maximum offsets and step sizes set, the set of candidate pose values of the estimated pose range corresponding to the ith environmental frame is obtained as the Cartesian product of the discretized values on the six pose dimensions.
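To make the candidate enumeration concrete, the sketch below walks the six pose dimensions with the step sizes Δs = 0.05 m and Δθ = 0.2° quoted above; the maximum offsets (±0.2 m, ±4°), the function name, and the flat 6-D product are assumptions for illustration, and a practical implementation would likely prune or coarsen this search.

```python
import itertools
import numpy as np

def candidate_poses(prev_pose, max_shift=0.2, max_angle=4.0,
                    d_s=0.05, d_theta=0.2):
    """Enumerate candidate device poses around the previous actual pose.

    prev_pose: (x, y, z, rx, ry, rz). max_shift/max_angle are assumed
    bounds; d_s and d_theta are the step sizes given in the description.
    """
    shifts = np.arange(-max_shift, max_shift + 1e-9, d_s)
    angles = np.arange(-max_angle, max_angle + 1e-9, d_theta)
    x0, y0, z0, rx0, ry0, rz0 = prev_pose
    for dx, dy, dz, da, db, dc in itertools.product(
            shifts, shifts, shifts, angles, angles, angles):
        yield (x0 + dx, y0 + dy, z0 + dz, rx0 + da, ry0 + db, rz0 + dc)
```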
And 103, determining a discrete grid corresponding to each feature point according to a preset grid mapping mode and the pose to be detected of the equipment.
In implementation, after determining all the feature points in the ith environmental frame, the self-mobile device may determine, for each feature point, the discrete grid corresponding to that feature point according to a preset grid mapping mode combined with the pose to be detected of the device. The discrete grids can be obtained by dividing the current scene environment into a plurality of continuous, non-overlapping three-dimensional grids according to a preset grid size, which may be expressed as "M: rX×rY×rZ", so that the grid cell in which the three-dimensional coordinates of a feature point fall can be determined.
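A minimal sketch of such a grid mapping mode, assuming axis-aligned cells of size rX×rY×rZ (the 0.1 m resolution and the function name are illustrative assumptions):

```python
import numpy as np

def point_to_grid(map_point, resolution=(0.1, 0.1, 0.1)):
    """Map a 3-D map-frame coordinate to the index of its discrete grid cell.

    resolution corresponds to the cell sizes rX, rY, rZ; the 0.1 m values
    are illustrative, not taken from the patent.
    """
    rx, ry, rz = resolution
    ix = int(np.floor(map_point[0] / rx))
    iy = int(np.floor(map_point[1] / ry))
    iz = int(np.floor(map_point[2] / rz))
    return (ix, iy, iz)
```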
104, Calculating the sum of preset values of the discrete grids corresponding to all the characteristic points, and storing the sum of the preset values corresponding to the pose to be detected of the equipment.
Wherein the preset value represents a probability value reflecting that no feature point exists in each discrete grid.
In an implementation, after dividing the space around the device into a plurality of discrete grids, the self-mobile device may assign a value to each discrete grid (which may be referred to as a preset value of the discrete grid), where a specific numerical value may use a probability value of the existence of a feature point in each discrete grid as a reference, where a smaller preset value represents a higher probability that the feature point exists in the discrete grid, and conversely represents a lower probability. Therefore, after determining the discrete grids corresponding to each feature point in the ith environmental frame, the sum of preset values of the discrete grids corresponding to all feature points can be calculated, and the calculated sum of preset values can be stored corresponding to the pose to be detected of the device. It will be appreciated that each pose value in the estimated pose range mentioned in step 102 may correspond to a sum of preset values.
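A sketch of the scoring step, assuming the preset values are kept in a dictionary keyed by cell index and that the feature points have already been transformed into map coordinates for the candidate pose under test; the default penalty for unassigned cells is an assumption, and point_to_grid is the helper sketched earlier.

```python
def score_candidate(map_points, grid_values, default_value=10.0):
    """Sum the preset values of the discrete cells hit by the feature points,
    given their coordinates in the map frame for the candidate pose under test.

    grid_values: dict mapping (ix, iy, iz) -> preset value; cells that have
    not been assigned a value fall back to default_value (an assumed penalty,
    not specified in the patent).
    """
    total = 0.0
    for p in map_points:
        total += grid_values.get(point_to_grid(p), default_value)
    return total
```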
And 105, selecting a target pose value with the minimum sum of corresponding preset values from all pose values, and setting the target pose value as the current actual pose of the equipment.
In implementation, after the self-mobile device finishes storing the sum of preset values corresponding to each pose value in the estimated pose range, the target pose value with the minimum sum of preset values can be selected from all the pose values, and this target pose value is then set as the current actual pose of the device. It is easy to see that the minimum sum of preset values indicates that, taken as a whole, the discrete grids corresponding to the feature points of the current environment frame are the most likely to actually contain feature points, i.e. the mapping between feature points and discrete grids best matches expectation, so the corresponding target pose value can be considered closest to the actual pose of the self-mobile device.
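Selecting the target pose value then reduces to taking the stored (pose, sum) pair with the smallest sum; a minimal sketch (names assumed):

```python
def best_pose(scored_candidates):
    """scored_candidates: iterable of (pose_value, preset_value_sum) pairs
    stored during the search; returns the target pose with the minimal sum."""
    return min(scored_candidates, key=lambda item: item[1])[0]
```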
And 106, constructing a current local environment map according to the actual pose of the current device and the feature points in the ith environment frame.
In implementation, after the actual pose of the device of the i-th environmental frame is set by the self-mobile device, the pose of each feature point relative to the actual scene environment can be determined by combining the pose of the feature point in the i-th environmental frame relative to the pose of the self-mobile device, so that the current local environment map can be constructed based on the pose of the feature point relative to the actual scene environment. Furthermore, the self-mobile device can splice and combine all the local environment maps to generate an overall environment map in the current application scene.
Further, the determination mode of the estimated pose range of the current device may specifically be as follows: determining the estimated pose range of the current equipment according to the equipment pose change rate, the scanning frequency of the laser radar and the actual pose of the equipment corresponding to the i-1 th environmental frame.
In the implementation, when determining the estimated pose range of the device for the ith environmental frame, the self-mobile device can first determine the time length between two frames according to the scanning frequency of the laser radar, then calculate the maximum range of the actual pose change of the device between the two frames with reference to the device pose change rate, and further superimpose this maximum range on the actual pose of the device corresponding to the i-1 th environmental frame to obtain the estimated pose range of the current device. For example, if two frames are spaced 1 s apart, the displacement change rate is v = 0.2 m/s, the angle change rate is θ = 4°/s, and the actual pose of the device corresponding to the i-1 th environmental frame is (x, y, z, rx, ry, rz), then the estimated pose range spans x ± 0.2 m, y ± 0.2 m and z ± 0.2 m on the three axes and rx ± 4°, ry ± 4° and rz ± 4° on the three Euler angles.
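A sketch of this derivation using the numbers from the example (a 1 Hz scan rate, v = 0.2 m/s, θ = 4°/s); the function name and the tuple layout of the bounds are assumptions:

```python
def estimated_pose_range(prev_pose, v=0.2, omega=4.0, scan_hz=1.0):
    """Derive the search bounds for the current frame: max shift = v / scan_hz
    per axis and max rotation = omega / scan_hz per Euler angle, centred on
    the previous frame's actual pose.

    Returns (lower_bounds, upper_bounds) over the six pose components.
    """
    dt = 1.0 / scan_hz
    ds, dth = v * dt, omega * dt
    deltas = (ds, ds, ds, dth, dth, dth)
    lower = tuple(p - d for p, d in zip(prev_pose, deltas))
    upper = tuple(p + d for p, d in zip(prev_pose, deltas))
    return lower, upper
```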
For step 103, the determination manner of the discrete grid corresponding to the feature point may specifically be as follows: determining the relative map pose of each characteristic point according to the pose to be detected of the equipment and the relative equipment pose of each characteristic point; and determining a discrete grid corresponding to each feature point according to a preset grid mapping mode and a relative map pose.
In implementation, after the feature points in the ith environmental frame are determined, the self-mobile device may calculate the pose of each feature point relative to the self-mobile device (which may be referred to as the relative device pose), i.e. the pose value of the feature point in a three-dimensional space coordinate system constructed with the self-mobile device as the origin and the laser radar emitting direction as the X axis. In detail, if the relative device pose of the feature point k is p_k and the device pose to be detected is ξ, then a coordinate transformation matrix of the self-mobile device relative to the map origin can be determined from ξ, the matrix comprising a rotation matrix R and a displacement vector t, so that the coordinates of the feature point k can be converted with this transformation, q_k = R·p_k + t, to generate the relative map pose q_k of the feature point k. Then, the self-mobile device can determine the discrete grid corresponding to each feature point according to the preset grid mapping mode and the relative map pose of the feature point. For example, a feature point whose three-dimensional map coordinates fall within the bounds of discrete grid 1 is mapped to discrete grid 1.
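A sketch of this coordinate conversion, computing q_k = R·p_k + t for feature point k; the ZYX (heading-pitch-roll) Euler composition and degree units are assumptions, since the description does not fix a convention:

```python
import numpy as np

def transform_to_map(device_pose, point_k):
    """Convert the relative device pose of feature point k into its relative
    map pose: q_k = R * p_k + t.

    device_pose = (x, y, z, rx, ry, rz) with angles in degrees; the rotation
    composition order (yaw * pitch * roll) is an assumed convention.
    """
    x, y, z, rx, ry, rz = device_pose
    cr, sr = np.cos(np.radians(rx)), np.sin(np.radians(rx))
    cp, sp = np.cos(np.radians(ry)), np.sin(np.radians(ry))
    cy, sy = np.cos(np.radians(rz)), np.sin(np.radians(rz))
    R_roll = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R_pitch = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R_yaw = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R = R_yaw @ R_pitch @ R_roll          # rotation matrix
    t = np.array([x, y, z])               # displacement vector
    return R @ np.asarray(point_k) + t    # relative map pose of point k
```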
Further, the assignment process for the discrete grid may refer to the following process: determining all target discrete grids corresponding to the feature points of the i-1 th environmental frame, and setting the preset value of all target discrete grids to be 0; and setting the minimum Euclidean distance value of each other discrete grid to any target discrete grid as the preset value of the other discrete grids.
In the implementation, after the self-mobile device determines the actual pose of the device corresponding to the i-1 th environmental frame, the preset values of the discrete grids can be updated based on the relative map poses of the feature points of the i-1 th environmental frame. Specifically, the preset values of all target discrete grids corresponding to the feature points may be set to 0, and for every other discrete grid, the Euclidean distances from that grid to the target discrete grids may be calculated, with the minimum Euclidean distance value then set as the preset value of that grid. Referring to fig. 3, taking a plane coordinate system as an example, the numerical values marked in the discrete grids are the preset values of the grids.
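This assignment amounts to a Euclidean distance transform seeded at the target cells; a sketch over a dense local grid (grid_shape, cell_size, and the use of scipy are assumptions about the implementation, and cell indices are assumed to lie within grid_shape):

```python
import numpy as np
from scipy import ndimage

def preset_values(occupied_cells, grid_shape, cell_size=0.1):
    """Assign each cell the Euclidean distance to the nearest occupied cell.

    occupied_cells: iterable of (ix, iy, iz) indices hit by the previous
    frame's feature points; these target cells receive a preset value of 0.
    """
    occ = np.ones(grid_shape, dtype=np.uint8)
    for c in occupied_cells:
        occ[c] = 0                         # target cells act as zero seeds
    # distance_transform_edt gives each nonzero cell its Euclidean distance
    # (scaled by sampling) to the nearest zero cell.
    return ndimage.distance_transform_edt(occ, sampling=cell_size)
```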
Further, there may be the following processes when updating the preset value of the discrete grid: if the discrete grid a corresponding to the target feature point in the i-1 th environmental frame is different from the discrete grid b corresponding to the target feature point in the i-2 th environmental frame, setting the discrete grid corresponding to the target feature point as the discrete grid b.
In implementation, when determining the discrete grid corresponding to the feature point in the i-1 th environmental frame, if the target feature point is found to exist in the i-2 th environmental frame as well, and the discrete grid b corresponding to the target feature point in the i-2 th environmental frame is not the same discrete grid as the discrete grid a corresponding to the i-1 th environmental frame, the discrete grid corresponding to the target feature point in the i-1 th environmental frame may be set as the discrete grid b.
In another embodiment, the final determined actual pose of the device may be adjusted according to the pose change condition recorded by the device, and the corresponding processing may be as follows: acquiring a pose change record of equipment in a current frame through a pose sensor preset on the equipment; and according to the pose change record and the actual pose of the equipment corresponding to the i-1 th environmental frame, adjusting the current actual pose of the equipment.
In an implementation, a pose sensor may be preset on the self-mobile device for measuring pose changes during its travel. The pose sensor may include at least a displacement sensing component for recording the displacement changes of the device and a gyro sensor for recording the orientation changes of the device, so that the measured pose change includes at least a displacement change and an orientation change. On this basis, while the self-mobile device scans the environmental frames through the laser radar, the pose sensor can simultaneously acquire the pose change record of the device within each frame, i.e. at least the displacement change record and the orientation change record of the device within that frame. When the self-mobile device scans and generates the ith environmental frame, it can first obtain the pose change record of the device in the ith environmental frame, then superimpose the pose change record on the actual pose of the device corresponding to the i-1 th environmental frame to obtain the estimated motion pose of the device corresponding to the ith environmental frame, and finally adjust the actual pose of the device obtained in step 105 using this estimated motion pose. The specific adjustment may use weighting, i.e. different weights are given to the actual pose of the device and the estimated motion pose of the device, the weighted values are summed, and the pose value obtained by the summation is taken as the final actual pose of the device.
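A sketch of the weighted adjustment described at the end of this paragraph; the 0.7/0.3 weights are illustrative assumptions, since the description only states that different weights are assigned and the weighted values are summed.

```python
def fuse_poses(scan_pose, motion_pose, w_scan=0.7, w_motion=0.3):
    """Blend the scan-matched actual pose with the pose integrated from the
    on-board pose sensor, component by component.

    Note: the angle components would need wrap-around handling near ±180°
    in a real implementation.
    """
    return tuple(w_scan * a + w_motion * b
                 for a, b in zip(scan_pose, motion_pose))
```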
In another embodiment, errors may be identified through the minimum value of the sum of preset values corresponding to each frame, and the corresponding process may be as follows: if the sum of the preset values is larger than the specified early warning threshold value, stopping constructing the local environment map corresponding to the current frame, and adjusting the estimated pose range of the current device according to the pose change record of the device in the ith environment frame and the actual pose of the device corresponding to the i-1 th environment frame.
In implementation, after the self-mobile device selects the target pose value with the minimum sum of preset values from all the pose values, it can judge whether this minimum sum is larger than the early warning threshold value. If it is larger, this indicates a large deviation in the mapping between the feature points of the current frame and the discrete grids, so the relevant data of the current frame can be discarded and the construction of the local environment map corresponding to the current frame is stopped. Further, the self-mobile device can call up the pose change record of the device in the ith environmental frame and the actual pose of the device corresponding to the i-1 th environmental frame, estimate the possible pose of the device corresponding to the current frame on that basis, and then adjust the estimated pose range of the current device based on this possible pose. The self-mobile device can then recalculate the sum of preset values corresponding to each pose value in the adjusted estimated pose range.
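A sketch of this early-warning branch, assuming the pose change record is a per-frame six-component increment and reusing the estimated_pose_range helper sketched earlier; the names and return convention are assumptions:

```python
def check_and_recover(best_sum, threshold, prev_pose, pose_change_record):
    """If the best score still exceeds the warning threshold, skip this frame
    and rebuild the search range around the sensor-integrated pose estimate.

    pose_change_record: assumed (dx, dy, dz, drx, dry, drz) increment
    reported by the pose sensor for the current frame.
    """
    if best_sum <= threshold:
        return None                        # mapping proceeds normally
    est_pose = tuple(p + d for p, d in zip(prev_pose, pose_change_record))
    # Re-centre the estimated pose range on est_pose for a fresh search.
    return estimated_pose_range(est_pose)
```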
By adopting the laser radar-based environment mapping method disclosed by the application, the self-mobile device can continuously scan surrounding environment frames through the laser radar, and then deduce the actual pose of the device corresponding to each frame by utilizing the mapping condition of the characteristic points and the discrete grids in the environment frames, thereby determining the actual pose of each characteristic point in the current frame on the basis of the actual pose of the device so as to complete the construction of a local environment map. In this way, the laser radar is utilized to continuously acquire the environment frames, and continuously construct the local environment map corresponding to each environment frame, and finally, the local environment maps of the multiple frames are spliced and combined, so that the global environment map with higher accuracy and perfect scene details can be obtained in different scenes.
Based on the same technical concept, the embodiment of the application also provides an environment mapping device based on the laser radar, as shown in fig. 4, the device comprises:
The environment scanning module 401 is configured to acquire an ith environment frame by using a laser radar, and determine feature points in the ith environment frame;
the pose selecting module 402 is configured to select pose values one by one as pose to be detected of the device in a predicted pose range of the current device;
A grid mapping module 403, configured to determine a discrete grid corresponding to each feature point according to a preset grid mapping manner and the to-be-detected pose of the device;
The pose detection module 404 is configured to calculate a sum of preset values of discrete grids corresponding to all the feature points, and store the sum of preset values in correspondence with the pose to be detected of the device, where the preset values reflect probability values of the feature points not existing in each discrete grid;
The pose determining module 405 is configured to select a target pose value with a minimum sum of corresponding preset values from all pose values, and set the target pose value as a current actual pose of the device;
And the map construction module 406 is configured to construct a current local environment map according to the current actual pose of the device and the feature points in the ith environment frame.
Optionally, the pose selection module 402 is further configured to:
and determining the estimated pose range of the current equipment according to the equipment pose change rate, the scanning frequency of the laser radar and the actual pose of the equipment corresponding to the i-1 th environmental frame.
Optionally, the grid mapping module 403 is specifically configured to:
Determining the relative map pose of each characteristic point according to the equipment pose to be detected and the relative equipment pose of each characteristic point;
And determining the discrete grid corresponding to each feature point according to a preset grid mapping mode and the relative map pose.
Optionally, the grid mapping module 403 is further configured to:
Determining all target discrete grids corresponding to the feature points of the i-1 th environmental frame, and setting the preset value of all target discrete grids to be 0;
And setting the minimum Euclidean distance value of each other discrete grid to any target discrete grid as the preset value of the other discrete grids.
Optionally, the grid mapping module 403 is further configured to:
If the discrete grid a corresponding to the target feature point in the i-1 th environmental frame is different from the discrete grid b corresponding to the target feature point in the i-2 th environmental frame, setting the discrete grid corresponding to the target feature point as the discrete grid b.
Optionally, the pose determining module 405 is further configured to:
acquiring a pose change record of equipment in a current frame through a pose sensor preset on the equipment;
and adjusting the current actual pose of the equipment according to the pose change record and the actual pose of the equipment corresponding to the i-1 th environmental frame.
Optionally, the map construction module 406 is further configured to:
If the sum of the preset values is larger than a specified early warning threshold value, stopping constructing a local environment map corresponding to the current frame, and adjusting the estimated pose range of the current device according to the pose change record of the device in the ith environment frame and the actual pose of the device corresponding to the i-1 th environment frame.
The self-mobile device can continuously scan surrounding environment frames through the laser radar, and deduce the actual pose of the device corresponding to each frame by utilizing the mapping condition of the characteristic points and the discrete grids in the environment frames, so that the actual pose of each characteristic point in the current frame is determined on the basis of the actual pose of the device, and the construction of a local environment map is completed. In this way, the laser radar is utilized to continuously acquire the environment frames, and continuously construct the local environment map corresponding to each environment frame, and finally, the local environment maps of the multiple frames are spliced and combined, so that the global environment map with higher accuracy and perfect scene details can be obtained in different scenes.
The embodiment of the application also provides a self-mobile device, which comprises a processor and a memory, wherein at least one instruction, at least one section of program, a code set or an instruction set is stored in the memory, and the at least one instruction, the at least one section of program, the code set or the instruction set is loaded and executed by the processor to realize the processing from step 101 to step 106.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the application is not intended to limit the scope of the application in any way. Unless expressly stated otherwise, any feature disclosed in this specification (including the abstract and drawings) may be replaced by an alternative feature serving the same or an equivalent purpose; that is, unless expressly stated otherwise, each feature is only one example of a generic series of equivalent or similar features.

Claims (9)

1. A lidar-based environmental mapping method, the method comprising:
acquiring an ith environmental frame by using a laser radar, and determining characteristic points in the ith environmental frame;
In the estimated pose range of the current equipment, the pose values are selected one by one to serve as the pose to be detected of the equipment;
Determining a discrete grid corresponding to each characteristic point according to a preset grid mapping mode and the pose to be detected of the equipment;
Calculating the sum of preset values of all the discrete grids corresponding to the characteristic points, and storing the sum of the preset values and the pose to be detected of the equipment correspondingly, wherein the preset values reflect probability values of the characteristic points which do not exist in each discrete grid;
selecting a target pose value with the minimum sum of corresponding preset values from all pose values, and setting the target pose value as the current actual pose of the equipment;
Constructing a current local environment map according to the current actual pose of the equipment and the characteristic points in the ith environment frame;
Determining all target discrete grids corresponding to the feature points of the i-1 th environmental frame, and setting the preset value of all target discrete grids to be 0;
And setting the minimum Euclidean distance value of each other discrete grid to any target discrete grid as the preset value of the other discrete grids.
2. The method according to claim 1, wherein the method further comprises:
and determining the estimated pose range of the current equipment according to the equipment pose change rate, the scanning frequency of the laser radar and the actual pose of the equipment corresponding to the i-1 th environmental frame.
3. The method of claim 1, wherein the determining, according to a preset grid mapping manner and the pose to be detected of the device, the discrete grid corresponding to each feature point includes:
Determining the relative map pose of each characteristic point according to the equipment pose to be detected and the relative equipment pose of each characteristic point;
And determining the discrete grid corresponding to each feature point according to a preset grid mapping mode and the relative map pose.
4. The method according to claim 1, wherein the method further comprises:
If the discrete grid a corresponding to the target feature point in the i-1 th environmental frame is different from the discrete grid b corresponding to the target feature point in the i-2 th environmental frame, setting the discrete grid corresponding to the target feature point as the discrete grid b.
5. The method according to claim 1, wherein before constructing the current local environment map according to the current device actual pose and the feature point in the i-th environment frame, the method further comprises:
acquiring a pose change record of equipment in a current frame through a pose sensor preset on the equipment;
and adjusting the current actual pose of the equipment according to the pose change record and the actual pose of the equipment corresponding to the i-1 th environmental frame.
6. The method of claim 1, wherein prior to setting the target pose value to the current device actual pose, further comprising:
If the sum of the preset values is larger than a specified early warning threshold value, stopping constructing a local environment map corresponding to the current frame, and adjusting the estimated pose range of the current device according to the pose change record of the device in the ith environment frame and the actual pose of the device corresponding to the i-1 th environment frame.
7. An environmental mapping apparatus based on lidar, the apparatus comprising:
The environment scanning module is used for acquiring an ith environment frame by using the laser radar and determining characteristic points in the ith environment frame;
The pose selecting module is used for selecting pose values one by one as the pose to be detected of the equipment in the estimated pose range of the current equipment;
the grid mapping module is used for determining a discrete grid corresponding to each feature point according to a preset grid mapping mode and the equipment to-be-detected pose; also used for: determining all target discrete grids corresponding to the feature points of the i-1 th environmental frame, and setting the preset value of all target discrete grids to be 0; setting the minimum Euclidean distance value of each other discrete grid to any target discrete grid as the preset value of the other discrete grids;
the pose detection module is used for calculating the sum of preset values of the discrete grids corresponding to all the characteristic points and storing the sum of the preset values and the pose to be detected of the equipment correspondingly, wherein the preset values reflect probability values of the characteristic points which do not exist in each discrete grid;
the pose determining module is used for selecting a target pose value with the minimum sum of corresponding preset values from all pose values, and setting the target pose value as the current actual pose of the equipment;
and the map construction module is used for constructing a current local environment map according to the current actual pose of the equipment and the characteristic points in the ith environment frame.
8. A self-mobile device, characterized in that it comprises a processor and a memory, in which at least one instruction, at least one program, code set or instruction set is stored, said at least one instruction, said at least one program, said code set or instruction set being loaded and executed by said processor to implement the lidar-based environment mapping method according to any of claims 1 to 6.
9. A computer readable storage medium having stored therein at least one instruction, at least one program, code set, or instruction set, the at least one instruction, the at least one program, the code set, or instruction set being loaded and executed by a processor to implement the lidar-based environment mapping method of any of claims 1 to 6.
CN202011551838.1A 2020-12-24 2020-12-24 Laser radar-based environment mapping method and device Active CN112630745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011551838.1A CN112630745B (en) 2020-12-24 2020-12-24 Laser radar-based environment mapping method and device

Publications (2)

Publication Number Publication Date
CN112630745A CN112630745A (en) 2021-04-09
CN112630745B (en) 2024-08-09

Family

ID=75324523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011551838.1A Active CN112630745B (en) 2020-12-24 2020-12-24 Laser radar-based environment mapping method and device

Country Status (1)

Country Link
CN (1) CN112630745B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107677279A (en) * 2017-09-26 2018-02-09 上海思岚科技有限公司 It is a kind of to position the method and system for building figure
CN110849374A (en) * 2019-12-03 2020-02-28 中南大学 Underground environment positioning method, device, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019127445A1 (en) * 2017-12-29 2019-07-04 深圳前海达闼云端智能科技有限公司 Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
CN110462683B (en) * 2018-03-06 2022-04-12 斯坦德机器人(深圳)有限公司 Method, terminal and computer readable storage medium for tightly coupling visual SLAM
CN110276826A (en) * 2019-05-23 2019-09-24 全球能源互联网研究院有限公司 Method and system for constructing grid operation environment map
CN110866496B (en) * 2019-11-14 2023-04-07 合肥工业大学 Robot positioning and mapping method and device based on depth image

Also Published As

Publication number Publication date
CN112630745A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
US11747477B2 (en) Data collecting method and system
CN106969768B (en) Accurate positioning and parking method for trackless navigation AGV
CN105094130B (en) The AGV transfer robots air navigation aid and device of laser guidance map structuring
US10278333B2 (en) Pruning robot system
JP5018458B2 (en) Coordinate correction method, coordinate correction program, and autonomous mobile robot
JP6649743B2 (en) Matching evaluation device and matching evaluation method
Costante et al. Exploiting photometric information for planning under uncertainty
CN108388244A (en) Mobile-robot system, parking scheme based on artificial landmark and storage medium
KR20230137395A (en) 3D map construction method and device
KR102075844B1 (en) Localization system merging results of multi-modal sensor based positioning and method thereof
US20240393793A1 (en) Method for estimating posture of moving object by using big cell grid map, recording medium in which program for implementing same is stored, and computer program stored in medium in order to implement same
CN112506200A (en) Robot positioning method, device, robot and storage medium
JP7679208B2 (en) Information processing device, information processing method, and program
KR102481615B1 (en) Method and system for collecting data
CN116576868A (en) Multi-sensor fusion accurate positioning and autonomous navigation method
KR102506411B1 (en) Method and apparatus for estimation of location and pose on vehicle and record medium for this
KR20200043329A (en) Method and system for collecting data
KR102624644B1 (en) Method of estimating the location of a moving object using vector map
CN116086447B (en) A fusion navigation method and device for unmanned vehicle
CN112630745B (en) Laser radar-based environment mapping method and device
CN118999577A (en) Pose estimation method, pose estimation device, robot and storage medium
JP2016177742A (en) Mobile control device, landmark, and program
CN117419733A (en) Method for identifying a map of the surroundings with errors
CN115019167A (en) Fusion positioning method, system, equipment and storage medium based on mobile terminal
CN119690089B (en) Adaptive obstacle avoidance control system and application method thereof

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant