
CN113424232A - Three-dimensional point cloud map construction method, system and equipment - Google Patents

Three-dimensional point cloud map construction method, system and equipment

Info

Publication number: CN113424232A
Application number: CN201980091861.6A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN113424232B (en)
Legal status: Granted; active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: 钟阳, 周游, 朱振宇
Current assignee: Shenzhen Zhuoyu Technology Co., Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee: SZ DJI Technology Co., Ltd
Application filed by SZ DJI Technology Co., Ltd; publication of CN113424232A; application granted; publication of CN113424232B
Prior art keywords: point cloud, data, pose data, dimensional point, initial point

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method, a system and a device for constructing a three-dimensional point cloud map are provided. The method includes: acquiring multiple frames of initial point cloud data collected by a laser radar system carried on a movable platform, and acquiring pose data of the laser radar system at the time the multiple frames of initial point cloud data are collected (301); determining at least one loop constraint condition, wherein a loop constraint condition is determined from pose data of two frames whose spatial poses differ within a first preset range but which are discontinuous in time sequence (302); processing the pose data according to the at least one loop constraint condition to obtain optimized pose data for each frame of initial point cloud data (303); and constructing a three-dimensional point cloud map from the multiple frames of initial point cloud data according to the optimized pose data (304).

Description

Three-dimensional point cloud map construction method, system and equipment

Technical Field
The disclosure relates to the technical field of high-precision maps, in particular to a method, a system and equipment for constructing a three-dimensional point cloud map.
Background
A high-precision electronic map, also called a High Definition Map (HD Map), is a map built specifically to serve unmanned driving. Unlike a conventional navigation map, a high-precision map can provide navigation information at the lane level in addition to navigation information at the road level, and it far exceeds the traditional navigation map in both the richness and the precision of its information. Map construction is an upstream step of the high-precision map module, and its quality directly affects the precision of downstream high-precision map labeling and high-precision map positioning; existing map construction approaches, however, offer only limited precision. It is therefore necessary to optimize the mapping process.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a method, a system, and a device for constructing a three-dimensional point cloud map, so as to solve the technical problem of low accuracy in the related art.
According to a first aspect of the embodiments of the present disclosure, a method for constructing a three-dimensional point cloud map is provided. The method includes: acquiring multiple frames of initial point cloud data collected by a laser radar system carried on a movable platform, and acquiring pose data of the laser radar system at the time the multiple frames of initial point cloud data are collected; determining at least one loop constraint condition, wherein a loop constraint condition is determined from pose data of two frames whose spatial poses differ within a first preset range but which are discontinuous in time sequence; processing the pose data according to the at least one loop constraint condition to obtain optimized pose data for each frame of initial point cloud data; and constructing a three-dimensional point cloud map from the multiple frames of initial point cloud data according to the optimized pose data.
According to a second aspect of the embodiments of the present disclosure, a three-dimensional point cloud map building system is provided, which includes a processing device and a laser radar system mounted on a movable platform. The laser radar system is used for collecting multiple frames of initial point cloud data. The processing device is configured to determine at least one loop constraint condition; process the pose data recorded when the laser radar system collects the multiple frames of initial point cloud data according to the at least one loop constraint condition, so as to obtain optimized pose data for each frame of initial point cloud data; and construct a three-dimensional point cloud map from the multiple frames of initial point cloud data according to the optimized pose data. The loop constraint condition is determined from pose data of two frames whose spatial poses differ within a first preset range but which are discontinuous in time sequence.
According to a third aspect of the embodiments of the present disclosure, a three-dimensional point cloud mapping device is provided. The device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the method according to any one of the embodiments of the present disclosure when executing the computer program.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which computer instructions are stored, and when executed, the computer instructions implement the steps of the method according to any one of the embodiments.
By applying the solution of the embodiments of the present specification, multiple frames of initial point cloud data are obtained together with the pose data recorded while that data was collected; loop constraint conditions are established to optimize the pose data, yielding optimized pose data for each frame of initial point cloud data; and a three-dimensional point cloud map is then constructed from the multiple frames of initial point cloud data according to the optimized pose data. In this way, centimeter-level three-dimensional reconstruction of a scene can be performed, an accurate three-dimensional point cloud map is obtained, and map construction accuracy is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1A is a schematic diagram of a conventional map of some embodiments.
FIG. 1B is a schematic diagram of a high-precision map of some embodiments.
FIG. 2A is a schematic illustration of a navigation line of a conventional map of some embodiments.
FIG. 2B is a schematic diagram of a navigation line of a high precision map of some embodiments.
Fig. 3 is a flow chart of a three-dimensional point cloud map construction method according to the present disclosure.
Fig. 4 is a schematic diagram illustrating adjustment logic for a loop back constraint according to an embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of a three-dimensional point cloud mapping system according to another embodiment of the present disclosure.
Fig. 6 is a block diagram illustrating a three-dimensional point cloud mapping apparatus according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present specification. The word "if" as used herein may be interpreted as "when", "upon" or "in response to determining", depending on the context.
Fig. 1A and 1B are schematic diagrams of a conventional map and a high-precision map, respectively, of some embodiments. A high-precision electronic map, also called a High Definition Map (HD Map), is a map built specifically to serve unmanned vehicles. Unlike the conventional navigation map, the high-precision map can provide navigation information at the Lane level in addition to navigation information at the Road level. Fig. 2A and 2B are schematic diagrams of navigation lines of a conventional map and a high-precision map, respectively, of some embodiments. As shown in Fig. 2A, the navigation information of the conventional map contains only information about navigation roads (R1 to R4 in the drawing) and road nodes (N1 and N2 in the drawing). As shown in Fig. 2B, the navigation information of the high-precision map includes not only information about navigation roads but also lane information on each road (Lane1 to Lane10 in the figure). It can be seen that the high-precision map far exceeds the traditional navigation map in both the richness and the precision of its information. The high-precision map can provide accurate data that the traditional map cannot, and thus provides more accurate navigation guidance for an unmanned vehicle.
Map construction is an upstream step of the high-precision map module, and its quality directly affects the precision of downstream high-precision map labeling and high-precision map positioning. One common mapping method is as follows: a laser radar is mounted on a vehicle, a worker drives the vehicle through the area where the high-precision map needs to be built, and the laser radar on the vehicle scans the surrounding environment during the drive; after the drive is finished, the point cloud data scanned by the laser radar is collected and processed to obtain the constructed map. When this method is used to construct a high-precision map, the point cloud data acquired by the laser radar must later be fused and superimposed to form a high-precision map with sufficient point cloud density, so the processing must rely on the pose data provided by the navigation system on the vehicle. However, the accuracy and stability of the navigation system depend on the positioning signal: high accuracy is usually achieved in open terrain, while in complex scenes, such as an urban environment crowded with high-rise buildings or a tunnel where the signal is blocked, the positioning accuracy is low, so the pose data provided by the navigation system sometimes cannot be used directly to construct a high-precision map.
Meanwhile, each frame of map data contains as many as tens of thousands of three-dimensional points, and dozens of frames are scanned every second, so the amount of data is huge. If the map data is processed directly, the processing takes a long time: for a road section of only a few kilometers, constructing a high-precision map can take about one week.
In summary, constructing a high-precision map currently suffers from low map construction accuracy and low map construction efficiency. Based on this, the embodiments of the present disclosure provide a method, a system and a device for constructing a three-dimensional point cloud map. In the disclosed embodiments, the movable platform may be a vehicle, a drone, a mobile robot or another device with the ability to move. The target area is the area in which a three-dimensional point cloud map needs to be constructed; it may be a road, a residential area or an indoor space, and the initial point cloud data can be collected in the target area by a laser radar system carried on the movable platform. The loop closure constraint condition refers to a pose constraint between point cloud data collected by the laser radar system at the same place at different moments. The following description takes the case where the movable platform is a vehicle as an example.
As shown in fig. 3, the method for constructing a three-dimensional point cloud map according to the embodiment of the present disclosure may include:
step 301: acquiring multi-frame initial point cloud data acquired by a laser radar system carried on a movable platform, and acquiring pose data when the laser radar system acquires the multi-frame initial point cloud data;
step 302: determining at least one loop constraint condition, wherein a loop constraint condition is determined from pose data of two frames whose spatial poses differ within a first preset range but which are discontinuous in time sequence;
step 303: processing the pose data of multiple frames according to the at least one loop constraint condition to obtain optimized pose data of the initial point cloud data of each frame;
step 304: and constructing a three-dimensional point cloud map based on the initial point cloud data of multiple frames according to the optimized pose data.
In step 301, multiple frames of initial point cloud data of a target area first need to be acquired. The vehicle can be driven into the target area, and a laser radar system mounted on the vehicle collects the multiple frames of initial point cloud data in the target area. For the same target area, the operation of collecting initial point cloud data can be performed two or more times; that is, the vehicle repeatedly drives through the target area, and each time it does so, the laser radar system carried on the vehicle performs one round of initial point cloud data collection.
Further, the initial point cloud data may be acquired during a different time period each time. For example, the initial point cloud data in target area A is collected at 8:00-9:00 for the first acquisition, at 13:00-14:00 for the second acquisition, at 17:00-18:00 for the third acquisition, and so on. Because moving objects (such as vehicles or pedestrians) may exist in the target area, the target area can be covered comprehensively only through multiple acquisitions. In addition, some target areas have a minimum speed limit, so the vehicle cannot drive too slowly, and the amount of point cloud data acquired in a single pass may not be enough to construct a three-dimensional point cloud map; multiple acquisitions are needed to capture the details and to ensure that the number of points per unit area is sufficient. Collecting the initial point cloud data during different time periods also increases the point cloud density and reduces interference from moving objects, from different lighting conditions, from occlusion, and so on.
The pose data of the laser radar system during the collection of the multiple frames of initial point cloud data can be determined from the pose data of the vehicle carrying the laser radar system. In some embodiments, a navigation system is also mounted on the vehicle, and the pose data is acquired based on the navigation data of the navigation system. The navigation system may be a GNSS (Global Navigation Satellite System), an Inertial Navigation System (INS), a Visual Inertial Navigation System (VINS), a Real-Time Kinematic (RTK) carrier-phase differential positioning system, or the like, where the GNSS may include the GPS (Global Positioning System), the Galileo satellite positioning system, the BeiDou Navigation Satellite System (BDS) or the GLONASS navigation system. The pose data may include position data, i.e., the three-dimensional coordinate position of the vehicle (longitude and latitude coordinates may be used), and attitude data, i.e., the roll angle, course angle and pitch angle of the vehicle.
In some embodiments, the navigation system comprises a first navigation system and a second navigation system; the pose data is acquired based on first navigation data output by the first navigation system and second navigation data output by the second navigation system. The pose data are obtained through the combined navigation system comprising the first navigation system and the second navigation system, the positioning accuracy can be improved, and the accuracy of the constructed three-dimensional point cloud map is higher. The first navigation system and the second navigation system can be any type of navigation system, and the types of the first navigation system and the second navigation system can be the same or different. For example, the first navigation system may be an RTK system, the second navigation system may be an inertial navigation system, and the pose data is generated based on the RTK system data and the inertial navigation system data in a fusion manner.
In some embodiments, different weights may be set for the first navigation system and the second navigation system, and the pose data may be derived from a weighted average of the first navigation data and the second navigation data. The weights may be set dynamically according to the signal strength of the positioning signals of the first navigation system and the second navigation system: when the signal strengths of the two positioning signals differ greatly, a large weight is set for the navigation system with the stronger signal and a small weight for the navigation system with the weaker signal; when the signal strengths of the two positioning signals are close, the same or similar weights are set for the two navigation systems.
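As an illustration only, the weighted fusion described above might be sketched as follows in Python; the linear blend, the pose field layout and the fallback behaviour are assumptions of the sketch and are not specified by this disclosure:

```python
import numpy as np

def fuse_nav_poses(pose_a, pose_b, strength_a, strength_b):
    """Blend two pose estimates with weights derived from signal strength.

    pose_a, pose_b: arrays [x, y, z, roll, pitch, yaw] from the first and
    second navigation systems (the field layout is assumed for illustration).
    strength_a, strength_b: scalar signal-strength indicators, larger is better.
    """
    pose_a = np.asarray(pose_a, dtype=float)
    pose_b = np.asarray(pose_b, dtype=float)
    total = strength_a + strength_b
    if total <= 0:
        # No usable positioning signal: fall back to an even blend.
        w_a = w_b = 0.5
    else:
        w_a = strength_a / total
        w_b = strength_b / total
    # Weighted average of the two pose estimates. Note: a production system
    # would blend the orientation on the rotation manifold; the linear average
    # of Euler angles here is only a simplification for illustration.
    return w_a * pose_a + w_b * pose_b
```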
Because the initial point cloud data and the pose data are data acquired by two different systems, after the initial point cloud data and the pose data are acquired, the pose data and the initial point cloud data need to be aligned, and the one-to-one correspondence and matching of each frame of initial point cloud data and the pose data during the acquisition of the frame of initial point cloud data are ensured. Specifically, the pose data and the initial point cloud data may be aligned based on a global timestamp of the navigation system and a global timestamp of the lidar system such that each frame of initial point cloud data corresponds to one frame of pose data. In some embodiments of the present invention, the frequency of the lidar system may be tens of hertz, e.g., 5-20 Hz, i.e., 5 to 20 frames of initial point cloud data are collected per second; the frequency of the navigation system may be higher than that of the lidar system, for example, several hundred hertz when using an inertial navigation system and/or an RTK system. Therefore, a frame of pose data can be obtained by temporally aligning each frame of initial point cloud data based on the global time stamp. In other embodiments of the present invention, the frequency of the lidar system may also be higher than that of the navigation system, and at this time, because the pose data is less than the initial point cloud data, the initial point cloud data of a specific frame may be selectively removed, or the frame number of the pose data may be increased by using an interpolation method to achieve temporal alignment of the initial point cloud data and the pose data.
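A minimal sketch of the timestamp-based alignment follows, assuming the navigation rate is at least as high as the lidar rate so that a nearest-timestamp lookup suffices; the field layout and the nearest-neighbour choice are assumptions of the sketch, not requirements of the disclosure:

```python
import numpy as np

def align_poses_to_frames(frame_stamps, pose_stamps, poses):
    """Pick, for each point cloud frame, the pose whose global timestamp is
    closest. Assumes the navigation rate (e.g. a few hundred Hz) is at least
    as high as the lidar rate (e.g. 5-20 Hz); interpolation would be used
    otherwise.

    frame_stamps: (F,) lidar frame timestamps on the global clock
    pose_stamps:  (P,) navigation timestamps on the same global clock (sorted)
    poses:        (P, 6) pose rows, e.g. [x, y, z, roll, pitch, yaw]
    Returns an (F, 6) array with one frame of pose data per point cloud frame.
    """
    frame_stamps = np.asarray(frame_stamps, dtype=float)
    pose_stamps = np.asarray(pose_stamps, dtype=float)
    idx = np.clip(np.searchsorted(pose_stamps, frame_stamps), 1, len(pose_stamps) - 1)
    # Pick whichever neighbouring pose timestamp is closer to the frame timestamp.
    left_closer = (frame_stamps - pose_stamps[idx - 1]) < (pose_stamps[idx] - frame_stamps)
    idx = np.where(left_closer, idx - 1, idx)
    return np.asarray(poses, dtype=float)[idx]
```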
The initial point cloud data collected by the laser radar system may contain moving objects such as vehicles and pedestrians, and the constructed three-dimensional point cloud map generally does not retain moving objects. The point cloud data corresponding to moving objects is therefore filtered out of the initial point cloud data, and the remaining point cloud data is used to build the three-dimensional point cloud map. For example, the initial point cloud data collected on a road may include data corresponding to moving objects such as pedestrians and vehicles, as well as data corresponding to buildings on both sides of the road, traffic signs around the road and road guardrails; the data corresponding to pedestrians, vehicles and other moving objects needs to be filtered out, and only the data corresponding to buildings, traffic signs and road guardrails is retained.
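A sketch of this filtering step is given below, assuming per-point semantic labels are available from an upstream detector; the label source and the particular set of moving classes are assumptions of the example rather than part of the disclosure:

```python
import numpy as np

# Classes treated as moving objects (an illustrative choice, not from the disclosure).
MOVING_CLASSES = {"pedestrian", "car", "truck", "bicycle"}

def filter_moving_objects(points, labels):
    """Keep only points whose semantic label is not a moving-object class.

    points: (N, 3) array of x, y, z coordinates of one frame of initial point cloud data
    labels: length-N sequence of class names per point (assumed to come from
            an upstream detection / segmentation step)
    """
    keep = np.array([lab not in MOVING_CLASSES for lab in labels])
    return np.asarray(points)[keep]
```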
In step 302, the multiple frames of initial point cloud data may include initial point cloud data collected by the laser radar system at least twice as the movable platform passes through the same location. Two frames of initial point cloud data acquired when the movable platform passes through the same place twice can be used to establish a loop constraint condition; when the vehicle passes the same location three or more times, multiple loop constraint conditions can be established. Specifically, when the vehicle carries the laser radar system to acquire map data, it can pass through the same place two or more times, which increases the point cloud density of the high-precision map constructed at that place and at the same time provides more loop constraint conditions.
In addition to establishing a loop constraint condition from two frames of initial point cloud data acquired when the movable platform passes through the same place twice, a loop constraint condition can also be determined from pose data of two frames whose spatial poses differ within a second preset range but whose orientations are opposite. Such pose data generally corresponds to the vehicle passing the same place on a forward/reverse bidirectional road, for example the pose data recorded when the vehicle drives from south to north on bidirectional road 1 and passes a certain position, and the pose data recorded when the vehicle drives from north to south on bidirectional road 1 and passes that position again. In some scenes, such as expressways, the road is divided in the middle by a green belt or other objects, so it cannot be determined directly whether a bidirectional road exists; whether the current location where initial point cloud data is acquired and a previous acquisition location lie on the two directions of the same road can therefore be judged from the position and orientation of the initial point cloud data. The pose data whose spatial poses differ within the second preset range and whose orientations are opposite can also be determined through target objects that are visible from both directions of the road, such as signboards, surrounding buildings and tunnel portals.
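One possible way to detect such forward/reverse pairs is sketched below; the distance and heading tolerances are illustrative assumptions rather than the first or second preset ranges of the disclosure:

```python
import numpy as np

def is_opposite_direction_pair(pose_i, pose_j,
                               max_dist=10.0, heading_tol_deg=30.0):
    """Return True if two poses look like the same place traversed on a
    forward/reverse bidirectional road: positions within `max_dist` metres
    and headings roughly 180 degrees apart. Thresholds are illustrative.

    pose: (x, y, yaw_deg) in a common map frame.
    """
    dx = pose_i[0] - pose_j[0]
    dy = pose_i[1] - pose_j[1]
    if np.hypot(dx, dy) > max_dist:
        return False
    # Smallest absolute heading difference, folded into [0, 180] degrees.
    dyaw = abs((pose_i[2] - pose_j[2] + 180.0) % 360.0 - 180.0)
    return abs(dyaw - 180.0) <= heading_tol_deg
```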
And a loop constraint condition is established, so that accumulated errors in the pose calculation process can be reduced, and the optimization accuracy of pose data is improved, thereby improving the accuracy of the three-dimensional point cloud map.
In some embodiments, the number of loop back constraints may be dynamically adjusted. For example, the number of loop constraints may be adjusted according to the confidence of the pose data; for another example, the number of looping constraints may be adjusted according to three-dimensional size information of at least one target object in the three-dimensional point cloud map.
In an embodiment in which the number of loop constraint conditions is adjusted according to the confidence of the pose data, the confidence of the pose data characterizes how trustworthy the acquired pose data is. The confidence of the pose data is related to the signal strength of the positioning signal used to position the vehicle and to the accuracy of the three-dimensional point cloud map. Therefore, the confidence of the pose data can be determined from either the strength of the positioning signal used to position the vehicle or the accuracy of the three-dimensional point cloud map.
On the one hand, if the signal strength of the positioning signal is strong, the confidence of the pose data can be considered high; conversely, if the signal strength of the positioning signal is weak, the confidence of the pose data can be considered low. For example, in a relatively open area, the GPS signal is often strong, the positioning accuracy is relatively high, and the acquired pose data is correspondingly trustworthy. In urban areas crowded with tall buildings, or in areas such as viaducts and tunnels, the GPS signal is blocked by structures and is often weak, so the positioning error is large and the confidence of the acquired pose data is low.
On the other hand, if the acquired pose data has high accuracy, the accuracy of the three-dimensional point cloud map established according to the pose data is also high, and therefore, the confidence of the pose data can be determined according to the accuracy of the three-dimensional point cloud map. The accuracy of the three-dimensional point cloud map can be determined by the deviation between the three-dimensional coordinates of a point in the three-dimensional point cloud map and the actual coordinates of the point. The smaller the deviation, the higher the accuracy; conversely, the greater the deviation, the lower the accuracy. If the accuracy of the three-dimensional point cloud map is higher, the confidence coefficient of the pose data can be considered to be higher; conversely, if the accuracy of the three-dimensional point cloud map is low, the confidence of the pose data can be considered to be low.
The accuracy of the three-dimensional point cloud map may be determined as follows: sample the three-dimensional point cloud map within a certain range at intervals along the moving track of the movable platform; calculate the height difference of the ground points in the sampled three-dimensional point cloud map; and determine the accuracy of the three-dimensional point cloud map from that height difference. Further, if the height difference is larger than a preset height threshold, the accuracy of the three-dimensional point cloud map is judged to be lower than a preset accuracy threshold; if the height difference is smaller than or equal to the height threshold, the accuracy of the three-dimensional point cloud map is judged to be not lower than the preset accuracy threshold.
In a practical application scenario, points of the three-dimensional point cloud map within a 1 meter by 1 meter square can be sampled every 200 meters along the moving track of the vehicle, and the height difference d of the ground points within the square is calculated. Such a square can be regarded as lying on one plane; in theory every ground point within it should lie on the same horizontal plane and the height difference should be 0. If there are ground points whose height difference d is larger than a certain threshold, the three-dimensional point cloud map constructed within that square is considered to be of low precision, and the number of loop matches needs to be increased for the pose data corresponding to the three-dimensional point cloud data within that square (that is, the number of loop constraint conditions is increased).
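The sampling-based accuracy check might be sketched as follows; the 200 m interval and the 1 m square follow the example above, while the height threshold and the cumulative-distance sampling details are assumptions of the sketch:

```python
import numpy as np

def flag_low_accuracy_patches(trajectory_xy, ground_points,
                              step=200.0, half_size=0.5, height_thresh=0.05):
    """Sample 1 m x 1 m patches of ground points every `step` metres along the
    trajectory and flag patches whose height spread exceeds `height_thresh`
    (threshold value assumed; the disclosure only states a preset threshold).

    trajectory_xy: (T, 2) vehicle positions along the moving track
    ground_points: (N, 3) ground points of the constructed map (x, y, z)
    Returns the list of patch centres where the map accuracy is judged too low.
    """
    traj = np.asarray(trajectory_xy, dtype=float)
    pts = np.asarray(ground_points, dtype=float)
    # Cumulative distance along the trajectory, used to pick sample centres.
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    flagged = []
    for target in np.arange(0.0, cum[-1], step):
        centre = traj[np.searchsorted(cum, target)]
        in_patch = (np.abs(pts[:, 0] - centre[0]) <= half_size) & \
                   (np.abs(pts[:, 1] - centre[1]) <= half_size)
        if in_patch.sum() < 2:
            continue
        height_diff = pts[in_patch, 2].max() - pts[in_patch, 2].min()
        if height_diff > height_thresh:
            flagged.append(centre)  # more loop constraints needed around here
    return flagged
```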
After the confidence of the pose data is determined, the number of loop constraint conditions may be adjusted according to the confidence. If the confidence is less than a preset confidence threshold, the number of loop constraint conditions may be increased, for example from 3 to 5. If the confidence is greater than or equal to the preset confidence threshold, the number of loop constraint conditions may be reduced; for example, the initial point cloud data may have been acquired five times in the same target area, but only the data from three of those acquisitions is used to establish loop constraints.
In an embodiment in which the number of loop constraint conditions is adjusted according to the three-dimensional size information of at least one target object in the three-dimensional point cloud map, the target object may be a landmark object in the target area, such as a traffic sign, a traffic light or a light pole, whose reference three-dimensional size information (length, width, height) is known from national standards. The difference between the three-dimensional size information and the reference three-dimensional size information of the target object can be calculated, and the number of loop constraint conditions is adjusted according to that difference. If the difference between the three-dimensional size of a target object in the constructed three-dimensional point cloud map and its reference size is larger than a preset size threshold, the constructed three-dimensional point cloud map is considered to have low precision, and the number of loop constraint conditions needs to be increased. For example, according to national standard requirements, one standard size for triangular traffic signs is 70 centimeters on a side; if the size of a triangular traffic sign in the three-dimensional point cloud map differs greatly from this standard size, the number of loop constraint conditions needs to be increased.
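A minimal sketch of the size comparison is given below; the absolute size threshold is an assumed value, since the disclosure only states that a preset size threshold is used:

```python
def needs_more_loop_constraints(measured_size, reference_size, size_thresh=0.05):
    """measured_size / reference_size: (length, width, height) of a target
    object in metres. Returns True when any dimension deviates from the
    reference by more than `size_thresh` metres (threshold value assumed),
    which suggests that the local map precision is low and the number of
    loop constraint conditions should be increased."""
    return any(abs(m - r) > size_thresh
               for m, r in zip(measured_size, reference_size))
```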
Fig. 4 is a schematic diagram illustrating the adjustment logic for loop constraint conditions according to an embodiment of the present disclosure. In summary, the strategy for adjusting the number of loop constraint conditions in the embodiments of the present disclosure is to spend more computing resources and more optimization iterations on areas with poor accuracy, while also performing bidirectional-road detection, thereby reducing the accumulated error.
In step 303, the pose data may be processed according to the at least one loopback constraint condition to obtain optimized pose data of each frame of the initial point cloud data.
In some embodiments, graph-based optimization may be employed to solve for the optimized pose data. A graph is composed of nodes and edges; in graph-based Simultaneous Localization And Mapping (graph-based SLAM), each pose is a node (or vertex), and the relationships between poses constitute the edges. For example, the geometric relationship between time t+1 and time t constitutes an edge, or a pose transformation matrix calculated from vision may also constitute an edge. Once the graph has been constructed, the poses need to be adjusted so as to satisfy the constraints formed by the edges as far as possible.
Therefore, processing the pose data according to the at least one loop constraint condition to obtain the optimized pose data of each frame of initial point cloud data includes: taking the pose data as the vertices of the graph and the loop constraint conditions as the edges of the graph, and solving an optimization objective function of the graph (for example, minimizing the sum of squared residuals) to acquire the optimized pose data. Further, the loop constraint conditions and the relative pose relationships between pose data of adjacent frames can together serve as the edges of the graph when solving the graph optimization.
The specific steps of graph optimization are as follows:
First, the positions (rough pose data) acquired by the navigation system are taken as the vertices of the graph.
Then, all the three-dimensional point cloud data is traversed in time order and inter-frame registration is performed on temporally adjacent frames. The inter-frame registration may use NSS (normal space sampling) followed by ICP (Iterative Closest Point) to obtain the relative pose relationship between the pose data of the preceding and following frames, i.e. the inter-frame constraint, which serves as an edge of the graph.
Suppose that a previous frame of initial point cloud data $P = \{p_1, p_2, \ldots, p_n\}$ is collected at a first pose, and a next frame of initial point cloud data $Q = \{q_1, q_2, \ldots, q_n\}$ is collected at a second pose; the coordinates of $P$ and $Q$ are expressed in the coordinate system of the first pose and the coordinate system of the second pose, respectively. Suppose the $i$-th point in $P$ and the $i$-th point in $Q$ correspond to the same point in three-dimensional space, e.g. $p_{99}$ and $q_{99}$ correspond to the same point in three-dimensional space. In the absence of error, the formula for converting from the $P$ coordinate system to the $Q$ coordinate system is:

$$q_i = R\,p_i + t \tag{1}$$

where $R$ and $t$ are respectively the rotation matrix and the translation vector from the first pose to the second pose. Due to noise and mismatches ($p_i$ and $q_i$ do not actually correspond to the same point in space, but the feature matching algorithm erroneously considers them to be the same point), equation (1) does not always hold, and therefore the objective function to be minimized is:

$$\min_{R,\,t}\ \sum_{i=1}^{n} \bigl\| q_i - (R\,p_i + t) \bigr\|^2 \tag{2}$$

where $n$ is the number of points in each frame of point cloud data. The parameters $R$ and $t$, which represent the relative pose relationship between the pose data of the two frames, are obtained by minimizing equation (2) under the model of equation (1).
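With known correspondences, equation (2) admits a well-known closed-form solution via the singular value decomposition (the Kabsch/Umeyama procedure). The sketch below illustrates that solution; it assumes the correspondences have already been established (for example by the sampling and matching steps above) and is not the specific implementation of this disclosure:

```python
import numpy as np

def solve_rigid_transform(P, Q):
    """Minimise equation (2): sum_i || q_i - (R p_i + t) ||^2 over R and t,
    assuming p_i and q_i are corresponding points given as (N, 3) arrays."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

In an ICP loop this solution is re-computed after each re-association of nearest points until the alignment converges.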
The initial point cloud data is then traversed again; for frames of initial point cloud data whose spatial positions are adjacent but which are discontinuous in time sequence (i.e., the same location passed for the second or third time), the relative pose relationship between their pose data is estimated with ICP and added to the graph optimization as a loop constraint (also an edge of the graph).
Finally, the optimization objective function of the graph is solved using the graph vertices and graph edges established in the above steps, and the optimized pose data is acquired.
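For illustration, a heavily simplified pose-graph optimization is sketched below: poses are reduced to planar (x, y, yaw), inter-frame and loop edges are weighted equally, and a generic nonlinear least-squares solver is used. All of these are simplifying assumptions for the sketch, not the formulation of this disclosure:

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

def relative_pose(xi, xj):
    """Pose of node j expressed in the frame of node i (planar SE(2))."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(xj[2] - xi[2])])

def optimize_pose_graph(initial_poses, edges):
    """initial_poses: (N, 3) rough poses [x, y, yaw] from the navigation system.
    edges: iterable of (i, j, measured_relative_pose) covering both the
    inter-frame constraints and the loop constraints described above.
    Returns the optimised (N, 3) poses."""
    initial_poses = np.asarray(initial_poses, dtype=float)
    n = len(initial_poses)

    def residuals(flat):
        poses = flat.reshape(n, 3)
        res = [poses[0] - initial_poses[0]]          # anchor the first pose
        for i, j, meas in edges:
            err = relative_pose(poses[i], poses[j]) - np.asarray(meas, dtype=float)
            err[2] = wrap(err[2])
            res.append(err)
        return np.concatenate(res)

    sol = least_squares(residuals, initial_poses.ravel())
    return sol.x.reshape(n, 3)
```

In a full implementation each edge would carry an information matrix, so that loop edges and inter-frame edges can be weighted differently, and the poses would be full six-degree-of-freedom transforms; those details are omitted here.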
If it is determined in step 302, based on any of the above criteria, that the number of loop constraint conditions needs to be increased, a new loop constraint condition is added when the optimized pose data is solved, and step 303 is executed again with the added loop constraint conditions to solve for the optimized pose data, thereby obtaining optimized pose data of higher accuracy. These steps are repeated until the accuracy of the optimized pose data reaches the required standard.
In general, better performance can be obtained through loop detection. However, in areas with few map features, such as highway sections, the error of loop detection may be large, which makes the error of the constructed three-dimensional point cloud map large. To address this problem, several consecutive frames of initial point cloud data may be processed as one frame of initial point cloud data; that is, the initial point cloud data includes multiple consecutive frames of point cloud data that have been registered. Because the pose of each frame of initial point cloud data may be different, the frames of initial point cloud data need to be registered first, and the registration can be performed according to the optimized pose data.
Initial point cloud data within a certain distance range (such as 100 m) can be accumulated according to the optimized pose data, and loop registration is performed on the accumulated point set, which improves the accuracy of the loop registration. Specifically, all the three-dimensional point cloud data can be accumulated in time order at a certain distance interval using the optimized pose data, the accumulated three-dimensional point cloud data is used as the registration input, and the graph optimization is performed in the manner described above, further improving the accuracy of the pose data.
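A sketch of the distance-windowed accumulation follows, assuming each frame comes with an optimized pose expressed as a rotation matrix and translation vector; the 100 m window follows the example above and the grouping logic is an assumption of the sketch:

```python
import numpy as np

def accumulate_frames(frames, poses_Rt, window=100.0):
    """Group consecutive frames whose travelled distance stays within `window`
    metres and merge each group into one point set in the map frame.

    frames:   list of (Ni, 3) point arrays, each in its own sensor frame
    poses_Rt: list of (R, t) optimised poses (3x3 rotation, 3-vector translation)
    Returns a list of merged (M, 3) arrays used as loop-registration input.
    """
    groups, current, start_t = [], [], None
    for pts, (R, t) in zip(frames, poses_Rt):
        R, t = np.asarray(R, dtype=float), np.asarray(t, dtype=float)
        if start_t is None:
            start_t = t
        # Transform the frame into the map frame with its optimised pose.
        current.append(np.asarray(pts, dtype=float) @ R.T + t)
        if np.linalg.norm(t - start_t) > window:
            groups.append(np.vstack(current))
            current, start_t = [], None
    if current:
        groups.append(np.vstack(current))
    return groups
```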
The optimized pose data can be obtained by searching pose data within a certain range, and the search range of the optimized pose data is set according to the signal strength of the positioning signal used to position the movable platform and/or the confidence of the optimized pose data. In the process of solving the graph optimization, the search range of the optimized pose data is adjusted dynamically: when the signal strength of the positioning signal is poor, the search range of the optimized pose data is expanded, and otherwise it can be reduced; likewise, when the confidence is low, the search range of the optimized pose data is expanded, and otherwise it can be reduced. In actual operation, the pose data before optimization can be taken as the center, and the optimized pose data is searched for in its neighborhood: when the signal strength of the positioning signal is poor, the neighborhood radius is enlarged, and when the signal strength of the positioning signal is strong, the neighborhood radius is reduced.
In step 304, after the three-dimensional point cloud map is constructed, the optimized pose data can be checked, and the three-dimensional point cloud map is constructed from the multiple frames of initial point cloud data according to the optimized pose data that passes the check. The trajectory of the vehicle collecting the initial point cloud data should be smooth, so the optimized pose data can be checked according to its smoothness.
Specifically, the average of the optimized pose data of one or more frames preceding and one or more frames following a given frame of optimized pose data can be calculated; if the difference between the optimized pose data of that frame and the average is greater than a preset value, the optimized pose data deviates too much from its neighbors (i.e., is not smooth), and the check on that optimized pose data is judged to have failed. For the optimized pose data of the k-th frame, for example, the optimized pose data of the (k-1)-th frame and the (k+1)-th frame can be averaged and the optimized pose data of the k-th frame compared with that average; if the difference between the optimized pose data of the k-th frame and the average is larger than the preset value, the check on the optimized pose data is judged to have failed.
In this embodiment, the course angle in the optimized pose data can be used to check the optimized pose data. For example, for the optimized pose data of the k-th frame, the average of the course angle of the k-th frame, the course angle of the previous frame (the (k-1)-th frame) and the course angle of the next frame (the (k+1)-th frame) is calculated; if the difference between the course angle of the k-th frame and this average exceeds a certain threshold (such as 1 degree), the check on the optimized pose data is considered to have failed. In this case, the number of loop constraint conditions may be increased.
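The course-angle smoothness check might be sketched as follows; the 1-degree threshold follows the example above, while the use of exactly one neighbouring frame on each side and the handling of angle wrap-around are simplifications of the sketch:

```python
import numpy as np

def check_pose_smoothness(course_deg, max_dev_deg=1.0):
    """Flag optimised poses whose course (heading) angle deviates from the mean
    of the current, previous and next frames by more than `max_dev_deg` degrees.

    course_deg: (K,) course angles of the optimised pose data, in degrees.
    Returns a boolean array; True means the check on that frame failed.
    """
    h = np.asarray(course_deg, dtype=float)
    failed = np.zeros(len(h), dtype=bool)
    for k in range(1, len(h) - 1):
        neighbour_mean = (h[k - 1] + h[k] + h[k + 1]) / 3.0
        # Wrap-around at 360 degrees is ignored here for brevity.
        if abs(h[k] - neighbour_mean) > max_dev_deg:
            failed[k] = True          # increase the number of loop constraints
    return failed
```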
Generally, after the optimized pose data of the first optimization round is obtained, the optimized pose data is checked; optimized pose data that fails the check has low confidence, so the search range of the optimized pose data needs to be expanded and/or the number of loop constraint conditions needs to be increased.
The method of the embodiments of the present disclosure can perform centimeter-level three-dimensional reconstruction of the target area and obtain accurate map information. In areas where traditional navigation positioning is limited (such as inside a tunnel or under an overpass), map construction can still be completed successfully by adjusting the number of loop constraint conditions. Moreover, a GPU can be used to process the multiple frames of initial point cloud data in parallel, which provides good parallelism, so the time required to construct the map is shorter than with the traditional method.
In some embodiments, as shown in fig. 5, embodiments of the present disclosure also provide a three-dimensional point cloud mapping system, including:
a processing device 501 and a laser radar system 502 mounted on a movable platform;
the laser radar system 502 is used for collecting multi-frame initial point cloud data;
the processing device 501 is configured to determine at least one loop constraint condition; process multiple frames of the pose data according to the at least one loop constraint condition to obtain optimized pose data of each frame of the initial point cloud data; and construct a three-dimensional point cloud map based on the multiple frames of initial point cloud data according to the optimized pose data; the loop constraint condition is determined from pose data of two frames whose spatial poses differ within a first preset range but which are discontinuous in time sequence.
In some embodiments, the three-dimensional point cloud mapping system further comprises: a navigation system 503 mounted on the movable platform; the navigation system 503 is configured to obtain pose data of the laser radar system when acquiring multiple frames of the initial point cloud data.
In some embodiments, the navigation system 503 includes a first navigation system and a second navigation system. The first navigation system is used for outputting first navigation data when the laser radar system collects the multiple frames of initial point cloud data; the second navigation system is used for outputting second navigation data when the laser radar system collects the multiple frames of initial point cloud data; and the processing device is further configured to acquire the pose data based on the first navigation data and the second navigation data.
In some embodiments, the processing device 501 is further configured to: and aligning the pose data with the initial point cloud data.
In some embodiments, the processing device 501 aligns the pose data with the initial point cloud data, including: aligning the pose data with the initial point cloud data based on a global timestamp of the navigation system and a global timestamp of the lidar system.
In some embodiments, the plurality of frames of initial point cloud data comprises initial point cloud data acquired by the lidar system at least twice as the movable platform passes the same location.
In some embodiments, the processing device 501 is further configured to: adjusting the number of loop constraints.
In some embodiments, the processing device 501 adjusts the number of loop back constraints, including: adjusting the number of the loop constraint conditions according to the confidence of the pose data; and/or adjusting the number of the loopback constraints according to the three-dimensional size information of at least one target object in the three-dimensional point cloud map.
In some embodiments, the confidence level of the pose data is determined according to the strength of a localization signal that localizes the movable platform and/or the accuracy of the three-dimensional point cloud map.
In some embodiments, the processing device 501 is further configured to: sampling a three-dimensional point cloud map within a certain range at intervals on a moving track of the movable platform; calculating the height difference of each ground point in the sampled three-dimensional point cloud map; and determining the accuracy of the three-dimensional point cloud map according to the height difference.
In some embodiments, the processing device 501 determines the accuracy of the three-dimensional point cloud map according to the height difference, including: if the height difference is larger than a preset height threshold, judging that the accuracy of the three-dimensional point cloud map is lower than a preset accuracy threshold; and/or if the height difference is smaller than or equal to the height threshold, judging that the accuracy of the three-dimensional point cloud map is not lower than a preset accuracy threshold.
In some embodiments, the processing device 501 adjusts the number of loop constraints according to the confidence of the pose data, including: if the confidence is smaller than a preset confidence threshold, increasing the number of the loop-back constraint conditions; and/or reducing the number of the loop-back constraint conditions if the confidence is greater than or equal to a preset confidence threshold.
In some embodiments, the processing device 501 adjusts the number of looping constraints according to the three-dimensional size information of at least one target object in the three-dimensional point cloud map, including: calculating a difference between the three-dimensional size information and reference three-dimensional size information of the target object; and adjusting the number of the loop constraint conditions according to the difference value.
In some embodiments, a loop constraint condition is also determined from pose data of two frames whose spatial poses differ within a second preset range and whose orientations are opposite.
In some embodiments, the initial point cloud data comprises a registered continuous plurality of frames of point cloud data.
In some embodiments, the processing device 501 is further configured to: and filtering out the point cloud data corresponding to the moving object from the initial point cloud data.
In some embodiments, the processing device 501 processes the pose data according to the at least one loop constraint condition to obtain optimized pose data of each frame of the initial point cloud data, including: and solving an optimization objective function of the graph by taking the pose data as a graph vertex and taking the loop constraint condition as an edge of the graph so as to acquire the optimization pose data.
In some embodiments, the processing device 501 solves the optimization objective function of the graph with the pose data as graph vertices and the loop constraints as graph edges to obtain the optimization pose data, including: and solving an optimization objective function of the graph by taking the pose data as a graph vertex and taking the loop constraint condition and the relative pose relation between adjacent frame pose data as edges of the graph so as to acquire the optimization pose data.
In some embodiments, the optimized pose data is looked up from a range of pose data; wherein the search range of the optimized pose data is set according to the intensity of a positioning signal for positioning the movable platform and/or the confidence of the optimized pose data.
In some embodiments, the processing device 501 constructs a three-dimensional point cloud map based on multiple frames of the initial point cloud data according to the optimized pose data, including: checking the optimized pose data; and constructing a three-dimensional point cloud map based on the initial point cloud data of multiple frames according to the successfully verified optimized pose data.
In some embodiments, the processing device 501 checks the optimized pose data, including: calculating the average value of the optimized pose data and the optimized pose data of a plurality of adjacent frames; calculating a difference between the optimized pose data and the average; and checking the optimized pose data according to the difference value.
In some embodiments, the processing device 501 checking the optimized pose data according to the difference includes: if the difference is larger than the preset value, judging that the check on the optimized pose data has failed; and/or, if the difference is smaller than or equal to the preset value, judging that the check on the optimized pose data has succeeded.
In some embodiments, the processing device 501 constructs a three-dimensional point cloud map based on multiple frames of the initial point cloud data according to the optimized pose data, including: registering multiple frames of the initial point cloud data according to the optimized pose data; and constructing the three-dimensional point cloud map according to the registered multi-frame initial point cloud data.
The specific embodiment of the method executed by the processing device 501 in the three-dimensional point cloud map building system is described in detail in the embodiments of the three-dimensional point cloud map building method disclosed in the present disclosure, and details are not repeated here.
In some embodiments, the present disclosure also provides a three-dimensional point cloud mapping apparatus comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of any embodiment when executing the program.
Fig. 6 is a schematic diagram illustrating a hardware structure of a more specific three-dimensional point cloud mapping apparatus according to an embodiment of the present disclosure, where the apparatus may include: a processor 601, a memory 602, an input/output interface 603, a communication interface 604, and a bus 605. Wherein the processor 601, the memory 602, the input/output interface 603 and the communication interface 604 are communicatively connected to each other within the device via a bus 605.
The processor 601 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present specification.
The memory 602 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 602 may store an operating system and other application programs; when the technical solutions provided in the embodiments of the present specification are implemented in software or firmware, the relevant program code is stored in the memory 602 and called by the processor 601 for execution.
The input/output interface 603 is used for connecting an input/output module to realize information input and output. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 604 is used for connecting a communication module (not shown in the figure) to realize communication interaction between the device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 605 includes a path that transfers information between the various components of the device, such as processor 601, memory 602, input/output interface 603, and communication interface 604.
It should be noted that although the above-mentioned device only shows the processor 601, the memory 602, the input/output interface 603, the communication interface 604 and the bus 605, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
In some embodiments, the present disclosure also provides a computer-readable storage medium having stored thereon a number of computer instructions which, when executed, implement the steps of the method of any of the embodiments.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape and magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
From the above description of the embodiments, it is clear to those skilled in the art that the embodiments of the present disclosure can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the embodiments of the present specification may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments of the present specification.
The embodiments in the present specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made between them, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus embodiments are described relatively briefly because they are substantially similar to the method embodiments, and reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the functions of the modules may be implemented in one or more pieces of software and/or hardware when implementing the embodiments of the present disclosure. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment, which those of ordinary skill in the art can understand and implement without inventive effort.
The technical features in the above embodiments may be combined arbitrarily as long as the combinations involve no conflict or contradiction; for reasons of space these combinations are not described one by one, but any such combination of the technical features in the above embodiments also falls within the scope disclosed in the present specification.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (48)

  1. A three-dimensional point cloud map construction method is characterized by comprising the following steps:
    acquiring multi-frame initial point cloud data acquired by a laser radar system carried on a movable platform, and acquiring pose data when the laser radar system acquires the multi-frame initial point cloud data;
    determining at least one loop constraint condition, wherein the loop constraint condition is determined based on pose data of two frames whose difference in spatial pose is within a first preset range and which are discontinuous in time sequence;
    processing the pose data of multiple frames according to the at least one loop constraint condition to obtain optimized pose data of the initial point cloud data of each frame;
    and constructing a three-dimensional point cloud map based on the initial point cloud data of multiple frames according to the optimized pose data.
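The loop constraint of claim 1 can be pictured concretely: a pair of pose frames is a loop-closure candidate when the two spatial poses differ by less than a preset bound while the frames are far apart in the acquisition sequence, i.e. the platform has returned to a place visited earlier. Below is a minimal sketch of such candidate selection, assuming poses are given as 4x4 homogeneous matrices and using illustrative thresholds; it is not the patented implementation.

    import numpy as np

    def loop_candidates(poses, max_translation=5.0, min_index_gap=100):
        """Return index pairs (i, j) whose spatial poses are close (difference
        within a first preset range) but which are discontinuous in time
        sequence (index gap of at least min_index_gap)."""
        pairs = []
        for i in range(len(poses)):
            for j in range(i + min_index_gap, len(poses)):
                # Translation part of the spatial pose difference.
                d = np.linalg.norm(poses[i][:3, 3] - poses[j][:3, 3])
                if d < max_translation:
                    pairs.append((i, j))
        return pairs

Each retained pair would then contribute one loop constraint condition to the subsequent pose optimization.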
  2. The method according to claim 1, characterized in that a navigation system is further mounted on the movable platform, and the pose data is acquired based on navigation data of the navigation system.
  3. The method of claim 2, wherein the navigation system comprises a first navigation system and a second navigation system;
    the acquiring of the pose data when the laser radar system collects multiple frames of the initial point cloud data comprises the following steps:
    acquiring first navigation data output by the first navigation system and second navigation data output by the second navigation system;
    acquiring the pose data based on the first navigation data and the second navigation data.
  4. The method of claim 2, further comprising:
    and aligning the pose data with the initial point cloud data.
  5. The method of claim 4, wherein the aligning the pose data with the initial point cloud data comprises:
    aligning the pose data with the initial point cloud data based on a global timestamp of the navigation system and a global timestamp of the lidar system.
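For the alignment in claims 4-5, one common realization is to interpolate the navigation poses at each LiDAR frame's global timestamp, so that every frame of initial point cloud data carries a pose sampled at the same instant. A simplified, translation-only sketch follows (rotations would normally be interpolated separately, e.g. by spherical interpolation); all names and shapes are illustrative assumptions.

    import numpy as np

    def align_poses_to_frames(nav_times, nav_positions, lidar_times):
        """nav_times: (M,) global timestamps of the navigation system.
        nav_positions: (M, 3) positions reported by the navigation system.
        lidar_times: (K,) global timestamps of the point cloud frames.
        Returns a (K, 3) array of positions aligned to each LiDAR frame."""
        return np.stack(
            [np.interp(lidar_times, nav_times, nav_positions[:, k]) for k in range(3)],
            axis=1,
        )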
  6. The method of claim 1, wherein the plurality of frames of initial point cloud data comprises initial point cloud data collected by the lidar system at least twice as the movable platform passes through the same location.
  7. The method of claim 1, further comprising:
    adjusting the number of loop constraints.
  8. The method of claim 7, wherein the adjusting the number of loop constraints comprises:
    adjusting the number of the loop constraint conditions according to the confidence of the pose data; and/or
    adjusting the number of the loop constraint conditions according to three-dimensional size information of at least one target object in the three-dimensional point cloud map.
  9. The method of claim 8, wherein the confidence level of the pose data is determined based on the strength of a localization signal that localizes the movable platform and/or the accuracy of the three-dimensional point cloud map.
  10. The method of claim 9, further comprising:
    sampling the three-dimensional point cloud map within a certain range at intervals along the moving track of the movable platform;
    calculating the height difference of the ground points in the sampled three-dimensional point cloud map;
    and determining the accuracy of the three-dimensional point cloud map according to the height difference.
  11. The method of claim 10, wherein determining the accuracy of the three-dimensional point cloud map from the height difference comprises:
    if the height difference is larger than a preset height threshold, judging that the accuracy of the three-dimensional point cloud map is lower than a preset accuracy threshold; and/or
    if the height difference is smaller than or equal to the height threshold, judging that the accuracy of the three-dimensional point cloud map is not lower than the preset accuracy threshold.
  12. The method of claim 8, wherein the adjusting the number of loop constraints according to the confidence of the pose data comprises:
    if the confidence is smaller than a preset confidence threshold, increasing the number of the loop constraint conditions; and/or
    if the confidence is greater than or equal to the preset confidence threshold, reducing the number of the loop constraint conditions.
  13. The method of claim 8, wherein the adjusting the number of loop constraint conditions according to the three-dimensional size information of the at least one target object in the three-dimensional point cloud map comprises:
    calculating a difference value between the three-dimensional size information and reference three-dimensional size information of the target object;
    and adjusting the number of the loop constraint conditions according to the difference value.
  14. The method according to claim 1, wherein the loop constraint condition is further determined based on pose data of two frames whose difference in spatial pose is within a second preset range and whose orientations are opposite.
  15. The method of claim 1, wherein the initial point cloud data comprises a plurality of consecutive frames of registered point cloud data.
  16. The method of claim 1, further comprising:
    and filtering out the point cloud data corresponding to the moving object from the initial point cloud data.
  17. The method of claim 1, wherein the processing the pose data according to the at least one loopback constraint to obtain optimized pose data for each frame of the initial point cloud data comprises:
    and solving an optimization objective function of the graph by taking the pose data as a graph vertex and the loop constraint condition as an edge of the graph so as to acquire the optimization pose data.
  18. The method of claim 17, wherein solving an optimization objective function of a graph with the pose data as graph vertices and the loop constraints as graph edges to obtain the optimized pose data comprises:
    and solving an optimization objective function of the graph by taking the pose data as a graph vertex and taking the loop constraint condition and the relative pose relation between adjacent frame pose data as edges of the graph so as to acquire the optimization pose data.
  19. The method of claim 17, wherein the optimized pose data is searched for within a search range of pose data, wherein the search range of the optimized pose data is set according to the strength of a positioning signal for positioning the movable platform and/or the confidence of the optimized pose data.
  20. The method of claim 1, wherein constructing a three-dimensional point cloud map based on a plurality of frames of the initial point cloud data according to the optimized pose data comprises:
    checking the optimized pose data;
    and constructing a three-dimensional point cloud map based on the initial point cloud data of multiple frames according to the successfully verified optimized pose data.
  21. The method according to claim 20, wherein the verifying the optimized pose data comprises:
    calculating the average value of the optimized pose data and the optimized pose data of a plurality of adjacent frames;
    calculating a difference between the optimized pose data and the average;
    and checking the optimized pose data according to the difference value.
  22. The method according to claim 21, wherein the verifying the optimized pose data according to the difference value comprises:
    if the difference value is larger than a preset value, judging that the optimized pose data is verified successfully; and/or
    if the difference value is less than or equal to the preset value, judging that the verification of the optimized pose data fails.
  23. The method of claim 1, wherein constructing a three-dimensional point cloud map based on a plurality of frames of the initial point cloud data according to the optimized pose data comprises:
    registering multiple frames of the initial point cloud data according to the optimized pose data;
    and constructing the three-dimensional point cloud map according to the registered multi-frame initial point cloud data.
  24. A three-dimensional point cloud map construction system, comprising:
    a processing device and a laser radar system mounted on a movable platform;
    the laser radar system is used for collecting multi-frame initial point cloud data;
    the processing device is configured to: determine at least one loop constraint condition; process a plurality of frames of the pose data according to the at least one loop constraint condition to obtain optimized pose data of each frame of the initial point cloud data; and construct a three-dimensional point cloud map based on the plurality of frames of the initial point cloud data according to the optimized pose data; wherein the loop constraint condition is determined based on pose data of two frames whose difference in spatial pose is within a first preset range and which are discontinuous in time sequence.
  25. The three-dimensional point cloud mapping system of claim 24, further comprising:
    a navigation system mounted on the movable platform;
    the navigation system is used for acquiring pose data when the laser radar system collects multiple frames of the initial point cloud data.
  26. The three-dimensional point cloud mapping system of claim 25, wherein the navigation system comprises:
    a first navigation system and a second navigation system;
    the first navigation system is used for acquiring first navigation data when the laser radar system collects multiple frames of the initial point cloud data;
    the second navigation system is used for acquiring second navigation data when the laser radar system collects multiple frames of the initial point cloud data;
    the processing device is further configured to:
    acquire the pose data based on the first navigation data and the second navigation data.
  27. The three-dimensional point cloud mapping system of claim 25, wherein the processing device is further configured to:
    and aligning the pose data with the initial point cloud data.
  28. The three-dimensional point cloud mapping system of claim 27, wherein the processing device aligns the pose data with the initial point cloud data, comprising:
    aligning the pose data with the initial point cloud data based on a global timestamp of the navigation system and a global timestamp of the lidar system.
  29. The three-dimensional point cloud mapping system of claim 24, wherein the plurality of frames of initial point cloud data comprise initial point cloud data collected by the lidar system at least twice as the movable platform passes through the same location.
  30. The three-dimensional point cloud mapping system of claim 24, wherein the processing device is further configured to:
    adjusting the number of loop constraints.
  31. The three-dimensional point cloud mapping system of claim 30, wherein the processing device adjusts the number of loop constraints, comprising:
    adjusting the number of the loop constraint conditions according to the confidence of the pose data; and/or
    adjusting the number of the loop constraint conditions according to three-dimensional size information of at least one target object in the three-dimensional point cloud map.
  32. The three-dimensional point cloud mapping system of claim 31, wherein the confidence level of the pose data is determined by the strength of a localization signal that localizes the movable platform and/or the accuracy of the three-dimensional point cloud map.
  33. The three-dimensional point cloud mapping system of claim 32, wherein the processing device is further configured to:
    sampling the three-dimensional point cloud map within a certain range at intervals along the moving track of the movable platform;
    calculating the height difference of the ground points in the sampled three-dimensional point cloud map;
    and determining the accuracy of the three-dimensional point cloud map according to the height difference.
  34. The three-dimensional point cloud mapping system of claim 33, wherein the processing device determines the accuracy of the three-dimensional point cloud map from the height difference, comprising:
    if the height difference is larger than a preset height threshold, judging that the accuracy of the three-dimensional point cloud map is lower than a preset accuracy threshold; and/or
    if the height difference is smaller than or equal to the height threshold, judging that the accuracy of the three-dimensional point cloud map is not lower than the preset accuracy threshold.
  35. The three-dimensional point cloud mapping system of claim 31, wherein the processing device adjusts the number of loop constraints according to the confidence of the pose data, comprising:
    if the confidence is smaller than a preset confidence threshold, increasing the number of the loop constraint conditions; and/or
    if the confidence is greater than or equal to the preset confidence threshold, reducing the number of the loop constraint conditions.
  36. The three-dimensional point cloud mapping system of claim 31, wherein the processing device adjusts the number of loop constraint conditions according to three-dimensional size information of at least one target object in the three-dimensional point cloud map, comprising:
    calculating a difference value between the three-dimensional size information and reference three-dimensional size information of the target object;
    and adjusting the number of the loop constraint conditions according to the difference value.
  37. The three-dimensional point cloud mapping system of claim 24, wherein the loop constraint condition is further determined based on pose data of two frames whose difference in spatial pose is within a second preset range and whose orientations are opposite.
  38. The three-dimensional point cloud mapping system of claim 24, wherein the initial point cloud data comprises a plurality of consecutive frames of registered point cloud data.
  39. The three-dimensional point cloud mapping system of claim 24, wherein the processing device is further configured to:
    and filtering out the point cloud data corresponding to the moving object from the initial point cloud data.
  40. The three-dimensional point cloud mapping system of claim 24, wherein the processing device processes the pose data according to the at least one loop constraint condition to obtain optimized pose data for each frame of the initial point cloud data, comprising:
    and solving an optimization objective function of the graph by taking the pose data as a graph vertex and the loop constraint condition as an edge of the graph so as to acquire the optimization pose data.
  41. The three-dimensional point cloud mapping system of claim 40, wherein solving an optimization objective function of a graph with the pose data as graph vertices and the loop constraints as graph edges to obtain the optimized pose data comprises:
    and solving an optimization objective function of the graph by taking the pose data as a graph vertex and taking the loop constraint condition and the relative pose relation between adjacent frame pose data as edges of the graph so as to acquire the optimization pose data.
  42. The three-dimensional point cloud mapping system of claim 40, wherein the optimized pose data is searched for within a search range of pose data, wherein the search range of the optimized pose data is set according to the strength of a positioning signal for positioning the movable platform and/or the confidence of the optimized pose data.
  43. The system of claim 24, wherein the processing device constructs a three-dimensional point cloud map based on a plurality of frames of the initial point cloud data according to the optimized pose data, comprising:
    checking the optimized pose data;
    and constructing a three-dimensional point cloud map based on the initial point cloud data of multiple frames according to the successfully verified optimized pose data.
  44. The system according to claim 42, wherein the processing device verifies the optimized pose data, comprising:
    calculating the average value of the optimized pose data and the optimized pose data of a plurality of adjacent frames;
    calculating a difference between the optimized pose data and the average;
    and checking the optimized pose data according to the difference value.
  45. The system according to claim 44, wherein the processing device verifies the optimized pose data according to the difference value, comprising:
    if the difference value is larger than a preset value, judging that the optimized pose data is verified successfully; and/or
    if the difference value is less than or equal to the preset value, judging that the verification of the optimized pose data fails.
  46. The three-dimensional point cloud map building system of claim 24, wherein the processing device builds a three-dimensional point cloud map based on a plurality of frames of the initial point cloud data according to the optimized pose data, comprising:
    registering multiple frames of the initial point cloud data according to the optimized pose data;
    and constructing the three-dimensional point cloud map according to the registered multi-frame initial point cloud data.
  47. A three-dimensional point cloud mapping apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the method of any one of claims 1 to 23.
  48. A computer readable storage medium having stored thereon computer instructions, wherein the computer instructions, when executed, implement the steps of the method of any one of claims 1 to 23.
CN201980091861.6A 2019-12-27 2019-12-27 Three-dimensional point cloud map construction method, system and equipment Active CN113424232B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/129307 WO2021128297A1 (en) 2019-12-27 2019-12-27 Method, system and device for constructing three-dimensional point cloud map

Publications (2)

Publication Number Publication Date
CN113424232A true CN113424232A (en) 2021-09-21
CN113424232B CN113424232B (en) 2024-03-15

Family

ID=76573535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980091861.6A Active CN113424232B (en) 2019-12-27 2019-12-27 Three-dimensional point cloud map construction method, system and equipment

Country Status (2)

Country Link
CN (1) CN113424232B (en)
WO (1) WO2021128297A1 (en)

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113671527B (en) * 2021-07-23 2024-08-06 国电南瑞科技股份有限公司 Accurate operation method and device for improving distribution network live working robot
CN115685236A (en) * 2021-07-31 2023-02-03 深圳市普渡科技有限公司 Robot, robot skid processing method, device and readable storage medium
CN113639745B (en) * 2021-08-03 2023-10-20 北京航空航天大学 Point cloud map construction method, device and storage medium
CN113671530B (en) * 2021-08-06 2024-01-12 北京京东乾石科技有限公司 Pose determination method and device, storage medium and electronic equipment
CN113689497B (en) * 2021-08-11 2024-08-20 影石创新科技股份有限公司 Pose optimization method, pose optimization device, pose optimization equipment and storage medium
CN113706676B (en) * 2021-08-26 2024-01-16 京东鲲鹏(江苏)科技有限公司 Model self-supervision training method and device for point cloud data
CN113534097B (en) * 2021-08-27 2023-11-24 北京工业大学 Optimization method suitable for rotation axis laser radar
CN113744236B (en) * 2021-08-30 2024-05-24 阿里巴巴达摩院(杭州)科技有限公司 Loop detection method, device, storage medium and computer program product
CN113776515B (en) * 2021-08-31 2022-06-10 南昌工学院 A robot navigation method, device, computer equipment and storage medium
CN113808196B (en) * 2021-09-09 2025-02-21 浙江商汤科技开发有限公司 Plane fusion positioning method, device, electronic equipment and storage medium
CN113920125B (en) * 2021-09-09 2025-03-21 北京三快在线科技有限公司 Laser point cloud file generation method, device, electronic device and storage medium
CN113768419B (en) * 2021-09-17 2023-06-23 安克创新科技股份有限公司 Method and device for determining sweeping direction of sweeper and sweeper
CN113920258B (en) * 2021-09-26 2025-03-04 北京三快在线科技有限公司 Map generation method, device, medium and electronic equipment
CN114137953A (en) * 2021-10-12 2022-03-04 杭州电子科技大学 Power inspection robot system based on three-dimensional laser radar and image building method
CN113936109B (en) * 2021-10-15 2025-06-13 北京百度网讯科技有限公司 Method, device, equipment and storage medium for processing high-precision map point cloud data
CN113984065B (en) * 2021-10-27 2024-11-19 山东亚历山大智能科技有限公司 A reflector map generation method and system for indoor robots
CN114119887B (en) * 2021-10-29 2025-03-07 杭州涂鸦信息技术有限公司 Method, device, computer equipment and storage medium for displaying the position of a sweeping machine
CN113936046B (en) * 2021-11-02 2025-02-21 北京京东乾石科技有限公司 Object positioning method, device, electronic device and computer readable medium
CN114063099B (en) * 2021-11-10 2025-05-23 厦门大学 Positioning method and device based on RGBD
CN114170297A (en) * 2021-11-26 2022-03-11 阿波罗智能技术(北京)有限公司 Vehicle pose processing method, device, electronic device and autonomous driving vehicle
CN114234976A (en) * 2021-11-29 2022-03-25 山东恒创智控科技有限公司 A robot localization method and system based on improved SLAM and multi-sensor fusion
CN114219908A (en) * 2021-12-14 2022-03-22 北京经纬恒润科技股份有限公司 Map construction method and device
CN114463512B (en) * 2021-12-24 2023-04-07 广州极飞科技股份有限公司 Point cloud data processing method, vectorization method and device
CN114322987B (en) * 2021-12-27 2024-02-23 北京三快在线科技有限公司 Method and device for constructing high-precision map
CN114325667B (en) * 2022-01-05 2025-07-18 上海三一重机股份有限公司 External parameter calibration method and device for combined navigation equipment and laser radar
CN114359499B (en) * 2022-01-11 2025-05-13 东北大学 Lane 3D reconstruction system based on rotating radar and IMU
CN114419268B (en) * 2022-01-20 2024-06-28 湖北亿咖通科技有限公司 Track edge connecting method for incremental map construction, electronic equipment and storage medium
CN114494075A (en) * 2022-02-14 2022-05-13 北京路凯智行科技有限公司 Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium
CN114706070A (en) * 2022-02-22 2022-07-05 惠州市德赛西威智能交通技术研究院有限公司 Automatic parking space searching method and system based on 4D millimeter wave radar
CN114720963A (en) * 2022-03-04 2022-07-08 北京三快在线科技有限公司 Parameter correction method and device
CN114723672B (en) * 2022-03-09 2024-08-20 杭州易现先进科技有限公司 Method, system, device and medium for three-dimensional reconstruction data acquisition and verification
CN114659513B (en) * 2022-03-11 2024-04-09 北京航空航天大学 Unstructured road-oriented point cloud map construction and maintenance method
CN114372981B (en) * 2022-03-21 2022-06-17 季华实验室 T-shaped workpiece weld joint identification method and device, electronic equipment and storage medium
CN114782689B (en) * 2022-04-01 2025-02-07 东南大学 A point cloud plane segmentation method based on multi-frame data fusion
CN114782640B (en) * 2022-04-01 2024-11-26 东南大学 Fast laser SLAM algorithm based on intensity information matching of quadruped robot
CN114742921B (en) * 2022-04-11 2025-06-27 智道网联科技(北京)有限公司 Laser radar mapping optimization method, device, electronic equipment, and storage medium
CN114972678B (en) * 2022-04-29 2024-07-19 武汉大学 Method for reconstructing rapid three-dimensional model of closed space
CN115113225A (en) * 2022-06-09 2022-09-27 阿马尔(上海)机器人有限公司 Mobile robot relocation method and mobile robot based on data fusion
CN115079202B (en) * 2022-06-16 2024-08-27 智道网联科技(北京)有限公司 Laser radar image construction method and device, electronic equipment and storage medium
CN115127543A (en) * 2022-07-22 2022-09-30 上海于万科技有限公司 A method and system for removing abnormal edges in laser mapping optimization
CN117554990B (en) * 2022-08-03 2025-06-06 北京氢源智能科技有限公司 Lidar SLAM positioning and navigation method and unmanned aerial vehicle system
CN115423970A (en) * 2022-08-17 2022-12-02 武汉中仪物联技术股份有限公司 Pipeline scanning method and device, electronic equipment and storage medium
CN115409986A (en) * 2022-08-29 2022-11-29 广州高新兴机器人有限公司 Laser SLAM loop detection method and device based on point cloud semantics and robot
CN115937383B (en) * 2022-09-21 2023-10-10 北京字跳网络技术有限公司 Method, device, electronic equipment and storage medium for rendering image
CN115451941B (en) * 2022-09-23 2025-03-28 广州高新兴机器人有限公司 A scanning method, device and medium based on walking of people
CN115616606A (en) * 2022-09-29 2023-01-17 新石器慧通(北京)科技有限公司 Point cloud map establishment method, device, computer equipment and storage medium
CN115937551A (en) * 2022-10-19 2023-04-07 浙江静远电力实业有限公司 Optimization method considering multiple constraint mobile measurement point cloud position precision
CN116091564A (en) * 2022-11-16 2023-05-09 北京百度网讯科技有限公司 Confidence determining method, device, equipment and storage medium based on local map
CN115830550A (en) * 2022-12-08 2023-03-21 亿咖通(湖北)技术有限公司 Method and device for detecting motion state of target
CN116007658A (en) * 2023-02-02 2023-04-25 潍柴动力股份有限公司 Combined calibration method, device and controller of laser radar and integrated navigation system
CN116819544B (en) * 2023-07-03 2024-11-15 广州赛特智能科技有限公司 Laser positioning method, device, equipment and storage medium
CN116908818B (en) * 2023-07-13 2024-05-28 广东喜讯智能科技有限公司 Laser radar calibration method and device based on RTK unmanned aerial vehicle and storage medium
CN117671008B (en) * 2023-12-14 2025-01-28 安徽蔚来智驾科技有限公司 Position and posture estimation method, readable storage medium and intelligent device
CN117761717B (en) * 2024-02-21 2024-05-07 天津大学四川创新研究院 Automatic loop three-dimensional reconstruction system and operation method
CN118149796B (en) * 2024-05-09 2024-08-09 广汽埃安新能源汽车股份有限公司 Map construction method and device under degradation environment, electronic equipment and storage medium
CN118224980B (en) * 2024-05-23 2024-07-23 烟台军诺智能科技有限公司 Target pose measurement method and system based on wireless optical communication
CN118710723A (en) * 2024-07-09 2024-09-27 长江三峡集团实业发展(北京)有限公司 Tunnel anchor identification method, device, equipment and medium based on three-dimensional laser

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180088234A1 (en) * 2016-09-27 2018-03-29 Carnegie Mellon University Robust Localization and Localizability Prediction Using a Rotating Laser Scanner
CN108267141B (en) * 2016-12-30 2023-01-10 法法汽车(中国)有限公司 Road point cloud data processing system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280866A (en) * 2016-12-30 2018-07-13 乐视汽车(北京)有限公司 Road Processing Method of Point-clouds and system
US20190219700A1 (en) * 2017-11-17 2019-07-18 DeepMap Inc. Iterative closest point process based on lidar with integrated motion estimation for high definition maps
CN108401461A (en) * 2017-12-29 2018-08-14 深圳前海达闼云端智能科技有限公司 Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product
CN108550318A (en) * 2018-03-12 2018-09-18 浙江大华技术股份有限公司 A kind of method and device of structure map
US20190323843A1 (en) * 2018-07-04 2019-10-24 Baidu Online Network Technology (Beijing) Co., Ltd. Method for generating a high precision map, apparatus and storage medium
CN109507677A (en) * 2018-11-05 2019-03-22 浙江工业大学 A kind of SLAM method of combination GPS and radar odometer

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113985436A (en) * 2021-11-04 2022-01-28 广州中科云图智能科技有限公司 Unmanned aerial vehicle three-dimensional map construction and positioning method and device based on SLAM
CN114067067A (en) * 2021-11-11 2022-02-18 北京云迹科技有限公司 High-precision map drawing method, device, medium and equipment
CN114283200A (en) * 2021-12-30 2022-04-05 北京三快在线科技有限公司 Pose determination method and device, storage medium and electronic equipment
CN114935338A (en) * 2022-01-24 2022-08-23 杭州申昊科技股份有限公司 Interactive map construction method
CN116086468A (en) * 2022-12-30 2023-05-09 杭州飞步科技有限公司 Vehicle positioning method and device, vehicle and storage medium
CN115952139A (en) * 2023-03-14 2023-04-11 武汉芯云道数据科技有限公司 Multi-frame three-dimensional image processing method and system for mobile equipment
CN117351167A (en) * 2023-12-05 2024-01-05 深圳市其域创新科技有限公司 Three-dimensional map construction method, device, equipment and storage medium based on GPU
CN117351167B (en) * 2023-12-05 2024-07-09 深圳市其域创新科技有限公司 Three-dimensional map construction method, device, equipment and storage medium based on GPU
WO2025119250A1 (en) * 2023-12-05 2025-06-12 深圳市其域创新科技有限公司 Gpu-based three-dimensional map construction method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN113424232B (en) 2024-03-15
WO2021128297A1 (en) 2021-07-01

Similar Documents

Publication Publication Date Title
CN113424232B (en) Three-dimensional point cloud map construction method, system and equipment
US11915099B2 (en) Information processing method, information processing apparatus, and recording medium for selecting sensing data serving as learning data
US12019173B2 (en) Lane-level navigation system for ground vehicles with lidar and cellular signals
JP2022542289A (en) Mapping method, mapping device, electronic device, storage medium and computer program product
US10352718B2 (en) Discovering points of entry to a location
CN104848867B (en) The pilotless automobile Combinated navigation method of view-based access control model screening
KR20200121274A (en) Method, apparatus, and computer readable storage medium for updating electronic map
CN114111775B (en) Multi-sensor fusion positioning method and device, storage medium and electronic equipment
US20220270358A1 (en) Vehicular sensor system calibration
JP2019145089A (en) Method and device for fusing point cloud data
CN112380312A (en) Laser map updating method based on grid detection, terminal and computer equipment
CN109556569B (en) Topographic map surveying and mapping method and device
CN113838129B (en) Method, device and system for obtaining pose information
CN113989451A (en) High-precision map construction method and device and electronic equipment
CN114119886A (en) High-precision map point cloud reconstruction method and device, vehicle, equipment and storage medium
EP3789797B1 (en) Method and apparatus for evaluating data, device, and computer-readable storage medium
CN110018503B (en) Vehicle positioning method and positioning system
CN110223223A (en) Street scan method, device and scanner
Han et al. Carla-loc: synthetic slam dataset with full-stack sensor setup in challenging weather and dynamic environments
Lucks et al. Improving trajectory estimation using 3D city models and kinematic point clouds
Tamimi et al. Performance Assessment of a Mini Mobile Mapping System: Iphone 14 pro Installed on a e-Scooter
CN118111423A (en) Low-cost heterogeneous sensor map building and positioning optimization method based on vehicle-road cooperation
EP4160153B1 (en) Methods and systems for estimating lanes for a vehicle
Gu et al. Ubiquitous and Low-Cost Generation of Elevation Pseudo Ground Control Points
Li et al. AutoMine: A Multimodal Dataset for Robot Navigation in Open‐Pit Mines

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240515

Address after: Building 3, Xunmei Science and Technology Plaza, No. 8 Keyuan Road, Science and Technology Park Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057, 1634

Patentee after: Shenzhen Zhuoyu Technology Co.,Ltd.

Country or region after: China

Address before: 518057 Shenzhen Nanshan High-tech Zone, Shenzhen, Guangdong Province, 6/F, Shenzhen Industry, Education and Research Building, Hong Kong University of Science and Technology, No. 9 Yuexingdao, South District, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SZ DJI TECHNOLOGY Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right