
CN115773747A - High-precision map generation method, apparatus, device and storage medium - Google Patents

High-precision map generation method, apparatus, device and storage medium

Info

Publication number
CN115773747A
CN115773747A (application CN202211566846.2A)
Authority
CN
China
Prior art keywords
point cloud
frame
current
laser point
estimation value
Prior art date
Legal status
Granted
Application number
CN202211566846.2A
Other languages
Chinese (zh)
Other versions
CN115773747B (en)
Inventor
沈嘉成
巫远
黄永明
夏井新
Current Assignee
Network Communication and Security Zijinshan Laboratory
Original Assignee
Network Communication and Security Zijinshan Laboratory
Priority date
Filing date
Publication date
Application filed by Network Communication and Security Zijinshan Laboratory
Priority to CN202211566846.2A
Publication of CN115773747A
Application granted
Publication of CN115773747B
Legal status: Active

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a high-precision map generation method, apparatus, device, and storage medium. A first pose estimate is computed from the IMU data between the most recent keyframe and the current laser point cloud frame, together with the back-end-optimized pose of the previous laser point cloud frame. Using the first pose estimate and the IMU data recorded while the current laser point cloud was being scanned, the current laser point cloud is de-distorted and feature points are extracted. A current descriptor is computed from the de-distorted current frame and used to judge whether a loop closure is detected; if so, a second pose estimate is computed. A third pose estimate is obtained from the first pose estimate and the feature points. The second and third pose estimates are fed into a factor graph, yielding the back-end-optimized pose estimate of the current frame. Finally, the de-distorted current laser point cloud frame is stitched into the historical point cloud map according to the back-end-optimized pose estimate to generate a static point cloud map, improving map accuracy.

Description

High-precision map generation method, apparatus, device and storage medium

Technical Field

The present invention relates to the technical field of computer vision and image processing, and in particular to a high-precision map generation method, apparatus, device, and storage medium.

Background

With the development of artificial intelligence, mapping and localization by fusing LiDAR, camera, and GPS sensors has become an important foundation for the autonomous navigation of intelligent unmanned vehicles. Existing navigation maps are generally road-level maps that carry little map information and have meter-level accuracy. A high-precision map, by contrast, can be accurate to the 10-20 cm level and contains map information such as traffic signs, lane centerlines, and zebra crossings, providing a more comprehensive data foundation for autonomous driving.

Existing LiDAR-based high-precision mapping methods suffer from map drift in long stretches of similar scenery, such as tunnels; they are not robust and are easily affected by single-sensor errors. A typical high-precision mapping pipeline collects data with a device equipped with a monocular/stereo/depth camera or a 2D/3D LiDAR, then matches adjacent data frames, usually with the ICP (Iterative Closest Point) or NDT (Normal Distributions Transform) algorithm, to obtain the trajectory of the device (i.e., the platform integrating the above sensors, cameras, or LiDARs, equivalent to the body of the autonomous vehicle). The trajectory comprises the device's three-dimensional position and orientation, together called its pose, and the trajectory computed by adjacent-frame matching is commonly called the front-end odometry. Stitching the point clouds obtained from the camera or LiDAR along the front-end odometry yields a local point cloud map.
Because the front-end odometry is computed from adjacent frames, errors are inevitable, and they accumulate as the device moves. To keep accumulated error from shifting the map, all point cloud data in the local map must be jointly optimized. This optimization may also incorporate IMU (Inertial Measurement Unit) data (acceleration and angular velocity), RTK (Real-Time Kinematic) data (a real-time differential GPS technique based on carrier-phase observations, whose sensors can output longitude/latitude/altitude with 1-2 cm accuracy), loop closure detections, and so on; this joint optimization is usually called the back-end optimization thread. Finally, the optimized trajectory is used to stitch the corresponding point clouds into a complete map, a process called the map processing thread. The loop closure detection mentioned above is a common means of suppressing accumulated error. Specifically, while collecting data the device passes through places it has visited before; loop closure detection measures the similarity between the current frame's point cloud and the historical map data, and if the similarity exceeds a threshold, the device is judged to have returned to a historical trajectory point. An adjacent-frame matching strategy (ICP or another matching algorithm) is then applied to compute the pose transform from the current frame to the historical map and cancel the accumulated error. A high-precision mapping pipeline therefore usually comprises front-end odometry, back-end optimization, loop closure detection, and map processing threads.
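The adjacent-frame matching that the background describes can be sketched as a minimal point-to-point ICP loop. This is an illustrative sketch only, not the patent's method: real front ends add outlier rejection, point-to-plane residuals, and convergence checks.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: align src (N,3) onto dst (M,3).

    Returns a 4x4 homogeneous transform T such that T applied to src
    approximately reproduces dst.
    """
    T = np.eye(4)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        # 1. data association: nearest neighbor in the target cloud
        _, idx = tree.query(cur)
        matched = dst[idx]
        # 2. closed-form rigid alignment of the matched pairs (SVD)
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        # 3. apply the increment and accumulate it into T
        cur = cur @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T
```

Each iteration alternates data association with a closed-form alignment; the text's criticism applies exactly here, since every point participates in the matching.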

In the front-end odometry process above, the pose estimate is obtained by matching adjacent frames. If the ICP algorithm is used, every point of one frame is matched against every point of the other, which easily produces false matches, lowers the accuracy of the pose estimate and hence of the map, and is computationally expensive. If the NDT algorithm is used and the map is stored as voxels, loop closure detection becomes impossible, which greatly degrades long-distance mapping accuracy. Meanwhile, most algorithms use ICP directly as the similar-frame matching strategy in the loop closure thread, giving low detection accuracy and poor robustness; for example, no loop closure can be formed when the LiDAR scanning angles differ widely.

Summary of the Invention

The invention discloses a high-precision map generation method, apparatus, device, and storage medium, solving the problem that existing map generation methods have low accuracy and poor robustness, which leads to inaccurate pose estimation and in turn degrades map accuracy.

A first aspect of the embodiments of the present invention provides a high-precision map generation method, including:

acquiring current laser point cloud frame data and IMU data;

if the current laser point cloud frame is the first frame collected by the LiDAR, taking it as the first keyframe, generating a point cloud map from that keyframe, and re-executing the steps above; otherwise, estimating a first pose estimate of the current laser point cloud frame from the IMU data between the most recent keyframe and the current frame and from the back-end-optimized pose of the previous laser point cloud frame;

de-distorting the current laser point cloud according to the first pose estimate and the IMU data recorded while the current laser point cloud was being scanned, obtaining the de-distorted current laser point cloud frame, and extracting feature points from it;

computing a current descriptor from the de-distorted current frame and using it to judge whether a loop closure is detected; if a loop closure is detected, estimating a second pose estimate of the current frame; otherwise, setting the second pose estimate to null;

obtaining a third pose estimate from the first pose estimate and the feature points; feeding the second and third pose estimates into a factor graph to obtain the back-end-optimized pose estimate of the current frame;

judging whether the current laser point cloud frame can serve as a keyframe; if so, stitching the de-distorted current frame into the historical point cloud map according to the back-end-optimized pose estimate of the current frame, removing dynamic objects, and generating a static point cloud map; if not, re-executing the steps above until map generation stops.

A second aspect of the embodiments of the present invention provides a high-precision map generation apparatus, including:

an acquisition module for acquiring current laser point cloud frame data and IMU data;

an IMU preprocessing module for, if the current laser point cloud frame is the first frame collected by the LiDAR, sending it to the map generation module as the first keyframe and returning to the acquisition module; otherwise, estimating a first pose estimate of the current frame from the IMU data between the most recent keyframe and the current frame and from the back-end-optimized pose of the previous frame fed back by the back-end optimization module;

a front-end odometry module for de-distorting the current laser point cloud according to the first pose estimate and the IMU data recorded during the scan of the current laser point cloud, obtaining the de-distorted current laser point cloud frame, and extracting feature points from it;

a loop closure detection module for computing a current descriptor from the de-distorted current frame and using it to judge whether a loop closure is detected; if a loop closure is detected, estimating a second pose estimate of the current frame; otherwise, setting the second pose estimate to null;

a back-end optimization module for obtaining a third pose estimate from the first pose estimate and the feature points, and feeding the second and third pose estimates into a factor graph to obtain the back-end-optimized pose estimate of the current frame;

a map generation module for, if the current frame is the first frame, generating a point cloud map from the first keyframe; otherwise, judging whether the current frame can serve as a keyframe and, if so, stitching the de-distorted current frame into the historical point cloud map according to the back-end-optimized pose estimate of the current frame and generating a static point cloud map after removing dynamic objects; if not, returning to the acquisition module until map generation stops.

A third aspect of the embodiments of the present invention provides a map generation device, including a memory, a processor, and a computer program stored in the memory; the processor runs the computer program to execute any of the high-precision map generation methods of the embodiments of the present invention.

A fourth aspect of the embodiments of the present invention provides a readable storage medium storing a computer program which, when executed by a processor, implements any of the high-precision map generation methods of the embodiments of the present invention.

Beneficial effects: the present invention estimates the first pose estimate of the current laser point cloud frame from the IMU data between the keyframe temporally closest to the current frame and the current frame, and from the back-end-optimized pose of the previous frame; the current laser point cloud is then de-distorted using the first pose estimate and the IMU data recorded while the LiDAR scanned it, and feature points are extracted. Because a back-end-optimized pose is used in estimating the first pose, the estimate is more accurate, and so is the de-distorted current frame.

A current descriptor is computed from the de-distorted current frame and used to judge whether a loop closure is detected; if so, a second pose estimate of the current frame is computed, otherwise the second pose estimate is set to null. The invention detects loop closures with a constructed descriptor rather than with all points of the frame, reducing the false detection rate, improving loop closure accuracy and robustness, and hence improving the accuracy of the subsequently generated point cloud map.

A third pose estimate is then obtained from the first pose estimate and the feature points; the second and third pose estimates are fed into a factor graph to obtain the back-end-optimized pose of the current frame. Whether the current frame can serve as a keyframe is then judged; if so, the de-distorted current point cloud is stitched into the historical point cloud map according to its back-end-optimized pose, and a static point cloud map is generated after dynamic objects are removed.

The invention receives not only IMU data but also the back-end-optimized pose, using both for the first pose estimation of the current laser point cloud, so the first pose estimate is more accurate. The motion-distorted current point cloud is then corrected and feature points are extracted, making the corrected pose and the extracted features more precise. The descriptor reduces false matches, improving the accuracy of pose estimation and therefore of the generated map, at a small computational cost.

The high-precision map generation apparatus, device, and storage medium provided by the present invention have the same beneficial effects.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of a high-precision map generation method provided by an embodiment of the present invention;

Fig. 2 is a schematic diagram of a high-precision map generation apparatus provided by an embodiment of the present invention.

Detailed Description

The high-precision map generation method, apparatus, device, and storage medium of the present invention are further described and explained below with reference to the drawings and embodiments.

A keyframe is a laser point cloud frame selected, after filtering, for generating the point cloud map; the point cloud map is generated from a number of keyframes.

As shown in Fig. 1, the high-precision map generation method provided by this embodiment includes the following steps:

Step 1: acquire the laser point cloud data collected by the LiDAR as the current laser point cloud frame (hereafter the current frame), and the IMU data collected by the IMU sensor.

Optionally, the current longitude/latitude/altitude data collected by the RTK sensor can be acquired at the same time.

Specifically, the IMU data include the angular velocity and acceleration of the device; here the device is the platform integrating the sensors and the LiDAR, which can also be understood as the body of the autonomous vehicle.

The laser points collected during one full LiDAR scan are called one frame of laser point cloud data. Because the point cloud acquisition rate is lower than the IMU acquisition rate, several sets of IMU data fall between two consecutive point cloud frames.

After the LiDAR and IMU sensors start collecting, each acquisition is saved separately; all IMU data are stored in the processing queue Opt_Queue.

Step 2: if the current laser point cloud frame is the first frame collected by the LiDAR, take it as the first keyframe, generate a point cloud map from it, and re-execute the steps above; otherwise, estimate the first pose estimate of the current frame from the IMU data between the most recent keyframe and the current frame and from the back-end-optimized pose of the previous laser point cloud frame.

That is: if the laser point cloud data are not the first frame collected by the LiDAR, estimate the first pose estimate of the current frame from the IMU data between the (K-1)-th keyframe and the current frame and from the back-end-optimized pose of the previous frame; otherwise, save the laser point cloud data as the first keyframe, set its optimized pose to the identity matrix, generate a point cloud map from this first keyframe (via the map generation module described later), and return to Step 1 until map generation stops.

This step is performed in the IMU preprocessing module.

Keyframes are the laser point cloud frames used to generate the high-precision map: frames rich in feature points (the corner and surface feature points described below), selected at a set spacing from all collected frames. Generating the map from keyframes reduces the map's storage and computation cost. The keyframes are numbered from 1 to K-1; the method later judges whether the current laser point cloud frame can serve as the K-th keyframe and, if so, stitches it into the historical point cloud map to produce a new point cloud map.

The pose is the pose of the autonomous vehicle body, also called the device pose for short.

Setting the optimized pose to the identity matrix indicates that the device pose has not changed.

Specifically, estimating the first pose estimate of the current frame from the IMU data between the (K-1)-th keyframe and the current frame and from the back-end-optimized pose includes:

performing a joint nonlinear optimization over the IMU data between the (K-1)-th keyframe and the timestamp of the current frame, together with the optimized pose delivered by the back-end optimization module, with the goal of obtaining the maximum-probability first pose estimate of the current frame; the Gauss-Newton method is used as the optimizer here.
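The prediction that seeds such an optimization can be sketched as plain IMU dead-reckoning from the previous optimized pose. This is an illustrative sketch only, not the joint Gauss-Newton optimizer itself; it assumes gravity has already been removed from the accelerometer readings.

```python
import numpy as np

def propagate_imu(pose, vel, imu_samples):
    """Dead-reckon a pose through a sequence of IMU samples.

    pose        -- 4x4 homogeneous transform (back-end-optimized pose
                   of the previous frame); vel -- world-frame velocity.
    imu_samples -- iterable of (dt, gyro, accel), with gyro/accel as
                   body-frame 3-vectors (gravity already removed).
    Returns the propagated 4x4 pose and velocity.
    """
    R, p, v = pose[:3, :3].copy(), pose[:3, 3].copy(), vel.copy()
    for dt, w, a in imu_samples:
        # integrate orientation via the Rodrigues formula for exp([w*dt]x)
        theta = np.asarray(w) * dt
        angle = np.linalg.norm(theta)
        K = np.array([[0.0, -theta[2], theta[1]],
                      [theta[2], 0.0, -theta[0]],
                      [-theta[1], theta[0], 0.0]])
        if angle > 1e-12:
            dR = (np.eye(3) + np.sin(angle) / angle * K
                  + (1 - np.cos(angle)) / angle**2 * K @ K)
        else:
            dR = np.eye(3) + K        # small-angle limit
        a_world = R @ np.asarray(a)
        p = p + v * dt + 0.5 * a_world * dt**2
        v = v + a_world * dt
        R = R @ dR
    out = np.eye(4)
    out[:3, :3], out[:3, 3] = R, p
    return out, v
```

The Gauss-Newton step described in the text would then refine this prediction jointly with the other residuals, rather than trusting the integration alone.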

Step 3: de-distort the laser point cloud of the current frame according to the first pose estimate and the IMU data recorded during the scan of the current laser point cloud, obtain the de-distorted point cloud of the current frame, and extract feature points from it.

This step is performed in the front-end odometry module.

De-distorting the laser point cloud of the current frame, obtaining the de-distorted point cloud, and extracting feature points from it includes:

(1) computing the time at which each point in the current frame was scanned by the LiDAR;

Specifically, IMU data also cover the time span of one LiDAR revolution. The timestamp of each point cloud frame is the moment the scan starts, and at the end of each scan the LiDAR reports an end time, so the time at which each point was scanned can be computed.

(2) computing the relative rotation of each point in the LiDAR frame from the IMU data recorded during the scan of the current point cloud and from each point's scan time; computing each point's relative displacement from the pose change between the (K-1)-th keyframe and the current frame; and de-distorting the current frame's laser point cloud with the relative rotation and relative displacement to obtain the de-distorted point cloud of the current frame;

that is, adding the relative rotation and relative displacement to the original coordinates of each laser point, correcting points whose observations were distorted by the device's motion to their correct poses;

where relative rotation = angular velocity from the IMU data recorded during the scan of the current point cloud x (scan time of the point - scan start time);

relative displacement = velocity x (scan time of the point - scan start time), where velocity = (3D position in the first pose estimate - 3D position in the back-end-optimized pose of the (K-1)-th keyframe) / time interval between the current frame and the (K-1)-th keyframe.
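The two formulas above can be sketched as a per-point motion compensation. This is an illustrative sketch under a small-angle assumption (the per-point rotation is applied as a first-order cross-product correction rather than a full rotation); the function name and parameters are hypothetical, not from the patent.

```python
import numpy as np

def deskew(points, t_points, t_start, omega, velocity):
    """Motion-compensate one LiDAR sweep, following the formulas above.

    points   -- (N, 3) raw point coordinates in the sensor frame
    t_points -- (N,) absolute scan time of each point
    t_start  -- scan start time of the frame
    omega    -- angular velocity (3,) from the IMU over the sweep
    velocity -- linear velocity (3,) = (position in the first pose
                estimate - position of keyframe K-1) / frame interval
    """
    dt = (t_points - t_start)[:, None]        # (N, 1) elapsed time
    rot = dt * omega[None, :]                 # relative rotation (rad)
    disp = dt * velocity[None, :]             # relative displacement
    # first-order rotation correction p + rot x p, then the displacement
    return points + np.cross(rot, points) + disp
```

Production odometry (e.g. LOAM-style front ends) typically interpolates full orientations between IMU samples instead of the first-order correction used here.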

(3) for the de-distorted point cloud of the current frame, computing the curvature of each point; points whose curvature exceeds the corner threshold are extracted as corner feature points, and points whose curvature is below the surface threshold are extracted as surface feature points, finally giving a corner feature point set and a surface feature point set, collectively called the feature point set;

where a smaller curvature means the point is more likely to lie on a plane (a wall or the ground), and a larger curvature means it is more likely to lie on an object edge.

One way to compute the curvature of a point is to sum the depth differences between the current point and several neighboring points (for example, 10 points). The corner threshold is larger than the surface threshold; both are chosen empirically.

Further, when extracting feature points, points are taken uniformly from regions of the cloud divided by angle, preventing the feature points from clustering.

For example, the 360-degree point cloud of one scan is divided into six regions, and the corner and surface feature points in each region are distributed as evenly as possible. If a region has too many corner feature points, some of those with smaller curvature are removed; if it has too many surface feature points, some of those with larger curvature are removed. This prevents feature points from concentrating in certain regions.
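The curvature-based classification in step (3) can be sketched for a single scan ring as follows. The thresholds and neighbor count are illustrative assumptions, not the patent's empirical values, and the region-balancing step is omitted.

```python
import numpy as np

def extract_features(ranges, corner_thresh=1.0, surf_thresh=0.1, k=5):
    """Classify scan points into corner / surface features by curvature.

    ranges -- (N,) point depths along one scan ring, in scan order.
    For each point, the curvature is the absolute sum of depth
    differences to its k neighbors on each side (10 neighbors for k=5),
    as described in the text. Returns (corner_idx, surface_idx).
    """
    n = len(ranges)
    curvature = np.full(n, np.nan)
    for i in range(k, n - k):
        neighbors = np.r_[ranges[i - k:i], ranges[i + 1:i + 1 + k]]
        curvature[i] = abs(np.sum(neighbors - ranges[i]))
    corner_idx = np.flatnonzero(curvature > corner_thresh)   # edges
    surface_idx = np.flatnonzero(curvature < surf_thresh)    # planes
    return corner_idx, surface_idx
```

A flat stretch of ranges yields near-zero curvature (surface points), while a depth jump yields a large curvature (a corner point), matching the intuition given above.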

In addition, ground points are separated from the de-distorted point cloud of the current frame, and all ground points are marked in it.

步骤四,根据去畸变后的当前激光点云帧得到当前描述子,根据所述当前描述子判断是否检测出回环,若检测出回环,则估计得到当前激光点云帧的第二位姿估计值;否则,未检测到回环,将第二位姿估计值置为空;Step 4: Obtain the current descriptor according to the current laser point cloud frame after dedistortion, judge whether a loop closure is detected according to the current descriptor, if a loop closure is detected, estimate and obtain the second pose estimation value of the current laser point cloud frame ; Otherwise, no loop closure is detected, and the second pose estimation value is set to null;

具体的:specific:

根据去畸变后的当前帧激光点云帧得到当前描述子,根据所述当前描述子判断是否检测出回环,若检测出回环,则从历史点云地图中选择与去畸变后的当前帧激光点距离小于设定距离阈值的点作为匹配点对,估计得到第二位姿估计值;否则,将第二位姿估计值置为空,执行下一步;Obtain the current descriptor according to the laser point cloud frame of the current frame after dedistortion, judge whether to detect a loopback according to the current descriptor, if a loopback is detected, select the laser point of the current frame after dedistortion from the historical point cloud map Points whose distance is less than the set distance threshold are used as matching point pairs, and the second pose estimation value is estimated; otherwise, the second pose estimation value is set to empty, and the next step is performed;

通过上述回环检测,对初始的位姿估计进行了优化,得到第二位姿估计值,使得后续生成地图精度更高。Through the above-mentioned loop closure detection, the initial pose estimation is optimized, and the second pose estimation value is obtained, which makes the subsequent map generation more accurate.

This step is performed in the loop closure detection module. The historical point cloud map is the most recent point cloud map saved so far; relative to the map that this method will generate, it is the historical map.

Specifically, obtaining the descriptor from the de-distorted current laser point cloud frame includes:

The de-distorted current laser point cloud frame is divided into several sector regions by radar scanning angle and depth. The maximum emission intensity among all points in each region is recorded, and these maxima are stored in an array whose length equals the number of regions. For example, with 120 sector regions, an array of length 120 is obtained, whose values are the recorded intensities; this array is referred to as the descriptor.

Each keyframe used to generate the historical map also corresponds to a descriptor.

That is, the descriptor is: the array formed by dividing the point cloud into several sector regions by radar scanning angle and depth, recording the maximum emission intensity of all laser points in each region, and collecting these maxima across all regions.
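A minimal sketch of this descriptor, assuming 12 azimuth bins × 10 depth rings = 120 sectors as in the example above (the exact binning, range, and names are assumptions):

```python
import math

def make_descriptor(points, n_angle=12, n_depth=10, max_range=80.0):
    """points: iterable of (x, y, z, intensity). Returns a flat list of
    length n_angle * n_depth holding the maximum intensity per sector cell."""
    desc = [0.0] * (n_angle * n_depth)
    for x, y, z, inten in points:
        r = math.hypot(x, y)
        if r >= max_range:
            continue
        a = int((math.degrees(math.atan2(y, x)) % 360) / (360 / n_angle))
        d = int(r / (max_range / n_depth))
        cell = a * n_depth + d
        desc[cell] = max(desc[cell], inten)
    return desc
```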

Specifically, estimating the second pose estimation value includes the following steps:

(1) Project the de-distorted current laser point cloud frame into the coordinate system of the historical point cloud map using the first pose estimation value, and determine whether the overlap between the projected current frame and the historical point cloud map exceeds an overlap threshold. If it does, a loop closure is preliminarily detected and step (2) is executed; otherwise, no loop closure is detected, meaning the scene in the historical point cloud map has not been revisited.

For example, if 80% of the points in the current laser point cloud frame fall within the range of the historical point cloud map, a loop closure is considered detected, indicating that the scene in the historical point cloud map may have been visited before; in this case the overlap threshold is set to 80%. Performing this overlap check first improves the accuracy and efficiency of loop closure detection.
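The preliminary overlap test might be sketched as follows; reducing "within the range of the historical map" to an axis-aligned bounding-box test is an assumption made for brevity:

```python
def overlap_ratio(points, map_min, map_max):
    """points: list of (x, y, z) already transformed into the map frame by
    the first pose estimate; map_min/map_max: corners of the historical
    map's bounding box."""
    inside = sum(
        all(lo <= c <= hi for c, lo, hi in zip(p, map_min, map_max))
        for p in points
    )
    return inside / len(points) if points else 0.0

def loop_candidate(points, map_min, map_max, threshold=0.8):
    # threshold=0.8 follows the 80% example in the text
    return overlap_ratio(points, map_min, map_max) > threshold
```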

(2) Compare the current descriptor with the descriptor of each keyframe in the historical point cloud map to obtain a similarity for each keyframe. If a similarity exceeds the loop closure threshold, a loop closure is determined; from the corresponding keyframe, select the points whose distance to the de-distorted current-frame laser points is less than the set distance threshold to form matching point pairs, and compute the second pose estimation value of the current laser point cloud frame with an inter-frame matching algorithm such as ICP (Iterative Closest Point). Otherwise, no loop closure is detected.
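The descriptor comparison of step (2) could look like the following sketch; cosine similarity and the threshold value are assumptions, since the text only requires "a similarity greater than the loop closure threshold":

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def find_loop(current_desc, keyframe_descs, loop_threshold=0.9):
    """Return the index of the best-matching historical keyframe whose
    descriptor similarity exceeds the threshold, or None if no loop."""
    best_idx, best_sim = None, loop_threshold
    for i, desc in enumerate(keyframe_descs):
        sim = cosine_similarity(current_desc, desc)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```

A matched keyframe index would then seed the ICP point-pair matching described above.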

It should be noted that the step of estimating the second pose estimation value may consist of step (2) alone. In the loop closure detection stage, the prior art matches directly with the ICP algorithm, which yields a high false detection rate. The present invention builds a descriptor from the reflection intensity data of the point cloud and compares the current descriptor with the descriptor of each keyframe in the historical point cloud map to decide whether a loop closure has occurred. This improves loop closure detection accuracy and reduces false detections; when a loop closure is detected, the pose is further optimized, which in turn improves the accuracy of the subsequently generated point cloud map.

Before executing step (2), the overlap check of step (1) may be performed first, which further improves the accuracy and efficiency of loop closure detection.

Step 5: Obtain the third pose estimation value from the first pose estimation value and the feature points; input the second and third pose estimation values into the factor graph to obtain the back-end-optimized pose estimation value of the current frame.

This step is performed in the back-end optimization module.

Obtaining the third pose estimation value specifically includes:

(1) Splice a local map from the several keyframes closest to the current frame. Specifically, during splicing, the pose values of these keyframes are the accurate pose values already optimized by the back-end optimization module.

Specifically: if no loop closure was found in Step 4, the keyframes within a set distance or set time of the current frame are extracted and spliced into a local map; if a loop closure was found, those keyframes are extracted together with the historical keyframes corresponding to the loop closure stored in the loop closure container, and all are spliced into a local map, which improves map generation accuracy. The loop closure container stores the historical keyframes for which loop closures were detected.

(2) Apply voxel filtering to the local map and downsample it, i.e., reduce the number of points in the local map to lower computation time. Then transform the feature points of the current frame into the local map coordinate system using the first pose estimation value, search the local map for each feature point's nearest points, and compute the third pose estimation value of the current laser point cloud frame through the factor graph.
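The voxel downsampling step can be sketched as follows, replacing all points in an occupied voxel with their centroid (the 0.5 m voxel size is an assumption):

```python
def voxel_downsample(points, voxel=0.5):
    """points: list of (x, y, z); returns one centroid per occupied voxel."""
    cells = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)  # integer voxel coordinates
        cells.setdefault(key, []).append(p)
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in cells.values()
    ]
```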

Specifically: for each corner feature point, search the local map for the nearest several points (e.g., 2 or 3) to form a line segment, and compute the minimum point-to-line distance as the point-line residual required for optimization; for each surface feature point, search for the nearest several points (e.g., 5), construct a plane from them, and compute the minimum point-to-plane distance as the point-plane residual required for optimization. After these computations, all residuals (those of all corner and surface feature points) are input into the nonlinear optimizer built with the GTSAM library, i.e., the (existing) factor graph, which outputs the third pose estimation value of the current laser point cloud frame.
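The two residuals can be sketched as below, assuming the nearest map points have already been found; constructing the plane from exactly 3 points via a cross product is a simplification of the 5-point fit mentioned above:

```python
import math

def point_line_residual(p, a, b):
    """Distance from feature point p to the line through map points a, b."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    cross = [ab[1] * ap[2] - ab[2] * ap[1],
             ab[2] * ap[0] - ab[0] * ap[2],
             ab[0] * ap[1] - ab[1] * ap[0]]
    return math.sqrt(sum(c * c for c in cross)) / math.sqrt(sum(c * c for c in ab))

def point_plane_residual(p, a, b, c):
    """Distance from feature point p to the plane through map points a, b, c."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    n = [ab[1] * ac[2] - ab[2] * ac[1],
         ab[2] * ac[0] - ab[0] * ac[2],
         ab[0] * ac[1] - ab[1] * ac[0]]
    norm = math.sqrt(sum(x * x for x in n))
    return abs(sum(n[i] * (p[i] - a[i]) for i in range(3))) / norm
```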

Further, a velocity check is performed on this third pose estimation value. If the ratio of the motion speed from keyframe K-1 to the current frame to the motion speed from keyframe K-2 to keyframe K-1 exceeds a preset speed ratio (e.g., a factor of 2), the current frame is judged to be degraded, i.e., exhibiting the trajectory drift that occurs in long stretches of similar scenery such as tunnels. The computed third pose estimation value is then discarded and recomputed under a constant-velocity assumption: by default, the constant-velocity model takes the pose transformation from keyframe K-2 to keyframe K-1 as the pose transformation from keyframe K-1 to the current frame. Here, the motion speed from keyframe K-1 to the current frame is the pose transformation from keyframe K-1 to the current frame divided by the time difference between the current frame and keyframe K-1.

Recomputing the third pose estimation value under the constant-velocity assumption is specifically:

Recomputed third pose estimation value = pose of keyframe K-1 + (velocity from keyframe K-2 to keyframe K-1) × (time difference from the current frame to keyframe K-1).
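The degradation check and the constant-velocity recomputation reduce to the following sketch when poses are simplified to 3-D translations (the full method applies the same logic to complete pose transforms; the 2x speed ratio follows the example above):

```python
def degraded(p_k2, p_k1, p_cur, t_k2, t_k1, t_cur, max_ratio=2.0):
    """True if the K-1 -> current speed exceeds max_ratio times the
    K-2 -> K-1 speed (the tunnel-drift symptom described above)."""
    v_prev = [(a - b) / (t_k1 - t_k2) for a, b in zip(p_k1, p_k2)]
    v_cur = [(a - b) / (t_cur - t_k1) for a, b in zip(p_cur, p_k1)]
    mag = lambda v: sum(c * c for c in v) ** 0.5
    return mag(v_cur) > max_ratio * mag(v_prev)

def constant_velocity_pose(p_k2, p_k1, t_k2, t_k1, t_cur):
    """Recomputed third pose estimate: K-1 pose + previous velocity * dt."""
    return [a + (a - b) / (t_k1 - t_k2) * (t_cur - t_k1)
            for a, b in zip(p_k1, p_k2)]
```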

The factor graph also stores all previous historical pose data. Finally, the second and third pose estimation values are jointly optimized in the factor graph through the GTSAM library, yielding the back-end-optimized pose, called the back-end-optimized pose estimation value of the current laser point cloud frame; it is saved as the final pose data of the current frame, forming the trajectory of the map. The factor graph of the present invention takes few input parameters, so the computational cost is low.

Further, to make the back-end-optimized pose more accurate, the current longitude-latitude-altitude data collected by the RTK sensor in Step 1 can be combined with the longitude-latitude-altitude data of keyframe K-1 to compute the pose transformation matrix from keyframe K-1 to the current frame (i.e., the pose transformation obtained from GPS information). This pose transformation matrix is then input into the factor graph together with the second and third pose estimation values, yielding a more accurate back-end-optimized pose.

Step 6: Determine whether the current laser point cloud frame can serve as a keyframe. If it can, splice the de-distorted current laser point cloud frame into the historical point cloud map according to the back-end-optimized pose estimation value of the current frame, remove dynamic objects, and generate a static point cloud map; if it cannot, repeat the above steps until map generation stops.

The method for judging whether the current frame is a keyframe: if the distance between the current frame and keyframe K-1 is greater than the keyframe threshold (e.g., one meter), and the number of feature points in the current frame is greater than a set feature point threshold, the current frame is judged to be a keyframe. The distance is computed from the difference between the three-dimensional position in the back-end-optimized pose of the current frame and the optimized three-dimensional position of keyframe K-1.
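The keyframe test can be sketched as follows; the 1 m distance threshold follows the example above, while the feature-count threshold of 100 is an assumption:

```python
import math

def is_keyframe(cur_pos, prev_kf_pos, n_features,
                dist_threshold=1.0, feature_threshold=100):
    """cur_pos / prev_kf_pos: optimized 3-D positions of the current frame
    and the previous keyframe; n_features: feature points in the frame."""
    dist = math.dist(cur_pos, prev_kf_pos)
    return dist > dist_threshold and n_features > feature_threshold
```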

By removing dynamic objects, such as vehicles and pedestrians, a static high-precision point cloud map is obtained. Dynamic objects are removed using existing methods.

The present invention estimates, from the IMU data between the keyframe temporally closest to the current laser point cloud frame and the current laser point cloud frame together with the back-end-optimized pose of the previous laser point cloud frame, the first pose estimation value of the current laser point cloud frame. According to the first pose estimation value and the IMU data corresponding to the time during which the radar scanned the current laser point cloud, the current laser point cloud is de-distorted to obtain the de-distorted current laser point cloud frame, from which feature points are extracted. Because the back-end-optimized pose is used to estimate the first pose, the estimated first pose value is more accurate, and so is the de-distorted current laser point cloud frame.

The current descriptor is obtained from the de-distorted current laser point cloud frame, and whether a loop closure is detected is judged from the current descriptor. If a loop closure is detected, the second pose estimation value of the current laser point cloud frame is estimated; otherwise, no loop closure is detected and the second pose estimation value is set to null. The present invention constructs a descriptor to detect loop closures rather than detecting them from all points in the frame, which reduces the false detection rate, improves loop closure detection accuracy, and offers good robustness, thereby improving the accuracy of the point cloud map.

The third pose estimation value is then obtained from the first pose estimation value and the feature points; the second and third pose estimation values are input into the factor graph to obtain the back-end-optimized pose of the current laser point cloud frame. Whether the current laser point cloud frame can serve as a keyframe is then determined; if so, the de-distorted current laser point cloud is spliced into the historical point cloud map according to its back-end-optimized pose, and a static point cloud map is generated after removing dynamic objects.

The present invention receives not only IMU data but also the back-end-optimized pose, and uses both to perform the first pose estimation of the current laser point cloud, making the first pose estimation value more accurate. The motion-distorted current laser point cloud is then corrected and feature points are extracted, so the corrected pose and the extracted feature points are more precise. Through the descriptor, mismatches are reduced and pose estimation accuracy is improved, which in turn improves the accuracy of the generated map, with very little computation.

As shown in Figure 2, an embodiment of the present invention further provides a high-precision map generation apparatus, comprising:

an acquisition module, configured to acquire current laser point cloud frame data and IMU data;

an IMU preprocessing module, configured to, if the current laser point cloud frame data is the first frame collected by the LiDAR, take it as the first keyframe, send it to the map generation module, and return to the acquisition module; otherwise, estimate the first pose estimation value of the current laser point cloud frame from the IMU data between the most recent keyframe and the current laser point cloud frame and the back-end-optimized pose of the previous laser point cloud frame fed back by the back-end optimization module;

a front-end odometry module, configured to de-distort the current laser point cloud according to the first pose estimation value and the IMU data corresponding to the time during which the current laser point cloud was scanned, obtain the de-distorted current laser point cloud frame, and extract feature points from it;

a loop closure detection module, configured to obtain the current descriptor from the de-distorted current laser point cloud frame and judge from it whether a loop closure is detected; if a loop closure is detected, estimate the second pose estimation value of the current laser point cloud frame; otherwise, no loop closure is detected and the second pose estimation value is set to null;

a back-end optimization module, configured to obtain the third pose estimation value from the first pose estimation value and the feature points, and input the second and third pose estimation values into the factor graph to obtain the back-end-optimized pose estimation value of the current frame;

a map generation module, configured to, if the current frame is the first frame, generate a point cloud map from the first keyframe; otherwise, judge whether the current frame can serve as a keyframe; if it can, splice the de-distorted current laser point cloud frame into the historical point cloud map according to the back-end-optimized pose estimation value of the current frame, and generate a static point cloud map after removing dynamic objects; if it cannot, return to the acquisition module until map generation stops.

The specific method steps used in each module are the same as those of the map generation method described above and are not repeated here.

In the high-precision map generation apparatus provided by the embodiment of the present invention, the IMU preprocessing module estimates the first pose estimation value of the current laser point cloud frame from the IMU data between the keyframe temporally closest to the current laser point cloud frame and the current laser point cloud frame, together with the back-end-optimized pose of the previous laser point cloud frame. The front-end odometry module de-distorts the current laser point cloud according to the first pose estimation value and the IMU data corresponding to the radar scanning time, obtains the de-distorted current laser point cloud frame, and extracts feature points from it; because the back-end-optimized pose is used to estimate the first pose, the estimated first pose value is more accurate, as is the de-distorted current laser point cloud frame. The loop closure detection module obtains the current descriptor from the de-distorted current laser point cloud frame and judges from it whether a loop closure is detected; if so, it estimates the second pose estimation value of the current laser point cloud frame; otherwise, no loop closure is detected and the second pose estimation value is set to null. The present invention constructs a descriptor to detect loop closures rather than detecting them from all points in the frame, reducing the false detection rate, improving loop closure detection accuracy, and providing good robustness, thereby improving the accuracy of the point cloud map. The back-end optimization module obtains the third pose estimation value from the first pose estimation value and the feature points, and inputs the second and third pose estimation values into the factor graph to obtain the back-end-optimized pose of the current laser point cloud frame. The map generation module judges whether the current laser point cloud frame can serve as a keyframe; if it can, the de-distorted current laser point cloud is spliced into the historical point cloud map according to its back-end-optimized pose, and a static point cloud map is generated after removing dynamic objects.

The present invention receives not only IMU data but also the back-end-optimized pose, and uses both to perform the first pose estimation of the current laser point cloud, making the first pose estimation value more accurate. The motion-distorted current laser point cloud is then corrected and feature points are extracted, so the corrected pose and the extracted feature points are more precise. Through the descriptor, mismatches are reduced and pose estimation accuracy is improved, which in turn improves the accuracy of the generated map, with very little computation.

An embodiment of the present invention provides a map generation device, comprising a memory, a processor, and a computer program stored in the memory; the processor runs the computer program to execute the high-precision map generation method described in any of the above embodiments.

An embodiment of the present invention provides a readable storage medium storing a computer program which, when executed by a processor, implements the high-precision map generation method described in any of the above embodiments.

The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principle of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (14)

1. A high-precision map generation method is characterized by comprising the following steps:
acquiring current laser point cloud frame data and IMU data;
if the current laser point cloud frame data is first frame data acquired by a laser radar, taking the current laser point cloud frame data as a first key frame, generating a point cloud map through the first key frame, and executing the steps again; otherwise, estimating to obtain a first pose estimation value of the current laser point cloud frame according to IMU data between the latest key frame and the current laser point cloud frame and the pose of the last laser point cloud frame after back-end optimization;
according to the first pose estimation value and IMU data corresponding to the time for scanning the current laser point cloud, carrying out distortion removal on the current laser point cloud to obtain a current laser point cloud frame after distortion removal, and extracting feature points from the current laser point cloud frame;
obtaining a current descriptor according to the current laser point cloud frame after distortion removal, judging whether a loop is detected according to the current descriptor, and if a loop is detected, estimating a second pose estimation value of the current laser point cloud frame; otherwise, if no loop is detected, setting the second pose estimation value to null;
obtaining a third pose estimation value according to the first pose estimation value and the feature points; inputting the second pose estimation value and the third pose estimation value into a factor graph to obtain a back-end-optimized pose estimation value of the current frame;
judging whether the current laser point cloud frame can be used as a key frame; if so, splicing the undistorted current laser point cloud frame into a historical point cloud map according to the back-end-optimized pose estimation value of the current frame, removing dynamic objects, and generating a static point cloud map; and if not, re-executing the steps until the map generation is stopped.
2. The method as claimed in claim 1, wherein obtaining the current descriptor from the undistorted current laser point cloud frame comprises:
dividing points in the current laser point cloud frame after distortion removal into a plurality of areas according to the radar scanning angle and depth;
the maximum value of the emission intensity of the laser points in each area is saved into an array, which is taken as the current descriptor.
3. The method as claimed in claim 1, wherein the estimating of the second pose estimation value of the current laser point cloud frame includes:
respectively comparing the current descriptor with the descriptor corresponding to each key frame in the historical point cloud map to obtain the similarity corresponding to each key frame;
and if the similarity is greater than a loop threshold, judging that a loop is detected, selecting, from the corresponding key frame, points whose distance to the points in the undistorted current laser point cloud frame is less than a set distance threshold to form matching point pairs, and estimating the second pose estimation value of the current laser point cloud frame according to a matching algorithm.
4. A high accuracy map generation method according to claim 1, further comprising, before detecting the loop, preliminarily detecting the loop, including:
and projecting the undistorted current laser point cloud frame to a coordinate system of a historical point cloud map through the first pose estimation value, judging whether the coincidence degree of the projected current laser point cloud and the historical point cloud map is greater than a coincidence degree threshold value, if so, indicating that loop is preliminarily detected, otherwise, judging that no loop is detected.
5. The method as claimed in claim 1, wherein the step of performing the distortion removal on the current laser point cloud to obtain a current laser point cloud frame after the distortion removal comprises:
calculating the time of each point in the current laser point cloud frame scanned by the laser radar;
calculating the relative rotation amount of each point in the laser radar according to the corresponding IMU data in the current laser point cloud scanning time and the scanning time of each point;
calculating the relative displacement of each point according to the pose change from the nearest key frame to the current laser point cloud frame;
and according to the relative rotation amount and the relative displacement amount, carrying out distortion removal on points in the current laser point cloud frame to obtain the current laser point cloud frame after distortion removal.
6. The method as claimed in claim 1, wherein extracting feature points from the undistorted current laser point cloud frame comprises:
and calculating the curvature of each point of the current laser point cloud frame after distortion removal, taking points whose curvature exceeds a preset corner threshold as corner feature points, and taking points whose curvature is smaller than a preset surface threshold as surface feature points.
7. A high-precision map generation method according to claim 1, wherein the obtaining a third pose estimation value according to the first pose estimation value and the feature point comprises:
splicing a plurality of key frames closest to the current laser point cloud frame into a local map;
and converting the feature points into the local map coordinate system through the first pose estimation value, calculating point-line residuals of the corner feature points and point-plane residuals of the surface feature points, and inputting the point-line residuals and the point-plane residuals into a factor graph to obtain a third pose estimation value of the current laser point cloud frame.
8. A high accuracy map generating method according to claim 7,
for each corner feature point, searching a plurality of points which are closest to the corner feature point in the local map to form a line segment, and calculating the minimum distance from the corner feature point to the line segment as the point-line residual;
and for each surface feature point, searching a plurality of points which are closest to the surface feature point in the local map to construct a plane, and calculating the minimum distance from the surface feature point to the plane as the point-plane residual.
9. A high-precision map generation method according to claim 1, further comprising, after obtaining the third pose estimation value:
and performing a speed check on the third pose estimation value; if the ratio of the motion speed from the latest key frame to the current frame to the motion speed from the second-latest key frame to the latest key frame is greater than a preset speed ratio, judging that the current frame exhibits degradation, and recalculating the third pose estimation value according to a constant velocity.
10. The method as claimed in claim 1, wherein the determining whether the current laser point cloud frame can be used as a key frame comprises:
if the distance between the current laser point cloud frame and the nearest key frame is larger than the key frame threshold value, judging that the current laser point cloud frame is a key frame;
or, if the distance between the current laser point cloud frame and the nearest key frame is greater than the key frame threshold value, and the number of the feature points in the current frame is greater than the set feature point threshold value, determining that the current laser point cloud frame is the key frame.
11. A high-precision map generation method according to claim 1, further comprising:
and acquiring current longitude-latitude-altitude data, calculating a pose transformation matrix from the latest key frame to the current frame by combining the current longitude-latitude-altitude data with the longitude-latitude-altitude data of the latest key frame, and inputting the pose transformation matrix, the second pose estimation value, and the third pose estimation value into a factor graph to obtain the back-end-optimized pose estimation value of the current frame.
12. A high-precision map generation apparatus, comprising:
the acquisition module is used for acquiring current laser point cloud frame data and IMU data;
the IMU preprocessing module is configured to, if the current laser point cloud frame data is the first frame of data acquired by the laser radar, take the current laser point cloud frame as a first key frame, send the first key frame to the map generation module, and return to the acquisition module; otherwise, estimate a first pose estimation value of the current laser point cloud frame according to IMU data between the latest key frame and the current laser point cloud frame and the back-end-optimized pose of the last laser point cloud frame fed back by the back-end optimization module;
the front-end odometry module is configured to remove distortion from the current laser point cloud according to the first pose estimation value and the IMU data corresponding to the time of scanning the current laser point cloud, obtain the undistorted current laser point cloud frame, and extract feature points from it;
the loop detection module is configured to obtain a current descriptor from the undistorted current laser point cloud frame, judge from the current descriptor whether a loop is detected, and, if a loop is detected, estimate a second pose estimation value of the current laser point cloud frame; otherwise, if no loop is detected, set the second pose estimation value to null;
the back-end optimization module is configured to obtain a third pose estimation value according to the first pose estimation value and the feature points, and input the second pose estimation value and the third pose estimation value into a factor graph to obtain the back-end-optimized pose estimation value of the current frame;
the map generation module is configured to generate a point cloud map from the first key frame if the current frame is the first frame; otherwise, judge whether the current frame can serve as a key frame, and if so, splice the undistorted current laser point cloud frame into a historical point cloud map according to the back-end-optimized pose estimation value of the current frame and generate a static point cloud map after removing dynamic objects; if not, return to the acquisition module until map generation stops.
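The data flow between these modules can be sketched as a single per-frame loop. Every function below is a hypothetical toy stand-in for the corresponding module (poses are reduced to 3-vector translations), not the patented implementation:

```python
import numpy as np

# Toy stand-ins for the claimed modules.
def imu_predict(imu_delta, last_pose):           # IMU preprocessing module
    return last_pose + imu_delta
def undistort_and_extract(frame, pose):          # front-end odometry module
    return frame, frame[: max(1, len(frame) // 2)]
def loop_detect(cloud, key_frames):              # loop detection module
    return None                                  # no loop in this toy run
def scan_match(pose, features):                  # back-end scan matching
    return pose
def factor_graph_optimize(pose2, pose3):         # factor-graph fusion
    return pose3 if pose2 is None else 0.5 * (pose2 + pose3)

def process_frame(frame, imu_delta, state, kf_dist=1.0):
    """One pass of the mapping pipeline; returns the optimized pose."""
    if not state["key_frames"]:                  # first frame bootstraps the map
        state["key_frames"].append((np.zeros(3), frame))
        state["map"].append(frame)
        state["last_pose"] = np.zeros(3)
        return state["last_pose"]
    pose1 = imu_predict(imu_delta, state["last_pose"])
    cloud, feats = undistort_and_extract(frame, pose1)
    pose2 = loop_detect(cloud, state["key_frames"])
    pose3 = scan_match(pose1, feats)
    pose = factor_graph_optimize(pose2, pose3)
    latest_kf_pose, _ = state["key_frames"][-1]
    if np.linalg.norm(pose - latest_kf_pose) > kf_dist:   # key-frame test
        state["key_frames"].append((pose, cloud))
        state["map"].append(cloud)               # splice into historical map
    state["last_pose"] = pose
    return pose
```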
13. A map generation device, comprising: a memory, a processor, and a computer program stored in the memory, wherein the processor runs the computer program to perform the high-precision map generation method according to any one of claims 1 to 11.
14. A readable storage medium, wherein a computer program is stored in the readable storage medium, and the computer program, when executed by a processor, implements the high-precision map generation method according to any one of claims 1 to 11.
CN202211566846.2A 2022-12-07 2022-12-07 A high-precision map generation method, device, equipment and storage medium Active CN115773747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211566846.2A CN115773747B (en) 2022-12-07 2022-12-07 A high-precision map generation method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115773747A true CN115773747A (en) 2023-03-10
CN115773747B CN115773747B (en) 2025-05-30

Family

ID=85391690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211566846.2A Active CN115773747B (en) 2022-12-07 2022-12-07 A high-precision map generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115773747B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036477A (en) * 2023-08-14 2023-11-10 松灵机器人(深圳)有限公司 Map expansion methods and related equipment
CN117974926A (en) * 2024-03-28 2024-05-03 苏州艾吉威机器人有限公司 A positioning and mapping method and system based on mechanical rotating three-dimensional laser
CN119714257A (en) * 2024-12-10 2025-03-28 广东汇天航空航天科技有限公司 Repositioning method, repositioning device, repositioning apparatus, repositioning medium and repositioning product
WO2025123422A1 (en) * 2023-12-14 2025-06-19 安徽蔚来智驾科技有限公司 Pose estimation method, readable storage medium, and intelligent device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064506A (en) * 2018-07-04 2018-12-21 百度在线网络技术(北京)有限公司 Accurately drawing generating method, device and storage medium
CN110009739A (en) * 2019-01-29 2019-07-12 浙江省北大信息技术高等研究院 Extraction and Coding Method of Motion Features of Digital Retina of Mobile Cameras
CN110827395A (en) * 2019-09-09 2020-02-21 广东工业大学 A real-time positioning and map construction method suitable for dynamic environment
CN110910389A (en) * 2019-10-30 2020-03-24 中山大学 Laser SLAM loop detection system and method based on graph descriptor
US20200109954A1 (en) * 2017-06-30 2020-04-09 SZ DJI Technology Co., Ltd. Map generation systems and methods
CN114088104A (en) * 2021-07-23 2022-02-25 武汉理工大学 Map generation method under automatic driving scene
CN114187418A (en) * 2021-12-14 2022-03-15 北京易航远智科技有限公司 Loop detection method, point cloud map construction method, electronic device and storage medium
CN115265523A (en) * 2022-09-27 2022-11-01 泉州装备制造研究所 Robot simultaneous positioning and mapping method, device and readable medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHENG Guoxian et al.: "Method for autonomous exploration and map construction of robots in indoor environments", Control Engineering of China, 31 October 2020 (2020-10-31), pages 1743 - 1750 *


Also Published As

Publication number Publication date
CN115773747B (en) 2025-05-30

Similar Documents

Publication Publication Date Title
CN115773747B (en) A high-precision map generation method, device, equipment and storage medium
EP3886048B1 (en) Slam map joining method and system
CN110389348B (en) Positioning and Navigation Method and Device Based on LiDAR and Binocular Camera
CN115131420B (en) Visual SLAM method and device based on keyframe optimization
CN111830953B (en) Vehicle self-positioning method, device and system
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
CN113838129B (en) Method, device and system for obtaining pose information
CN105674993A (en) Binocular camera-based high-precision visual sense positioning map generation system and method
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN116608873A (en) Multi-sensor fusion positioning mapping method for automatic driving vehicle
CN111768489B (en) An indoor navigation map construction method and system
CN110599545B (en) Feature-based dense map construction system
US12236661B2 (en) Method for generating 3D reference points in a map of a scene
JP2020153956A (en) Moving body position estimation system and moving body position estimation method
CN116242374B (en) A SLAM positioning method based on multi-sensor fusion of direct method
CN112419497A (en) Monocular vision-based SLAM method combining feature method and direct method
CN114565674A (en) Pure visual positioning method and device for urban structured scene of automatic driving vehicle
CN115638787B (en) A digital map generation method, computer readable storage medium and electronic equipment
WO2020135325A1 (en) Mobile device positioning method, device and system, and mobile device
CN115077563A (en) Vehicle positioning accuracy evaluation method and device and electronic equipment
CN117289298A (en) Multi-machine collaborative online mapping method, system and terminal equipment based on lidar
CN115761164A (en) Method and device for generating inverse perspective IPM image
CN115494845A (en) Navigation method, device, unmanned vehicle and storage medium based on depth camera
CN115908539A (en) A target volume automatic measurement method and device, storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: No. 9 Mozhou East Road, Nanjing City, Jiangsu Province, 211111

Applicant after: Zijinshan Laboratory

Address before: No. 9 Mozhou East Road, Jiangning Economic Development Zone, Jiangning District, Nanjing City, Jiangsu Province

Applicant before: Purple Mountain Laboratories

Country or region before: China

GR01 Patent grant