CN117761704B - Method and system for estimating relative positions of multiple robots - Google Patents
- Publication number
- CN117761704B CN117761704B CN202311668765.8A CN202311668765A CN117761704B CN 117761704 B CN117761704 B CN 117761704B CN 202311668765 A CN202311668765 A CN 202311668765A CN 117761704 B CN117761704 B CN 117761704B
- Authority
- CN
- China
- Prior art keywords
- robot
- point cloud
- point
- cluster
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 37
- 238000005457 optimization Methods 0.000 claims abstract description 28
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 24
- 239000011159 matrix material Substances 0.000 claims abstract description 23
- 230000009466 transformation Effects 0.000 claims abstract description 13
- 238000012897 Levenberg–Marquardt algorithm Methods 0.000 claims abstract description 6
- 238000001914 filtration Methods 0.000 claims description 10
- 238000004891 communication Methods 0.000 claims description 8
- 230000007246 mechanism Effects 0.000 claims description 4
- 238000007781 pre-processing Methods 0.000 claims description 4
- 230000008569 process Effects 0.000 claims description 4
- 239000013598 vector Substances 0.000 claims description 4
- 238000012545 processing Methods 0.000 claims description 2
- 230000011218 segmentation Effects 0.000 claims description 2
- 238000011478 gradient descent method Methods 0.000 claims 1
- 238000005516 engineering process Methods 0.000 abstract description 7
- 230000001149 cognitive effect Effects 0.000 abstract description 2
- 238000002474 experimental method Methods 0.000 description 5
- 230000000694 effects Effects 0.000 description 3
- 230000007774 longterm Effects 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 230000001186 cumulative effect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
A method and system for estimating the relative positions of multiple robots. Each robot is configured to use a lidar to collect the point cloud around its sensor over a certain period of time; this information is converted into relative distance and angle information in the form of a homogeneous transformation matrix, while a clustering operation is performed on the point cloud to generate a corresponding identifier for each cluster. The relative distance and angle information of each robot is then matched by a pairing optimization algorithm: an optimization model is constructed and solved with the Levenberg-Marquardt nonlinear least-squares algorithm to obtain the correspondence between the cluster identifiers and the robots. By using reliable point cloud clustering recognition and nonlinear optimization, the invention enables a multi-robot cluster to achieve efficient task collaboration and more accurate, robust collaborative localization, improving the overall cognitive level and team efficiency of the robot group.
Description
Technical Field
The present invention relates to a technology in the field of industrial automation, and in particular to a method and system for estimating the relative positions of multiple robots.
Background Art
The localization and coordination of multiple robots is a complex and difficult industrial problem. Multi-robot collaborative localization means that a group of robots jointly determine their positions relative to a global coordinate frame in order to carry out collaborative tasks. Past approaches can be summarized as centralized localization, global localization, and collective control. However, existing centralized localization accumulates errors, so the cumulative error keeps growing and degrades the localization result; global localization requires every robot to use exactly the same algorithm and configuration, making overall consistency hard to guarantee; and collective control fixes all parameter settings, so it lacks portability and maintainability.
Summary of the Invention
In view of the above problems in the prior art, the present invention proposes a method and system for estimating the relative positions of multiple robots. Using reliable point cloud clustering recognition and nonlinear optimization, it enables a multi-robot cluster to achieve efficient task collaboration and more accurate, robust collaborative localization, thereby improving the overall cognitive level and team efficiency of the multi-robot group.
The present invention is achieved through the following technical solutions:
The present invention relates to a method for estimating the relative positions of multiple robots. Each robot is configured to collect, with a lidar, the point cloud around its sensor over a certain period of time; this information is converted into relative distance and angle information in the form of a homogeneous transformation matrix while a clustering operation is performed on the point cloud to generate a corresponding identifier for each cluster. The relative distance and angle information of each robot is then matched by a pairing optimization algorithm: an optimization model is constructed and solved with the Levenberg-Marquardt nonlinear least-squares algorithm to obtain the correspondence between the cluster identifiers and the robots.
The present invention also relates to a system implementing the above method, comprising a point cloud preprocessing module, a point cloud clustering module, a message communication module, and a pairing algorithm module, wherein: the point cloud preprocessing module performs ground filtering on the point cloud information obtained from the lidar sensor to obtain a ground-filtered point cloud, and then screens the scene of the ground-filtered point cloud, removing some discrete points to obtain more concentrated and accurate target point cloud information; the point cloud clustering module performs clustering on the target point cloud information to obtain the distance and angle of each cluster relative to the robot's own coordinate frame; the message communication module, according to the information in each robot's message queue, promptly publishes the robot's own clustering results over the local area network and receives the clustering results published by the other robots; the pairing algorithm module constructs a nonlinear optimization function from the clustering results and completes the pairing between robots by minimizing the distance and angle discrepancies between different robots, so that each robot obtains its relative position with respect to the other robots.
Technical Effects
Through pairing based on the nonlinear optimization function, the present invention allows a robot to determine its own position in a completely unknown environment, and multi-robot interaction then helps the other robots determine their positions relative to one another. Compared with other collaborative localization algorithms, this approach greatly reduces the dependence on environmental information and on sensors. In a practical scenario with multiple robots in a complex environment, as long as any one robot has established its own global position, the global positions of all other robots can be obtained through the relative position estimation algorithm of the present invention, enabling more complex tasks to be executed.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a flow chart of the present invention;
Fig. 2 is a schematic diagram of the results of the embodiment.
DETAILED DESCRIPTION
As shown in Fig. 1, this embodiment relates to a method for estimating the relative positions of multiple robots, which specifically includes:
Step 1) Each robot is configured to obtain the point cloud information acquired by its lidar and to perform ground filtering and scene segmentation on it, specifically including:
1.1 Removing isolated points: traverse the raw point cloud and delete a fixed proportion of the points with the highest and lowest z-axis (height) values. At the same time, delete points that are far from the rest of the point cloud via a nearest-neighbour search, completing the removal of isolated points.
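As an illustration only (not code from the patent), the following is a minimal sketch of the isolated-point removal in step 1.1, assuming the raw scan is an (N, 3) NumPy array of x, y, z coordinates; the trim fraction, neighbour count, and distance threshold are hypothetical parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_isolated_points(points, z_trim=0.02, k=8, dist_thresh=0.5):
    """Drop a fixed fraction of the highest/lowest-z points, then drop points
    whose mean distance to their k nearest neighbours is too large."""
    z = points[:, 2]
    lo, hi = np.quantile(z, [z_trim, 1.0 - z_trim])
    kept = points[(z >= lo) & (z <= hi)]

    # query k+1 neighbours because the closest hit of every point is the point itself
    dists, _ = cKDTree(kept).query(kept, k=k + 1)
    mean_nn = dists[:, 1:].mean(axis=1)
    return kept[mean_nn < dist_thresh]
```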
1.2 Ground filtering: using the PTD (progressive TIN densification) algorithm, define a triangle bounding box with coordinates X_top, Y_top, X_bottom, Y_bottom, and divide the entire point cloud into cells according to this box, with cell width w and height h.
1.3 Start the PTD traversal from the lowest point as the seed point: evaluate the candidate points for each triangle. Since candidate points can only lie on the edges and vertices of a triangle, a candidate whose slope inclination S exceeds the threshold cannot lie on that facet, so the point outside the triangle is excluded from the ground points and the next point is examined. The inclination is computed from the vectors PA and AB, where A and B lie on one side of the triangle bounding box, P is a point in the point cloud that does not belong to the triangle bounding box, and ||·|| denotes the 2-norm of a vector; the angle between point P and the plane formed by the triangle bounding box is taken as the triangle's inclination angle θ_PA.
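The slope-angle expression itself is not reproduced above, so the sketch below only illustrates one plausible reading of step 1.3, assuming θ_PA is the angle between the vector from triangle vertex A to the candidate point P and the plane of the triangle; the angle threshold is a hypothetical parameter.

```python
import numpy as np

def inclination_angle(P, A, B, C):
    """Angle (radians) between the vector A->P and the plane of triangle ABC
    (assumed interpretation of theta_PA in step 1.3)."""
    n = np.cross(B - A, C - A)                  # normal of the triangle's plane
    n = n / np.linalg.norm(n)
    PA = P - A
    sin_theta = abs(np.dot(PA, n)) / np.linalg.norm(PA)
    return np.arcsin(np.clip(sin_theta, 0.0, 1.0))

def exclude_from_ground(P, A, B, C, max_angle_deg=15.0):
    """A candidate whose inclination exceeds the threshold is excluded as a ground point."""
    return np.degrees(inclination_angle(P, A, B, C)) > max_angle_deg
```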
Step 2) Each robot is configured to perform a clustering operation on the obtained point cloud and to assign each cluster a unique identifier representing another perceived robot, specifically including:
2.1) Euclidean distance clustering is used, with the distance between point cloud clusters set as the external threshold and the internal distance within a cluster set as the internal threshold;
2.2) select any point P of the point cloud and traverse the other points, taking those whose Euclidean distance to P is smaller than the internal threshold as in-cluster points of P; after the traversal, mark these points with the cluster centre P and the cluster's unique ID;
2.3) repeat the above process with a point outside the cluster as the new centre, and finally traverse the resulting cluster centres;
2.4) whenever a cluster centre has another cluster centre within the external threshold (by Euclidean distance), merge the two clusters and repeat the search for in-cluster points, until no new cluster is found; suitable clusters are then filtered by distance and by the number of points in the cluster, and the coordinates of each cluster centre and its distance to the robot are output to the next module (see the sketch after this list).
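A compact sketch of the Euclidean clustering in steps 2.1–2.4 follows, shown only to make the procedure concrete; the thresholds and minimum cluster size are hypothetical, and a production system might equally use an existing library routine such as PCL's Euclidean cluster extraction.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, internal_thresh=0.3, external_thresh=1.0, min_points=10):
    """Grow clusters of mutually close points (2.2-2.3), merge clusters whose centres
    are closer than the external threshold (2.4), and report centre + range."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    next_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_id
        frontier = [seed]
        while frontier:                         # region growing around the seed
            idx = frontier.pop()
            for j in tree.query_ball_point(points[idx], internal_thresh):
                if labels[j] == -1:
                    labels[j] = next_id
                    frontier.append(j)
        next_id += 1

    # merge clusters whose centres are within the external threshold
    centres = {cid: points[labels == cid].mean(axis=0) for cid in range(next_id)}
    merged = True
    while merged:
        merged = False
        for a in sorted(centres):
            for b in sorted(centres):
                if a < b and a in centres and b in centres and \
                        np.linalg.norm(centres[a] - centres[b]) < external_thresh:
                    labels[labels == b] = a
                    centres[a] = points[labels == a].mean(axis=0)
                    del centres[b]
                    merged = True

    # filter by cluster size and output centre coordinates and range to the sensor origin
    return [{"id": int(cid), "centre": c, "range": float(np.linalg.norm(c))}
            for cid, c in centres.items() if (labels == cid).sum() >= min_points]
```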
Step 3) Each robot is configured to compute the relative distance and angle between itself and the other robot bodies, and each robot uses a message queue communication mechanism to exchange the observed cluster identifiers and the corresponding relative distance and angle information with the other robots.
The relative distance and angle information is represented by a homogeneous transformation matrix.
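As a small illustrative helper (not taken from the patent), the following shows how a cluster's range and bearing could be packed into a 4×4 homogeneous transformation matrix under a planar, yaw-only assumption introduced here for simplicity.

```python
import numpy as np

def homogeneous_from_range_bearing(r, bearing, yaw=0.0):
    """4x4 homogeneous transform: translate by (r*cos(bearing), r*sin(bearing), 0)
    and rotate by yaw about the z axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    H = np.eye(4)
    H[:3, :3] = [[c, -s, 0.0],
                 [s,  c, 0.0],
                 [0.0, 0.0, 1.0]]
    H[:3, 3] = [r * np.cos(bearing), r * np.sin(bearing), 0.0]
    return H
```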
The message queue communication mechanism specifically includes:
i) Publishing to the message queue: the robot packages the data it has collected and publishes it to the ROS local area network at a frequency of 1 Hz.
ii) Subscribing to the message queue: the robot opens its own LAN port, and the program listens over the LAN for data published by other robots while maintaining a queue data structure for storage; every 10 s the integrity of the data packets is checked, and complete data are fed into the pairing optimization algorithm (a publish/subscribe sketch follows below).
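A minimal publish/subscribe sketch of the exchange in (i)–(ii), assuming ROS 1 with rospy; the topic name, JSON string packaging, and callback structure are illustrative assumptions rather than details given in the patent.

```python
import json
import rospy
from std_msgs.msg import String

def run_exchange(robot_id, get_local_clusters, peer_queue):
    """Publish this robot's cluster observations at 1 Hz and queue messages from peers."""
    rospy.init_node("relpos_%s" % robot_id)
    pub = rospy.Publisher("/cluster_observations", String, queue_size=10)

    def on_peer_msg(msg):
        data = json.loads(msg.data)
        if data.get("robot_id") != robot_id:      # ignore our own messages
            peer_queue.append(data)

    rospy.Subscriber("/cluster_observations", String, on_peer_msg)

    rate = rospy.Rate(1.0)                        # 1 Hz, as in step (i)
    while not rospy.is_shutdown():
        payload = {"robot_id": robot_id, "clusters": get_local_clusters()}
        pub.publish(String(data=json.dumps(payload)))
        rate.sleep()
```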
Step 4) Each robot is configured to establish constraints from the duality of the relative distance and angle information according to its own pairing optimization algorithm and to solve them, specifically including:
4.1) Constructing the optimization model: an optimization objective is built from the Euclidean distance: minimize ||H_BX · H_XA − H_AB||², where ||·|| is the Euclidean norm, measuring the difference between two matrices; H_AB is the homogeneous transformation matrix from robot A's viewpoint, H_BX is the homogeneous transformation matrix of robot X as observed from robot B's viewpoint, and the homogeneous transformation matrix H_XA of robot X in A's viewpoint is the variable.
4.2) The objective of step 4.1 is solved with the Levenberg-Marquardt algorithm for nonlinear least squares; the optimal inverse transformation matrix H_XA indicates that X is robot A from this robot's viewpoint. Specifically, the optimization problem is written as minimize J(H_XA) = ||H_BX · H_XA − H_AB||², where J(H_XA) is the objective function; H_XA is first initialized, then the gradient and Hessian matrix of the objective function with respect to H_XA are computed, and the Levenberg-Marquardt rule is used to update and adjust the estimate until the objective function meets the requirements.
The Levenberg-Marquardt rule is specifically: H_XA^(new) = H_XA^(old) − (H(J(H_XA)) + λI)⁻¹ · Grad(J(H_XA)), where: H_XA^(new) is the updated estimate of the H_XA matrix; H_XA^(old) is the estimate of the H_XA matrix from the previous iteration; H(·) is the Hessian matrix of second derivatives of the objective function J(H_XA) with respect to H_XA, describing the curvature of the objective function; Grad(·) is the gradient of the objective function J(H_XA) with respect to H_XA, i.e. the first derivative of J with respect to H_XA; λ is a regularization parameter, i.e. the Levenberg-Marquardt parameter; and I is the identity matrix.
When λ is small, the algorithm behaves more like Newton's method; when λ is large, it behaves more like gradient descent.
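The following is a minimal sketch of how the objective from step 4.1 could be solved with SciPy's Levenberg-Marquardt solver; parameterising H_XA by a planar pose (x, y, yaw) is an assumption added here to keep the search over valid rigid transforms and is not the patent's own parameterisation.

```python
import numpy as np
from scipy.optimize import least_squares

def pose_to_H(p):
    """(x, y, yaw) -> 4x4 planar homogeneous transformation matrix."""
    x, y, yaw = p
    c, s = np.cos(yaw), np.sin(yaw)
    H = np.eye(4)
    H[:2, :2] = [[c, -s], [s, c]]
    H[:2, 3] = [x, y]
    return H

def solve_H_XA(H_BX, H_AB, p0=(0.0, 0.0, 0.0)):
    """Minimise ||H_BX @ H_XA - H_AB||^2 over planar poses with Levenberg-Marquardt."""
    def residual(p):
        return (H_BX @ pose_to_H(p) - H_AB).ravel()
    sol = least_squares(residual, p0, method="lm")   # LM damping handled internally
    return pose_to_H(sol.x), sol.cost
```

In practice each robot would run such a solve once for every candidate cluster reported by a peer and keep the pairing with the smallest residual cost.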
Step 5) Each robot is configured to publish to the local area network the obtained pairing between cluster identifiers and robot IDs, announcing the relative position and angle information between itself and the other robots for subsequent nodes to use in optimization and scheduling.
The pairing information means, for example: robot A receives the cluster position information H_BX sent by robot B, and through the pairing algorithm robot A learns that one of its own point cloud clusters, H_AX, represents robot B; A then publishes the ID of its point cloud cluster paired with robot B together with the ID of the cluster, among those sent by robot B, that belongs to robot A (a sketch of such a message follows below).
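For example, the pairing announcement of step 5 might be serialised as in the sketch below; the field names are illustrative and not specified by the patent.

```python
import json

def make_pairing_message(self_id, peer_id, local_cluster_id, peer_cluster_id, H_rel):
    """Pairing announcement: which of our clusters represents the peer, which of the
    peer's clusters represents us, and the estimated relative transform."""
    return json.dumps({
        "from": self_id,                        # e.g. "robot_A"
        "peer": peer_id,                        # e.g. "robot_B"
        "local_cluster_id": local_cluster_id,   # our cluster paired with the peer
        "peer_cluster_id": peer_cluster_id,     # the peer's cluster that represents us
        "H_rel": [list(map(float, row)) for row in H_rel],  # 4x4 matrix, row-major
    })
```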
In an actual experiment, a car chassis and a small autonomous robot served as the two robot platforms; both carried lidar and communicated over a local area network, and the test involved robots A and B. In Experiment 1, robot A was taken as the reference and the positions obtained from point cloud clustering, without pairing, were used directly as the observed position of robot B; in Experiment 2, after the pairing optimization following clustering, the distances and bearings that the paired robots observed of each other were averaged to give robot B's position. The positions from Experiments 1 and 2 were each compared with the absolute coordinates converted from robot B's GPS ground-truth position. Averaging the paired distances and bearings from the two viewpoints significantly reduced the positioning drift accumulated over long runs by the autonomous robot, which otherwise relied solely on the car chassis's self-localization. In this experiment the system ran stably until the autonomous robot's battery was depleted.
As shown in Fig. 2, the average error of the present method is 0.0263, whereas that of the traditional method is 0.0270. The traditional method's error shows a clear upward trend as the running time grows, while the error of the algorithm used in the present invention remains comparatively stable. The present method can therefore maintain a stable localization error over long periods of operation.
Those skilled in the art may make local adjustments to the above specific implementation in different ways without departing from the principle and purpose of the present invention. The scope of protection of the present invention is defined by the claims and is not limited by the above specific implementation; every implementation scheme within that scope is bound by the present invention.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311668765.8A CN117761704B (en) | 2023-12-07 | 2023-12-07 | Method and system for estimating relative positions of multiple robots |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117761704A CN117761704A (en) | 2024-03-26 |
CN117761704B true CN117761704B (en) | 2024-08-13 |
Family
ID=90317245
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311668765.8A Active CN117761704B (en) | 2023-12-07 | 2023-12-07 | Method and system for estimating relative positions of multiple robots |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117761704B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109709801A (en) * | 2018-12-11 | 2019-05-03 | 智灵飞(北京)科技有限公司 | A kind of indoor unmanned plane positioning system and method based on laser radar |
CN116184431A (en) * | 2022-12-04 | 2023-05-30 | 之江实验室 | Method, device and system for constructing UAV autonomous navigation and positioning map |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8843236B2 (en) * | 2012-03-15 | 2014-09-23 | GM Global Technology Operations LLC | Method and system for training a robot using human-assisted task demonstration |
JP6415190B2 (en) * | 2014-09-03 | 2018-10-31 | キヤノン株式会社 | ROBOT DEVICE, ROBOT CONTROL PROGRAM, RECORDING MEDIUM, AND ROBOT DEVICE CONTROL METHOD |
DE102018105544A1 (en) * | 2018-03-09 | 2019-09-12 | Jena-Optronik Gmbh | A method for initializing a tracking algorithm, method for training an artificial neural network, computer program product, computer-readable storage medium and data carrier signal for carrying out such methods and apparatus for data processing |
CN111161412B (en) * | 2019-12-06 | 2024-02-09 | 苏州艾吉威机器人有限公司 | Three-dimensional laser mapping method and system |
CN111563442B (en) * | 2020-04-29 | 2023-05-02 | 上海交通大学 | Slam method and system for fusing point cloud and camera image data based on laser radar |
CN113792566B (en) * | 2020-05-25 | 2024-05-17 | 华为技术有限公司 | A laser point cloud processing method and related equipment |
CN112308961B (en) * | 2020-11-05 | 2022-06-10 | 湖南大学 | A Fast and Robust 3D Reconstruction Method for Robots Based on Hierarchical Gaussian Mixture Model |
WO2022197544A1 (en) * | 2021-03-15 | 2022-09-22 | Omron Corporation | Method and apparatus for updating an environment map used by robots for self-localization |
CN112926514A (en) * | 2021-03-26 | 2021-06-08 | 哈尔滨工业大学(威海) | Multi-target detection and tracking method, system, storage medium and application |
CN114004869B (en) * | 2021-09-18 | 2025-04-11 | 浙江工业大学 | A positioning method based on 3D point cloud registration |
CN114419147A (en) * | 2021-11-16 | 2022-04-29 | 新兴际华集团有限公司 | Rescue robot intelligent remote human-computer interaction control method and system |
CN115855062B (en) * | 2022-12-07 | 2025-07-15 | 重庆理工大学 | An autonomous mapping and path planning method for indoor mobile robots |
CN115741720B (en) * | 2022-12-13 | 2025-07-08 | 合肥工业大学 | Zero calibration system and method for robot based on binocular vision technology and LM algorithm |
CN116128966A (en) * | 2023-02-28 | 2023-05-16 | 上海交通大学 | A Semantic Localization Method Based on Environmental Objects |
CN116758153B (en) * | 2023-05-31 | 2025-05-02 | 重庆大学 | Multi-factor graph-based back-end optimization method for accurate pose acquisition of robot |
- 2023-12-07: Application CN202311668765.8A filed in CN; granted as CN117761704B (active)
Also Published As
Publication number | Publication date |
---|---|
CN117761704A (en) | 2024-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107123164B (en) | Three-dimensional reconstruction method and system for keeping sharp features | |
CN110363861B (en) | Laser radar point cloud-based field crop three-dimensional reconstruction method | |
CN113470089B (en) | A method and system for cross-domain co-location and mapping based on 3D point cloud | |
CN110136072A (en) | Point cloud noise removal method, denoising system, computer equipment and storage medium | |
CN113192054A (en) | Method and system for detecting and positioning complex parts based on 2-3D vision fusion | |
CN109410330A (en) | One kind being based on BIM technology unmanned plane modeling method | |
CN113379841B (en) | A laser SLAM method based on phase correlation method and factor graph and readable storage medium thereof | |
CN114018248A (en) | An Odometer Method and Mapping Method Fusion Code Wheel and LiDAR | |
WO2024007485A1 (en) | Aerial-ground multi-vehicle map fusion method based on visual feature | |
WO2023072269A1 (en) | Object tracking | |
CN112348950B (en) | A method for generating topological map nodes based on the distribution characteristics of laser point cloud | |
CN118334259A (en) | Mapping method and system integrating enhanced multi-line laser radar and IMU | |
CN118396875A (en) | A point cloud denoising method, device and application based on Mesh | |
CN117761704B (en) | Method and system for estimating relative positions of multiple robots | |
CN114972640B (en) | A method for cable 3D reconstruction and parameter calculation based on point cloud | |
CN113702941A (en) | Point cloud speed measurement method based on improved ICP (inductively coupled plasma) | |
WO2024213132A1 (en) | Unmanned vehicle localization, and autonomous vehicle | |
JP2024127840A (en) | SLAM method | |
CN118212361A (en) | Three-dimensional modeling method and device for hydropower station, storage medium and electronic device | |
CN117765095A (en) | Unmanned aerial vehicle camera and laser radar calibration method and system based on structural characteristics | |
Vemprala et al. | Collaborative localization for micro aerial vehicles | |
KR20110098252A (en) | How to determine camera pose | |
CN116358548A (en) | Ad hoc network formation navigation method, system and storage medium based on relative positioning | |
Tang et al. | A non-supporting printing algorithm for fused filament fabrication of multi-branch structure | |
CN115409903A (en) | Camera calibration method, device and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||