
CN117949966B - Mobile robot tracking control method based on laser radar and odometer

Info

Publication number: CN117949966B (application CN202410162551.1A)
Authority: CN (China)
Prior art keywords: mobile robot, point cloud data, follower, obstacle
Other versions: CN117949966A (application publication, Chinese)
Inventors: 张婧, 付希超, 盖文东, 张璐, 贺凯迅
Assignee (original and current): Shandong University of Science and Technology
Application CN202410162551.1A filed, with priority claimed, by Shandong University of Science and Technology
Publication of application: CN117949966A
Application granted; publication of grant: CN117949966B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the technical field of mobile robot tracking control, and discloses a mobile robot tracking control method based on a laser radar and an odometer. The follower mobile robot acquires laser radar data and converts it into point cloud data; the follower mobile robot preprocesses the point cloud data; the leader mobile robot's point cloud and obstacle point clouds are distinguished using the principle of relative motion; the pose of the leader mobile robot relative to the follower mobile robot is computed from plane geometric relationships; and the leader's point cloud is continuously tracked using the principle of spatial position continuity together with the leader's historical pose. The invention achieves stable tracking of the leader mobile robot by the follower mobile robot in unknown environments, and improves the safety and continuous tracking performance of mobile robots in complex environments.

Description

A mobile robot tracking control method based on laser radar and odometer

Technical Field

The present invention belongs to the technical field of mobile robot tracking control, and specifically relates to a mobile robot tracking control method based on a laser radar and an odometer, suitable for achieving stable tracking by a mobile robot in environments containing unknown obstacles.

Background Art

With the development of autonomous control technology, mobile robots need to achieve stable tracking control in a variety of complex environments to ensure continuous tracking performance. Traditional mobile robot tracking control methods face difficulties such as repeated obstacle detection, interference from the motion of the robot body, and loss and re-establishment of the tracking target; in addition, some sensors may be affected by environmental noise. It is therefore necessary to propose a mobile robot tracking control method based on a laser radar and an odometer that improves the autonomous control capability of mobile robots in complex environments through effective data processing and the principle of relative motion.

Summary of the Invention

The purpose of the present invention is to propose a mobile robot tracking control method based on a laser radar and an odometer, so as to achieve stable tracking of a leader mobile robot by a follower mobile robot in an environment with unknown obstacles, thereby significantly improving the continuous tracking performance of the mobile robot in complex environments.

To achieve the above object, the present invention adopts the following technical scheme:

A mobile robot tracking control method based on a laser radar and an odometer, applied to a mobile robot tracking control system comprising a follower mobile robot and a leader mobile robot, the method comprising the following steps:

Step 1. Define a data structure for describing an obstacle region, store instances of this data structure in a linked list, name this linked list the historical obstacle region, and define a data structure for describing the historical pose state of the leader mobile robot.

Step 2. Acquire the follower mobile robot's laser radar sensor data to generate raw point cloud data; taking into account laser radar sensor noise, laser radar data sampling density, and motion interference of the follower mobile robot, and fusing the follower's odometer sensor data, preprocess the raw point cloud data to obtain the expected point cloud data.

Step 3. Exclude expected point cloud data located in the historical obstacle region to obtain valid point cloud data; at the same time, based on the historical pose state of the leader mobile robot, the follower distinguishes the leader's point cloud data from obstacle point cloud data using the principle of relative motion or the principle of spatial position continuity.

Step 4. If step 3 fails to identify the leader mobile robot's point cloud data, clear the leader's historical pose state variable and return to step 2; if step 3 succeeds, continue to step 5.

Step 5. Based on the leader point cloud data obtained in step 3, compute the leader mobile robot's center position and azimuth using plane geometric relationships, update the leader's historical pose state, and then use the follower's odometer sensor data to transform the center position and azimuth, obtaining the pose of the leader relative to the follower and thereby realizing the follower's tracking of the leader's point cloud.

Step 6. Based on the obstacle point cloud data obtained in step 3, estimate the obstacle's center position and extent, specify a buffer size to obtain the obstacle region, add the obstacle region to the historical obstacle region, and return to step 2.

Both the follower mobile robot and the leader mobile robot are equipped with a laser radar and an odometer sensor.

The present invention has the following advantages:

As described above, the present invention relates to a mobile robot tracking control method based on a laser radar and an odometer, suitable for environments containing unknown obstacles. The system addressed by the method consists of a follower mobile robot and a leader mobile robot, each equipped with a two-dimensional laser radar and an odometer sensor. The follower acquires laser radar data and converts it into point cloud data; the follower preprocesses the point cloud data, including outlier removal, voxel grid filtering and downsampling, reference frame transformation, and Euclidean clustering segmentation; the leader's point cloud is distinguished from obstacle point clouds by the principle of relative motion; the pose of the leader relative to the follower is computed from plane geometric relationships; the leader's point cloud is continuously tracked using the principle of spatial position continuity and the leader's historical pose; and an obstacle buffer zone is set, with obstacle regions calibrated and recorded. Even if the follower loses sight of the leader, for example because an obstacle occludes it, once the leader re-enters the follower's field of view the follower can still locate and continuously track the leader's point cloud, based on the principle of relative motion and the historical obstacle region information. Through sensor data processing, the principle of relative motion, the principle of spatial position continuity, and plane geometric computation, the invention achieves stable tracking of the leader's point cloud by the follower in unknown environments, improving the safety and continuous tracking performance of mobile robots in complex environments.

Brief Description of the Drawings

FIG. 1 is a flow chart of the mobile robot tracking control method based on laser radar and odometer in an embodiment of the present invention.

FIG. 2 is a schematic diagram of the pose model of the mobile robot in an embodiment of the present invention.

FIG. 3 is a schematic diagram of the mobile robot tracking system in an embodiment of the present invention.

FIG. 4 is a schematic diagram of point cloud segmentation and, for each cluster, the mid-split and segment center point computation in an embodiment of the present invention.

FIG. 5 is a schematic diagram of the expected point cloud data obtained after preprocessing in an embodiment of the present invention.

FIG. 6 is a schematic diagram of the application of the spatial position continuity principle in an embodiment of the present invention.

FIG. 7 is a schematic diagram of the relative distance of the leader mobile robot with respect to the follower mobile robot in an embodiment of the present invention.

FIG. 8 is a schematic diagram of the direction angle of the leader mobile robot relative to the follower mobile robot in an embodiment of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments:

Embodiment

Given the complexity of mobile robot tracking in environments with unknown obstacles and the data processing challenges of laser radar and odometer sensors, the present invention proposes a mobile robot tracking control method based on a laser radar and an odometer, so as to achieve stable tracking of a leader mobile robot by a follower mobile robot in an environment containing unknown obstacles, thereby improving the safety and continuous tracking performance of mobile robots in complex environments.

The mobile robot tracking control system addressed by the present invention comprises a follower mobile robot and a leader mobile robot, both of which are equipped with a two-dimensional laser radar and an odometer sensor.

As shown in FIG. 1, the mobile robot tracking control method based on laser radar and odometer includes the following steps:

Step 1. Define a data structure for describing an obstacle region, store instances of this data structure in a linked list, and name this linked list the historical obstacle region, where an obstacle region is defined as a rectangular region. Also define a data structure for describing the historical pose state of the leader mobile robot.

The data structure for describing an obstacle region contains four double-precision floating point numbers, representing, in order, the abscissa and ordinate of the obstacle region's center point, the length of the obstacle region, and the width of the obstacle region.

A linked list is used to store instances of this data structure; geometrically, the list represents a set of rectangular obstacle regions, so it is named the historical obstacle region.

The data structure for describing the leader mobile robot's historical pose state contains three double-precision floating point numbers, representing, in order, the abscissa and ordinate of the leader's center point and the leader's direction angle.
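
A minimal Python sketch of these two data structures and the linked-list container; the names ObstacleRegion, LeaderPose, and historical_obstacle_regions are illustrative assumptions, not names from the patent:

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class ObstacleRegion:
    # Four double-precision floats: center coordinates, length, and width
    # of a rectangular obstacle region (including its buffer zone).
    center_x: float
    center_y: float
    length: float
    width: float

@dataclass
class LeaderPose:
    # Three double-precision floats: the leader robot's center point and
    # its direction angle in the odometer frame.
    x: float
    y: float
    theta: float

# The "historical obstacle region" is stored as a linked list; a deque
# (a doubly linked list in CPython) plays that role in this sketch.
historical_obstacle_regions: deque = deque()
```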

Step 2. Acquire the follower's laser radar sensor data to generate raw point cloud data; taking into account laser radar sensor noise, laser radar data sampling density, and motion interference of the follower mobile robot, and fusing the follower's odometer sensor data, preprocess the raw point cloud data to obtain the expected point cloud data.

As shown in FIG. 3, the scan results of the follower's laser radar sensor are acquired to generate surrounding environment data, which are stored in the computer as an array. The surrounding environment data include distance information, the laser radar's scan range, and its angular resolution. The surrounding environment data are converted from the laser radar's polar coordinate system into the laser radar's rectangular coordinate system to generate the raw point cloud data, defined as the first point cloud data.

The conversion from the laser radar polar coordinate system to the laser radar rectangular coordinate system is:

x = r cos θ, y = r sin θ

where r is the distance from the laser scan point to the laser radar sensor, θ is the direction of the laser beam relative to the laser radar sensor, and x and y are the coordinates of the point in the laser radar rectangular coordinate system.

The laser radar rectangular coordinate system is the laser radar sensor coordinate system referred to below.
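
A minimal sketch of this polar-to-Cartesian conversion, assuming a scan given as a range array plus a start angle and angular increment; the function name scan_to_points is illustrative:

```python
import numpy as np

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a 2-D laser radar scan from polar to Cartesian coordinates.

    ranges: distances r, one per beam; NaN/inf readings are dropped.
    angle_min, angle_increment: scan start angle and angular resolution.
    Returns an (N, 2) array of (x, y) points in the sensor frame.
    """
    r = np.asarray(ranges, dtype=float)
    theta = angle_min + angle_increment * np.arange(len(r))
    valid = np.isfinite(r)
    x = r[valid] * np.cos(theta[valid])   # x = r cos(theta)
    y = r[valid] * np.sin(theta[valid])   # y = r sin(theta)
    return np.column_stack((x, y))
```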

Step 2.1. Taking into account laser radar sensor noise and data sampling density, filter and downsample the raw point cloud data using a statistical outlier removal algorithm and a voxel grid filter, generating the second point cloud data.
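
A rough NumPy sketch of these two preprocessing steps, standing in for library implementations such as PCL's StatisticalOutlierRemoval and VoxelGrid filters; the parameter values (k, std_ratio, voxel_size) are illustrative assumptions:

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than std_ratio standard deviations above the global mean."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)        # skip the self-distance 0
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

def voxel_grid_downsample(points, voxel_size=0.05):
    """Replace all points falling in the same voxel by their centroid."""
    keys = np.floor(points / voxel_size).astype(int)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((counts.size, points.shape[1]))
    for dim in range(points.shape[1]):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```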

Step 2.2. Taking into account the follower's motion interference, acquire the follower's odometer sensor data and the transformation information between the follower's rigid bodies, compute the transformation matrix from the follower's laser radar sensor coordinate system to the odometer sensor coordinate system, and use a coordinate transformation to change the reference frame of the second point cloud data from the laser radar sensor frame to the odometer sensor frame, generating the third point cloud data.

The follower's odometer sensor data record the position and orientation of the follower body coordinate system base_link in the odometer sensor coordinate system odom; specifically, they contain the three-dimensional coordinates (x, y, z) describing the follower's position and the quaternion [a, b, c, d]^T describing its orientation.

FIG. 2 shows the pose model of the mobile robot in an embodiment of the present invention. In the figure, X_odom and Y_odom denote the axes of the odometer sensor coordinate system and O_odom its origin; O_R is both the position of the mobile robot in the odometer frame and the origin of the robot body coordinate system, whose axes are X_R and Y_R.

The transformation matrix is computed, and the third point cloud data obtained, as follows. The rotation matrix R is recovered from the orientation quaternion [a, b, c, d]^T by the standard quaternion-to-rotation-matrix conversion, the translation vector is B = [x, y, z]^T, and

T1 = | R  B |
     | 0  1 |    (1)

where R is the rotation matrix, representing the attitude of the follower body coordinate system relative to the odometer sensor coordinate system, and B is the translation vector, representing the position of the follower body frame relative to the odometer frame, with (x, y, z) the three-dimensional coordinates of the follower's position.

The rotation matrix R and the translation vector B are combined into the 4×4 transformation matrix T1, which represents the pose of the follower body coordinate system relative to the odometer sensor coordinate system; here 0 is a 1×3 row vector, and [a, b, c, d]^T is the quaternion representation of R, describing the follower's attitude.

The transformation information between the follower's rigid bodies is acquired to obtain the transformation matrix from the follower's laser radar sensor coordinate system front_laser to the follower body coordinate system. Since the laser radar sensor is rigidly attached to the follower, their relative position and attitude are fixed, so this transformation matrix is a constant matrix, denoted T2.

Taking the follower's motion interference into account, a coordinate transformation changes the reference frame of the second point cloud data from the follower's laser radar sensor coordinate system to the odometer sensor coordinate system, generating the third point cloud data:

P3 = (T1 × T2) · P2    (2)

where P2 is the second point cloud data and P3 is the third point cloud data.
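
A sketch of this transform chain, assuming the quaternion [a, b, c, d]^T is ordered with the scalar part first; the helper names are illustrative:

```python
import numpy as np

def quat_to_rotation(a, b, c, d):
    """Rotation matrix R from a unit quaternion [a, b, c, d]^T (scalar first)."""
    return np.array([
        [1 - 2*(c*c + d*d), 2*(b*c - a*d),     2*(b*d + a*c)],
        [2*(b*c + a*d),     1 - 2*(b*b + d*d), 2*(c*d - a*b)],
        [2*(b*d - a*c),     2*(c*d + a*b),     1 - 2*(b*b + c*c)],
    ])

def make_transform(R, t):
    """Assemble the 4x4 homogeneous transform [[R, t], [0, 1]] of formula (1)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_points(T, points_xy):
    """Apply a 4x4 transform to planar points (z = 0), returning 2-D points."""
    n = len(points_xy)
    pts = np.column_stack((points_xy, np.zeros(n), np.ones(n)))
    return (pts @ T.T)[:, :2]

# T1: follower body -> odom pose from the odometer; T2: fixed lidar -> body.
# P3 = transform_points(T1 @ T2, P2) realizes formula (2).
```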

Step 2.3. Segment the third point cloud data using the Euclidean clustering method to obtain the fourth point cloud data containing K clusters; perform a mid-split and segment center point computation on each cluster of the fourth point cloud data to obtain the two subsets of each cluster and the center point coordinates of each subset, K being a natural number.

All of these center point coordinates are stored in an array, yielding the expected point cloud data.

Specifically, the Euclidean clustering method is first used to segment the third point cloud data, yielding the fourth point cloud data containing K clusters, as shown in FIG. 4:

First, a distance threshold ε is set, and for each pair of points (xi, yi) and (xj, yj) in the third point cloud data the Euclidean distance is computed:

dij = √((xi − xj)² + (yi − yj)²)

Then, points whose Euclidean distance dij is below the threshold ε are assigned to the same cluster; according to the number of separate objects in the surrounding space, the fourth point cloud data containing K clusters is generated.

As shown in FIG. 5, a mid-split and segment center point computation is performed on each cluster of the fourth point cloud data, yielding the two subsets of each cluster and the center point coordinates of each subset.

Specifically, the center points of the two subsets of each cluster are:

Zi_1 = ( (1/⌊N/2⌋) Σ_{j=1}^{⌊N/2⌋} xj , (1/⌊N/2⌋) Σ_{j=1}^{⌊N/2⌋} yj )
Zi_2 = ( (1/(N−⌊N/2⌋)) Σ_{j=⌊N/2⌋+1}^{N} xj , (1/(N−⌊N/2⌋)) Σ_{j=⌊N/2⌋+1}^{N} yj )    (3)

where Zi_1 and Zi_2 are the center point coordinates of the two subsets of the i-th cluster, i = 1, 2, ..., K; N is the length of the i-th cluster's point cloud data; the symbol ⌊·⌋ denotes the floor operation, which maps a real number to the largest integer not exceeding it; and xj and yj are the abscissa and ordinate of the point cloud data, j = 1, 2, ..., N.

These center point coordinates are stored in an array in their original order, yielding the expected point cloud data.
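
A compact sketch of step 2.3, a plain-Python stand-in for a library Euclidean cluster extraction followed by the mid-split of formula (3); the threshold eps and the function names are assumptions:

```python
import numpy as np

def euclidean_clusters(points, eps):
    """Group points whose pairwise distance is below eps (single linkage),
    via a depth-first search over the neighbourhood graph."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    labels = np.full(n, -1)
    k = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack, labels[seed] = [seed], k
        while stack:
            i = stack.pop()
            for j in np.where((d[i] < eps) & (labels == -1))[0]:
                labels[j] = k
                stack.append(j)
        k += 1
    return [points[labels == c] for c in range(k)]

def split_centers(cluster):
    """Mid-split a cluster (kept in scan order) and return the centroid of
    each half: Zi_1 over points 1..floor(N/2), Zi_2 over the rest (N >= 2)."""
    half = len(cluster) // 2
    return cluster[:half].mean(axis=0), cluster[half:].mean(axis=0)
```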

Step 3. Exclude the expected point cloud data located in the historical obstacle region to obtain valid point cloud data; at the same time, based on the leader's historical pose state, the follower distinguishes the leader's point cloud data from obstacle point cloud data using the principle of relative motion or the principle of spatial position continuity.

Based on the expected point cloud data obtained in step 2, the center point of each cluster of the expected point cloud data is computed:

oMi = (Zi_1 + Zi_2) / 2    (4)

where oMi is the center point coordinate of cluster i, expressed in the follower mobile robot's odometer sensor coordinate system, i = 1, 2, ..., K.

The specific process for excluding clusters of the expected point cloud data that lie in the historical obstacle region is as follows (a Python sketch follows the listing):

Input: oMi, the center point coordinates of each cluster of the expected point cloud data, and the historical obstacle region linked list LIST;

Output: a Boolean value indicating whether the point lies in a historical obstacle region.

The test for whether a point lies in a historical obstacle region proceeds as follows:

For each obstacle region in LIST:

obstacle center_x = x coordinate of the obstacle region's center point;

obstacle center_y = y coordinate of the obstacle region's center point;

obstacle length = length of the obstacle region;

obstacle width = width of the obstacle region.

Compute the rectangle boundaries:

right boundary = obstacle center_x − obstacle width / 2... no wait; right boundary = obstacle center_y − obstacle width / 2;

left boundary = obstacle center_y + obstacle width / 2;

lower boundary = obstacle center_x − obstacle length / 2;

upper boundary = obstacle center_x + obstacle length / 2.

If oMi lies within the rectangle enclosed by these four boundary values, return yes; otherwise return no.

The remaining clusters of expected point cloud data are stored in an array in their original order, forming the valid point cloud data.
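
The exclusion test above, sketched in Python with obstacle regions stored as (center_x, center_y, length, width) tuples; the function names are illustrative:

```python
def in_historical_obstacle_region(m_i, regions):
    """Return True if a cluster center m_i = (x, y) lies inside any
    rectangular region of the historical obstacle region list."""
    x, y = m_i
    for cx, cy, length, width in regions:
        lower, upper = cx - length / 2.0, cx + length / 2.0
        right, left = cy - width / 2.0, cy + width / 2.0
        if lower <= x <= upper and right <= y <= left:
            return True
    return False

def valid_point_cloud(clusters, centers, regions):
    """Keep the clusters (in their original order) whose centers fall
    outside every historical obstacle region."""
    return [c for c, m in zip(clusters, centers)
            if not in_historical_obstacle_region(m, regions)]
```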

To address the problem of locating the leader mobile robot in the initial stage, tracking it during normal operation, and re-locating it after tracking is lost, the present invention designs three positioning methods (initial positioning, tracking positioning, and repositioning), with the corresponding method run in each stage.

The initial stage is the period after the program starts and before the leader mobile robot's point cloud has been located.

Once the leader's point cloud has been located in the initialization stage, the system enters the normal operation stage.

If occlusion by an obstacle makes it impossible to locate the leader's point cloud during normal operation, the system enters the tracking-loss stage.

Once the leader's point cloud is re-identified in the tracking-loss stage, the system returns to the normal operation stage.

Initial positioning method:

While the number of consecutive entries into the initial positioning method is less than C, the valid point cloud data are recorded into an array.

When the number of consecutive entries equals C, the valid point cloud data are recorded, and the average of the C recorded groups of valid point cloud data is computed; this average is called the initial valid point cloud data.

Formula (4) is used to compute the center points of each cluster of the initial valid point cloud data and of the current valid point cloud data; these are called the initial point cloud position and the real-time point cloud position, respectively.

The clusters of the initial valid point cloud data and of the valid point cloud data are each connected into a line segment, and the arctangent function is used to compute the angle between this segment and the positive horizontal axis of the follower's odometer sensor coordinate system; these are called the initial point cloud angle and the real-time point cloud angle, respectively.

When the leader mobile robot translates, so that the Euclidean distance between the initial point cloud position and the real-time point cloud position exceeds a threshold σ, the leader's point cloud data are distinguished from the valid point cloud data.

Alternatively, when the leader mobile robot rotates, so that the difference between the initial point cloud angle and the real-time point cloud angle exceeds a threshold η, the leader's point cloud data are distinguished from the valid point cloud data.
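
A sketch of the relative-motion test used here; the threshold values sigma and eta are placeholders, not values taken from the patent:

```python
import numpy as np

def segment_angle(z1, z2):
    """Angle between the segment joining a cluster's two sub-centers and the
    positive horizontal axis of the odometer frame (via the arctangent)."""
    return np.arctan2(z2[1] - z1[1], z2[0] - z1[0])

def is_leader(initial_pos, live_pos, initial_ang, live_ang,
              sigma=0.10, eta=np.deg2rad(10.0)):
    """Relative-motion test for one cluster: it is taken to be the leader
    if its center translated more than sigma or its segment angle changed
    by more than eta since the averaged initial observation."""
    moved = np.linalg.norm(np.asarray(live_pos) - np.asarray(initial_pos)) > sigma
    dang = live_ang - initial_ang
    turned = abs(np.arctan2(np.sin(dang), np.cos(dang))) > eta  # wrapped diff
    return moved or turned
```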

Tracking positioning method:

Formula (4) is used to compute the center point of each cluster of the valid point cloud data, called the real-time point cloud position.

The leader mobile robot's historical pose state variable is read, and a circular region of radius R is drawn around the leader's center point from that historical pose, as shown in FIG. 6; this circular region is called the circular boundary condition. A valid point cloud cluster whose real-time position lies inside the circular boundary condition is taken to be the leader mobile robot's point cloud data, thereby distinguishing the leader's point cloud data within the valid point cloud data.
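
A sketch of the circular boundary condition, assuming cluster centers as (x, y) pairs and the historical pose as (x, y, theta); the radius R is a tuning parameter:

```python
import numpy as np

def track_leader(cluster_centers, last_pose, radius):
    """Spatial-continuity test: return the index of the first cluster whose
    real-time center lies inside the circle of the given radius around the
    leader's last known center, or None if no cluster qualifies."""
    cx, cy = last_pose[0], last_pose[1]   # historical pose is (x, y, theta)
    for idx, (x, y) in enumerate(cluster_centers):
        if np.hypot(x - cx, y - cy) <= radius:
            return idx
    return None
```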

Repositioning method:

While the number of consecutive entries into the repositioning method does not exceed C, the valid point cloud data are recorded into an array and the average of the recorded valid point cloud data is computed; this average is called the real-time initial valid point cloud data.

Formula (4) is used to compute the center points of each cluster of the real-time initial valid point cloud data and of the valid point cloud data, called the real-time initial point cloud position and the real-time point cloud position, respectively.

The clusters of the real-time initial valid point cloud data and of the valid point cloud data are each connected into a line segment, and the arctangent function is used to compute the angle between this segment and the positive horizontal axis of the follower's odometer sensor coordinate system; these are called the real-time initial point cloud angle and the real-time point cloud angle, respectively.

When the leader mobile robot translates, so that the Euclidean distance between the real-time initial point cloud position and the real-time point cloud position exceeds the threshold σ, the leader's point cloud data are distinguished from the valid point cloud data.

Alternatively, when the leader mobile robot rotates, so that the difference between the real-time initial point cloud angle and the real-time point cloud angle exceeds the threshold η, the leader's point cloud data are distinguished from the valid point cloud data.

Step 4. If occlusion of the leader by an obstacle makes it impossible to distinguish the leader's point cloud data from the valid point cloud data in the normal operation stage of step 3, clear the leader's historical pose state variable and return to step 2. Otherwise, if step 3 successfully identifies the leader's point cloud data, continue to step 5.

Step 5. Based on the leader point cloud data obtained in step 3, compute the leader's center position and azimuth using plane geometric relationships, and use the follower's odometer sensor data to apply a coordinate transformation to this center position and azimuth, obtaining the pose of the leader relative to the follower and thereby realizing the follower's tracking of the leader's point cloud.

Step 5.1. Based on the leader point cloud data obtained in step 3, compute the center point of the leader's point cloud data using the averaging algorithm; from the fixed positional relationship between this point cloud center and the leader robot's center point, obtain the leader's center point position, i.e., the center position, denoted oMl.

Step 5.2. Based on the leader point cloud data obtained in step 3, connect the leader's point cloud data into a line segment and compute the angle between this segment and the positive horizontal axis of the odometer sensor coordinate system, obtaining the leader's angle, i.e., the azimuth, denoted oθl.

Step 5.3. Use the center position from step 5.1 and the azimuth from step 5.2 to update the leader's historical pose state variable.

Step 5.4. Acquire the follower's odometer sensor data and the transformation information between the follower's rigid bodies, and compute the transformation matrix from the odometer sensor coordinate system to the follower chassis coordinate system.

First, the follower's odometer sensor data are read and the rotation matrix and transformation matrix from the follower body frame to the odometer frame are computed; inverting each yields the rotation matrix and transformation matrix from the odometer frame to the follower body frame, denoted R⁻¹ and T3, respectively.

The transformation information between the follower's rigid bodies is read to obtain the transformation matrix from the follower body coordinate system to the follower chassis coordinate system chassis_link; since the relative position and attitude between these two frames are fixed, this transformation matrix is a constant matrix, denoted T4.

Step 5.5. Using the transformation matrices obtained in step 5.4, transform the reference frame of the center position from step 5.1 and the azimuth from step 5.2 from the odometer sensor coordinate system to the follower chassis coordinate system, as in formula (5), obtaining the pose of the leader relative to the follower, as shown in FIG. 7. In the figure, OF denotes the origin of the follower body coordinate system, and XF and YF its axes. The relative position of the leader with respect to the follower is computed as:

fMl = (T3 × T4) · oMl    (5)

where fMl is the relative position, i.e., the coordinates of the leader's center point in the follower chassis coordinate system.

Using a rotation matrix, the reference frame of the azimuth is transformed from the odometer sensor coordinate system to the follower chassis coordinate system, yielding the angle of the leader relative to the follower, as shown in FIG. 8:

where fRl is the rotation matrix representing the rotation from the positive horizontal axis of the follower chassis coordinate system to the leader's forward direction.

From the rotation matrix fRl, the angle fθl between the leader's forward direction and the positive horizontal axis of the follower chassis coordinate system is extracted.

This yields the pose of the leader mobile robot relative to the follower mobile robot, thereby realizing the follower's tracking of the leader's point cloud.
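
A sketch of step 5, following formula (5) for the position and treating the azimuth rotation via the yaw of the combined transform; this is one plausible reading of the text, and the helper name relative_pose is an assumption:

```python
import numpy as np

def relative_pose(T3, T4, leader_xy_odom, leader_theta_odom):
    """Transform the leader's pose from the odometer frame into the
    follower's chassis frame. T3: odom -> follower body (inverted
    odometry); T4: body -> chassis (constant). Returns (x, y, theta)."""
    T = T3 @ T4                     # composition as written in formula (5)
    x, y = leader_xy_odom
    p = T @ np.array([x, y, 0.0, 1.0])          # fMl = (T3 x T4) . oMl
    yaw = np.arctan2(T[1, 0], T[0, 0])          # planar yaw of the transform
    theta = leader_theta_odom + yaw
    theta = np.arctan2(np.sin(theta), np.cos(theta))  # wrap to (-pi, pi]
    return p[0], p[1], theta
```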

Step 6. Based on the obstacle point cloud data obtained in step 3, estimate the obstacle's center position and extent, specify a buffer size to obtain the obstacle region, add the obstacle region to the historical obstacle region, and return to step 2.

Based on the obstacle point cloud data obtained in step 3, plane geometric relationships are used to estimate a rectangular region enclosing all of the obstacle point cloud data:

Zj_mid = (Zj_1 + Zj_2) / 2,  lj = |xj_1 − xj_2|,  wj = |yj_1 − yj_2|    (7)

where Zj_mid is the center point of the obstacle point cloud data; Zj_1 = (xj_1, yj_1) and Zj_2 = (xj_2, yj_2) are the center points of the two subsets of the obstacle point cloud data; and lj and wj are the length and width of the rectangular region, j = 1, 2, ..., K−1.

Taking into account the circumscribed-circle radius and the minimum turning radius of the mobile robot, a buffer of specified size is created around the rectangular region; the rectangle together with its buffer is named the obstacle region.

The center point, length, and width of the obstacle region are computed as:

Mj = Zj_mid,  lengthj = lj + 2·buffer,  widthj = wj + 2·buffer    (8)

where Mj is the center point of the obstacle region, lengthj its length, widthj its width, and buffer the specified buffer size, j = 1, 2, ..., K−1.

The center point, length, and width of the obstacle region are stored in the data structure for describing obstacle regions, which is then added to the historical obstacle region linked list, and the method returns to step 2.
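
A sketch of step 6 under the same reading as formulas (7) and (8) above (rectangle spanned by the two sub-centers, enlarged by the buffer on every side); the names and the exact buffering convention are assumptions:

```python
def make_obstacle_region(z1, z2, buffer):
    """Build a buffered rectangular obstacle region from the two sub-center
    points z1 = (x1, y1) and z2 = (x2, y2) of an obstacle cluster."""
    cx = (z1[0] + z2[0]) / 2.0                    # Zj_mid, formula (7)
    cy = (z1[1] + z2[1]) / 2.0
    length = abs(z1[0] - z2[0]) + 2.0 * buffer    # lengthj, formula (8)
    width = abs(z1[1] - z2[1]) + 2.0 * buffer     # widthj, formula (8)
    return (cx, cy, length, width)

# The region tuple is appended to the historical obstacle region list
# before the method returns to step 2, e.g.:
# historical_obstacle_regions.append(make_obstacle_region(z1, z2, 0.3))
```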

Through sensor data processing, the principle of relative motion, the principle of spatial position continuity, and plane geometric computation, the method of the present invention achieves stable tracking of the leader's point cloud by the follower in unknown environments, improving the safety and continuous tracking performance of mobile robots in complex environments.

Of course, the above description covers only preferred embodiments of the present invention, and the invention is not limited to the embodiments listed. It should be noted that all equivalent substitutions and obvious variations made by any person skilled in the art under the teaching of this specification fall within the essential scope of this specification and should be protected by the present invention.

Claims (5)

1. A mobile robot tracking control method based on a laser radar and an odometer, applied to a mobile robot tracking control system comprising a follower mobile robot and a leader mobile robot, characterized by comprising the following steps:
Step 1, defining a data structure for describing an obstacle region, storing the data structure in a linked list named the historical obstacle region, and defining a data structure for describing the historical pose state of the leader mobile robot;
Step 2, acquiring laser radar sensor data of the follower mobile robot to generate raw point cloud data, taking into account laser radar sensor noise, laser radar data sampling density, and motion interference of the follower mobile robot, fusing the follower's odometer sensor data, and preprocessing the raw point cloud data to obtain expected point cloud data;
In step 2, the expected point cloud data are acquired as follows:
Step 2.1, filtering and downsampling the raw point cloud data using a statistical outlier removal algorithm and a voxel grid filter to generate second point cloud data;
Step 2.2, acquiring the follower's odometer sensor data and the transformation information between the follower's rigid bodies, computing the transformation matrix from the follower's laser radar sensor coordinate system to the odometer sensor coordinate system, and transforming the reference frame of the second point cloud data from the laser radar sensor frame to the odometer sensor frame by coordinate transformation to generate third point cloud data;
Step 2.3, segmenting the third point cloud data by the Euclidean clustering method to obtain fourth point cloud data containing K clusters; performing a mid-split and segment center point computation on each cluster of the fourth point cloud data to obtain the two subsets of each cluster and the center point coordinates of each subset, K being a natural number;
storing all the center point coordinates in an array to obtain the expected point cloud data;
In step 2.2, the transformation matrix is computed, and the third point cloud data obtained, as follows: the rotation matrix R is recovered from the orientation quaternion [a, b, c, d]^T, the translation vector is B = [x, y, z]^T, and T1 = [R B; 0 1] (1),
where R is the rotation matrix, representing the attitude of the follower body coordinate system relative to the odometer sensor coordinate system; B is the translation vector, representing the position of the follower body frame relative to the odometer frame, with (x, y, z) the three-dimensional coordinates of the follower's position;
the rotation matrix R and the translation vector B are combined into the 4×4 transformation matrix T1, representing the pose of the follower body coordinate system relative to the odometer sensor coordinate system; 0 is a 1×3 row vector, and [a, b, c, d]^T is the quaternion representation of the rotation matrix R, describing the follower's attitude;
the transformation information between the follower's rigid bodies is acquired to obtain the transformation matrix from the follower's laser radar sensor coordinate system to the follower body coordinate system; this transformation matrix is a constant matrix, denoted T2;
the reference frame of the second point cloud data is transformed from the laser radar sensor frame to the odometer sensor frame by coordinate transformation to generate the third point cloud data:
P3 = (T1 × T2) · P2    (2)
where P2 is the second point cloud data and P3 is the third point cloud data;
In step 2.3, the center points of the subsets are given by formula (3):
where Zi_1 and Zi_2 are the center point coordinates of the two subsets of the i-th cluster, i = 1, 2, ..., K; N is the length of the i-th cluster's point cloud data; the symbol ⌊·⌋ denotes the floor operation; and xj and yj are the abscissa and ordinate of the point cloud data, j = 1, 2, ..., N;
Step 3, excluding expected point cloud data located in the historical obstacle region to obtain valid point cloud data, and, based on the leader mobile robot's historical pose state, distinguishing the leader's point cloud data from obstacle point cloud data by the principle of relative motion or the principle of spatial position continuity;
In step 3, the expected point cloud data located in the historical obstacle region are excluded as follows:
based on the expected point cloud data obtained in step 2, computing the center point of each cluster of the expected point cloud data:
oMi = (Zi_1 + Zi_2) / 2    (4)
where oMi is the center point coordinate of each cluster in the follower's odometer sensor coordinate system, i = 1, 2, ..., K;
Zi_1 and Zi_2 are the center point coordinates of the two subsets of the i-th cluster;
given oMi, the center point coordinates of each cluster of the expected point cloud data, and the historical obstacle region linked list LIST, the clusters lying in the historical obstacle region are excluded as follows:
for each obstacle region in LIST:
obstacle center_x = x coordinate of the obstacle region's center point;
obstacle center_y = y coordinate of the obstacle region's center point;
obstacle length = length of the obstacle region;
obstacle width = width of the obstacle region;
computing the rectangle boundaries:
right boundary = obstacle center_y − obstacle width / 2;
left boundary = obstacle center_y + obstacle width / 2;
lower boundary = obstacle center_x − obstacle length / 2;
upper boundary = obstacle center_x + obstacle length / 2;
if oMi lies within the rectangle enclosed by these four boundary values, the expected point cloud data is in the historical obstacle region; otherwise it is not;
the remaining expected point cloud clusters are stored in an array in their original order, forming the valid point cloud data;
Step 3 specifically comprises:
for the problems of locating the leader mobile robot in the initial stage, tracking it during normal operation, and re-locating it after tracking is lost, three positioning methods are designed (initial positioning, tracking positioning, and repositioning), with the corresponding method run in each stage;
the initial positioning method:
when the number of consecutive entries into the initial positioning method is less than C, recording the valid point cloud data, C being a natural number;
when the number of consecutive entries equals C, recording the valid point cloud data and computing the average of the C recorded groups of valid point cloud data, namely the initial point cloud position;
when the number of consecutive entries is not less than C, based on the initial point cloud position, distinguishing the leader's point cloud data from the valid point cloud data by the principle of relative motion when the leader translates or rotates;
the tracking positioning method:
acquiring the leader's historical pose state variable, drawing a circular region of radius R around the leader's center point from that historical pose, and treating this circular region as the circular boundary condition; the valid point cloud data appearing within the circular boundary condition at the current moment is taken to be the leader's point cloud data, thereby distinguishing the leader's point cloud data from the valid point cloud data;
the repositioning method:
when the number of consecutive entries into the repositioning method does not exceed C, recording the valid point cloud data and computing the average of the recorded valid point cloud data, namely the real-time initial point cloud position;
when the program enters the repositioning method, based on the real-time initial point cloud position, distinguishing the leader's point cloud data from the valid point cloud data by the principle of relative motion when the leader translates or rotates;
Step 4, if step 3 cannot successfully identify the leader's point cloud data, clearing the leader's historical pose state variable and returning to step 2; if step 3 succeeds, continuing to step 5;
Step 5, computing the center position and azimuth of the leader mobile robot from the leader point cloud data obtained in step 3 using plane geometric relationships, updating the leader's historical pose state, and then applying a coordinate transformation to the leader's center position and azimuth using the follower's odometer sensor data to obtain the pose of the leader relative to the follower, thereby realizing the follower's tracking of the leader's point cloud;
Step 6, estimating the center position and extent of the obstacle from the obstacle point cloud data obtained in step 3, specifying a buffer size to obtain the obstacle region, adding the obstacle region to the historical obstacle region, and returning to step 2;
wherein both the follower mobile robot and the leader mobile robot carry a laser radar and an odometer sensor.
2. The mobile robot tracking control method based on a laser radar and an odometer according to claim 1, wherein in step 1 the obstacle region is defined as a rectangular region;
the data structure for describing the obstacle region comprises four double-precision floating point numbers, sequentially representing the abscissa and ordinate of the obstacle region's center point, the length of the obstacle region, and the width of the obstacle region;
the data structure for describing the leader mobile robot's historical pose state comprises three double-precision floating point numbers, sequentially representing the abscissa and ordinate of the leader's center point and the leader's direction angle.
3. The mobile robot tracking control method based on the laser radar and the odometer according to claim 1, wherein the step 5 is specifically:
Step 5.1, based on the pilot mobile robot point cloud data obtained in step 3, calculating the coordinates of the point cloud's center point with an averaging algorithm, and obtaining the position of the pilot mobile robot's center point, i.e. the center position, from the fixed positional relationship between the point cloud center point and the pilot mobile robot's center point;
Step 5.2, connecting the pilot mobile robot point cloud data obtained in step 3 into a line segment and calculating the included angle between that segment and the positive horizontal axis of the odometer sensor coordinate system, which gives the angle of the pilot mobile robot, i.e. the azimuth angle;
Step 5.3, updating the historical pose state variable of the pilot mobile robot with the center position from step 5.1 and the azimuth angle from step 5.2;
Step 5.4, acquiring the odometer sensor data of the follower mobile robot and the transformation information between the follower mobile robot's rigid bodies, and computing the transformation matrix from the odometer sensor coordinate system to the follower mobile robot chassis coordinate system;
Step 5.5, using the transformation matrix from step 5.4, transforming the reference coordinate system of the center position from step 5.1 and of the azimuth angle from step 5.2 from the odometer sensor coordinate system to the follower mobile robot chassis coordinate system, obtaining the pose of the navigator mobile robot relative to the follower mobile robot;
Thereby the tracking of the navigator mobile robot point cloud by the follower mobile robot is realized (steps 5.1 and 5.2 are sketched below).
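A minimal sketch of steps 5.1 and 5.2, assuming the point cloud is ordered along the segment and omitting the fixed offset between the point cloud center and the robot's true center (function and variable names are hypothetical):

```python
import math

def leader_center_and_azimuth(leader_points):
    """leader_points: ordered navigator point cloud (x, y), at least two points."""
    n = len(leader_points)
    # Step 5.1: center of the point cloud by averaging (the fixed offset
    # to the robot's true center point is omitted in this sketch).
    cx = sum(x for x, _ in leader_points) / n
    cy = sum(y for _, y in leader_points) / n
    # Step 5.2: azimuth as the angle between the segment through the
    # endpoints and the positive horizontal axis of the odometer frame.
    (x0, y0), (x1, y1) = leader_points[0], leader_points[-1]
    azimuth = math.atan2(y1 - y0, x1 - x0)
    return (cx, cy), azimuth
```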
4. The mobile robot tracking control method based on the laser radar and the odometer according to claim 3, wherein in step 5.4 the odometer sensor data of the follower mobile robot are acquired, the rotation matrix and the transformation matrix from the follower mobile robot body coordinate system to the odometer sensor coordinate system are calculated, and inverting each of them yields the rotation matrix and the transformation matrix from the odometer sensor coordinate system to the follower mobile robot body coordinate system, denoted $R^{-1}$ and $T_3$ respectively;
The transformation information between the follower mobile robot's rigid bodies is acquired to obtain the transformation matrix from the follower mobile robot body coordinate system to the follower mobile robot chassis coordinate system; this transformation matrix is a constant matrix, denoted $T_4$;
In step 5.5, the relative position of the pilot mobile robot with respect to the follower mobile robot is calculated as:

$$ {}^{f}M_{l} = (T_{3} \times T_{4}) \times {}^{o}M_{l} \quad (5) $$

Wherein ${}^{f}M_{l}$ is the relative position, i.e. the coordinates of the navigator mobile robot's center point in the follower mobile robot chassis coordinate system, and ${}^{o}M_{l}$ is the center position obtained in step 5.1;
The reference coordinate system of the azimuth angle is transformed from the odometer sensor coordinate system to the follower mobile robot chassis coordinate system by means of a rotation matrix, giving the angle of the navigator mobile robot relative to the follower mobile robot;
Wherein ${}^{f}R_{l}$ is the resulting rotation matrix, describing the rotation from the positive horizontal axis of the follower mobile robot chassis coordinate system to the forward direction of the navigator mobile robot, and ${}^{o}\theta_{l}$ is the azimuth angle obtained in step 5.2;
The included angle ${}^{f}\theta_{l}$ between the forward direction of the navigator mobile robot and the positive horizontal axis of the follower mobile robot chassis coordinate system is then extracted from the rotation matrix ${}^{f}R_{l}$;
thus, the pose of the navigator mobile robot relative to the follower mobile robot is obtained.
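Numerically, the chain of transforms can be sketched with homogeneous matrices. This sketch writes the composition in the usual column-vector order (chassis ← body ← odometer, i.e. $T_4 \cdot T_3$, one reading of the claim's product $T_3 \times T_4$); all poses and offsets below are made-up example values:

```python
import numpy as np

def make_se2(theta, tx, ty):
    """Homogeneous 3x3 planar transform: rotation theta, translation (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# T_body_to_odom is read from the follower's odometer; inverting it gives
# T3 (odometer frame -> follower body frame). T4 (body -> chassis) is the
# constant rigid-body offset. All values below are made-up examples.
T_body_to_odom = make_se2(0.3, 1.0, 2.0)
T3 = np.linalg.inv(T_body_to_odom)
T4 = make_se2(0.0, -0.1, 0.0)

# Equation (5), in column-vector order: the navigator's center point
# oMl (odometer frame, homogeneous) mapped into the chassis frame.
oMl = np.array([2.5, 3.0, 1.0])
fMl = T4 @ T3 @ oMl

# Azimuth: rotate the odometer-frame angle o_theta_l into the chassis
# frame and extract the included angle f_theta_l from the rotation matrix.
o_theta_l = 0.9
fRl = (T4 @ T3)[:2, :2] @ make_se2(o_theta_l, 0.0, 0.0)[:2, :2]
f_theta_l = np.arctan2(fRl[1, 0], fRl[0, 0])
print(fMl[:2], f_theta_l)
```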
5. The mobile robot tracking control method based on the laser radar and the odometer according to claim 1, wherein the step 6 is specifically:
Based on the obstacle point cloud data obtained in step 3, a rectangular region that encloses all obstacle point cloud data is calculated using the plane geometric relationship;
Wherein $Z_{j\_mid}$ is the center point coordinate of the $j$-th obstacle point cloud, obtained from the center point coordinates of two subsets of the obstacle point cloud data, $l_j$ is the length of the rectangular region, and $w_j$ is its width, where $j = 1, 2, \ldots, K-1$;
In consideration of the circumscribed circle radius and the minimum turning radius of the mobile robot, a buffer of designated size is created around the rectangular region, and the rectangular region together with its buffer is named the obstacle region;
The center point coordinates, length, and width of the obstacle region are then calculated: the center point $M_{j}$ of the obstacle region coincides with the center point $Z_{j\_mid}$ of the enclosing rectangle, and the designated buffer size $\mathrm{buffer}$ is added to the rectangle's length $l_{j}$ and width $w_{j}$ to give the region's $\mathrm{length}_{j}$ and $\mathrm{width}_{j}$, where $j = 1, 2, \ldots, K-1$;
The center point coordinates, length, and width of the obstacle region are stored in the data structure describing the obstacle region, which is then appended to the historical obstacle region linked list, and the method returns to step 2 (a sketch of this step follows).
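A sketch of the region construction, using an axis-aligned min/max rectangle as a stand-in for the claim's two-subset construction, and assuming the buffer is added on every side; the names and that inflation assumption are mine, not the patent's:

```python
import numpy as np

def obstacle_region(points, buffer):
    """points: one obstacle's point cloud (x, y) in the odometer frame."""
    pts = np.asarray(points, dtype=float)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    # Axis-aligned enclosing rectangle: center Z_j_mid, length l_j, width w_j.
    center = (mins + maxs) / 2.0
    l_j, w_j = maxs - mins
    # Assumed inflation: the designated buffer is added on every side.
    return center[0], center[1], l_j + 2.0 * buffer, w_j + 2.0 * buffer
```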
CN202410162551.1A 2024-02-05 2024-02-05 Mobile robot tracking control method based on laser radar and odometer Active CN117949966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410162551.1A CN117949966B (en) 2024-02-05 2024-02-05 Mobile robot tracking control method based on laser radar and odometer

Publications (2)

Publication Number Publication Date
CN117949966A CN117949966A (en) 2024-04-30
CN117949966B (en) 2024-10-18

Family

ID=90794295

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881044A (en) * 2015-06-11 2015-09-02 北京理工大学 Adaptive tracking control method of multi-mobile-robot system under condition of attitude unknown
CN112379673A (en) * 2020-11-26 2021-02-19 广东盈峰智能环卫科技有限公司 Robot self-following method and device based on single-line laser radar and robot

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11241791B1 (en) * 2018-04-17 2022-02-08 AI Incorporated Method for tracking movement of a mobile robotic device
KR20220120908A (en) * 2021-02-24 2022-08-31 한국전자통신연구원 Mobile robot and control method of mobile robot
CN113885510B (en) * 2021-10-21 2023-11-10 齐鲁工业大学 Four-foot robot obstacle avoidance and pilot following method and system
CN116576857A (en) * 2023-04-19 2023-08-11 东北大学 Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant