
CN114740866A - Robot autonomous exploration method and system based on deep learning - Google Patents

Robot autonomous exploration method and system based on deep learning

Info

Publication number
CN114740866A
CN114740866A
Authority
CN
China
Prior art keywords
robot
point
indoor scene
deep learning
tree
Prior art date
Legal status
Granted
Application number
CN202210504426.5A
Other languages
Chinese (zh)
Other versions
CN114740866B (en)
Inventor
王超群
王银川
赵昊宁
荣学文
宋锐
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202210504426.5A priority Critical patent/CN114740866B/en
Publication of CN114740866A publication Critical patent/CN114740866A/en
Application granted granted Critical
Publication of CN114740866B publication Critical patent/CN114740866B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D 1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D 1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D 1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D 1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser


Abstract

本公开公开的基于深度学习的机器人自主探索方法及系统,包括:获取机器人坐标点和构建的当前室内场景图;通过深度学习网络确定当前室内场景图的边界;选择距离机器人坐标点最近的边界中心为候选点;通过快速生长随机树算法在机器人坐标点与候选点之间生成树状路径;选取树状路径上的倒数第二个节点为目标点;根据机器人坐标点与目标点生成全局路径;以设定速度驱动机器人沿全局路径移动,获取新的激光点云数据;通过新的激光点云数据对当前室内场景图进行更新,完成室内场景建图。实现高精度的室内场景建图。

Figure 202210504426

The deep-learning-based robot autonomous exploration method and system disclosed in the present disclosure include: acquiring the robot's coordinate point and the currently constructed indoor scene map; determining the boundary of the current indoor scene map through a deep learning network; selecting the boundary center closest to the robot's coordinate point as the candidate point; generating a tree-shaped path between the robot's coordinate point and the candidate point through the rapidly-exploring random tree algorithm; selecting the penultimate node on the tree-shaped path as the target point; generating a global path from the robot's coordinate point and the target point; driving the robot along the global path at a set speed to acquire new laser point cloud data; and updating the current indoor scene map with the new laser point cloud data to complete the indoor scene mapping, achieving high-precision indoor scene mapping.


Description

基于深度学习的机器人自主探索方法及系统Robot autonomous exploration method and system based on deep learning

技术领域technical field

本发明涉及机器人技术领域,尤其涉及基于深度学习的机器人自主探索方法及系统。The present invention relates to the field of robotics, in particular to a method and system for autonomous exploration of robots based on deep learning.

背景技术Background technique

本部分的陈述仅仅是提供了与本公开相关的背景技术信息,不必然构成在先技术。The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

移动式机器人具有行动灵活,操作方便、鲁棒性强等优点,在医疗、军事、航天、物流等领域具有广泛的应用前景。移动机器人自主探索是指机器人在一个未知环境中,通过自主移动和环境感知,最终构建出完整的环境地图,机器人自主探索可以帮助人们实现复杂地形的地图重建工作,极大的减少了极端环境条件(如狭小、高温等)对人的不利影响,为人们后期对环境地图进行操作提供了便利。Mobile robots move flexibly, are convenient to operate, and are highly robust, giving them broad application prospects in fields such as medicine, the military, aerospace, and logistics. Autonomous exploration means that a robot placed in an unknown environment finally builds a complete map of that environment through autonomous movement and environmental perception. Autonomous exploration lets robots carry out map reconstruction of complex terrain on people's behalf, greatly reducing the harm that extreme conditions (such as confined spaces or high temperatures) pose to people, and makes it convenient to work with the resulting environment maps later.

在现有的技术方案中,机器人自主探索进行场景建图时主要存在以下技术难题:In existing technical solutions, the following main difficulties arise when robots autonomously explore and map a scene:

(1)探索效率低。在进行大面积建图时,提取地图边界需要大量计算资源,而且难以得到信息量大的边界,导致机器人探索效率低下。(1) Low exploration efficiency. When mapping a large area, extracting the map boundary consumes substantial computing resources, and it is difficult to obtain information-rich boundaries, so the robot explores inefficiently.

(2)地图完整度低。机器人在进行探索式建图时由于目标点选取的区域局限性,导致最终绘制的环境地图不完整。(2) Low map completeness. Because target points can only be selected from limited regions, the environment map the robot finally draws during exploratory mapping is incomplete.

(3)建图灵活性差。机器人在选择目标点时,容易选取到距离障碍物过近的点,从而导致机器人难以到达目标点。(3) Poor mapping flexibility. When selecting a target point, the robot easily picks one too close to an obstacle, making the target point hard to reach.

发明内容SUMMARY OF THE INVENTION

本公开为了解决上述问题,提出了基于深度学习的机器人自主探索方法及系统,通过U-net深度学习网络快速确定地图中已知区域和未知区域的边界,并从边界中选取候选点,进而通过快速生长随机树算法在机器人坐标点与候选点间生成树状路径,进而根据树状路径确定目标点,通过该目标点进行全局路径规划时,能够实现实时避障的功能,最终实现高精度的室内场景建图。To solve the above problems, the present disclosure proposes a deep-learning-based robot autonomous exploration method and system. A U-net deep learning network quickly determines the boundary between the known and unknown regions of the map, candidate points are selected from that boundary, a rapidly-exploring random tree algorithm generates a tree-shaped path between the robot's coordinate point and the candidate point, and a target point is then determined from that path. Planning the global path through this target point enables real-time obstacle avoidance and ultimately achieves high-precision indoor scene mapping.

为实现上述目的,本公开采用如下技术方案:To achieve the above object, the present disclosure adopts the following technical solutions:

第一方面,提出了基于深度学习的机器人自主探索方法,包括:In the first aspect, a deep learning-based autonomous exploration method for robots is proposed, including:

获取机器人坐标点和构建的当前室内场景图;Obtain the robot's coordinate point and the currently constructed indoor scene map;

通过深度学习网络确定当前室内场景图的边界;Determine the boundary of the current indoor scene map through a deep learning network;

选择距离机器人坐标点最近的边界中心为候选点;Select the boundary center closest to the robot's coordinate point as the candidate point;

通过快速生长随机树算法在机器人坐标点与候选点之间生成树状路径;Generate a tree-shaped path between the robot's coordinate point and the candidate point through the rapidly-exploring random tree algorithm;

选取树状路径上的倒数第二个节点为目标点;Select the penultimate node on the tree-shaped path as the target point;

根据机器人坐标点与目标点生成全局路径;Generate a global path from the robot's coordinate point and the target point;

以设定速度驱动机器人沿全局路径移动,获取新的激光点云数据;Drive the robot along the global path at a set speed and acquire new laser point cloud data;

通过新的激光点云数据对当前室内场景图进行更新,完成室内场景建图。Update the current indoor scene map with the new laser point cloud data to complete the indoor scene mapping.

第二方面,提出了基于深度学习的机器人自主探索系统,包括:In the second aspect, a robot autonomous exploration system based on deep learning is proposed, including:

数据获取模块,用于获取机器人坐标点和构建的当前室内场景图;A data acquisition module, used to acquire the robot's coordinate point and the currently constructed indoor scene map;

目标选择模块,通过深度学习网络确定当前室内场景图的边界;选择距离机器人坐标点最近的边界中心为候选点;通过快速生长随机树算法在机器人坐标点与候选点之间生成树状路径;选取树状路径上的倒数第二个节点为目标点;A target selection module, which determines the boundary of the current indoor scene map through a deep learning network, selects the boundary center closest to the robot's coordinate point as the candidate point, generates a tree-shaped path between the robot's coordinate point and the candidate point through the rapidly-exploring random tree algorithm, and selects the penultimate node on the tree-shaped path as the target point;

路径规划模块,用于根据机器人坐标点与目标点生成全局路径;A path planning module, used to generate a global path from the robot's coordinate point and the target point;

机器人驱动模块,用于以设定速度驱动机器人沿全局路径移动,获取新的激光点云数据;A robot drive module, used to drive the robot along the global path at a set speed and acquire new laser point cloud data;

建图模块,用于通过激光点云数据对当前室内场景图进行更新,完成室内场景建图。A mapping module, used to update the current indoor scene map with the laser point cloud data to complete the indoor scene mapping.

第三方面,提出了一种电子设备,包括存储器和处理器以及存储在存储器上并在处理器上运行的计算机指令,所述计算机指令被处理器运行时,完成基于深度学习的机器人自主探索方法所述的步骤。In a third aspect, an electronic device is proposed, comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the steps of the deep-learning-based robot autonomous exploration method are completed.

第四方面,提出了一种计算机可读存储介质,用于存储计算机指令,所述计算机指令被处理器执行时,完成基于深度学习的机器人自主探索方法所述的步骤。In a fourth aspect, a computer-readable storage medium is provided for storing computer instructions, and when the computer instructions are executed by a processor, the steps described in the deep learning-based robot autonomous exploration method are completed.

与现有技术相比,本公开的有益效果为:Compared with the prior art, the beneficial effects of the present disclosure are:

1、本公开根据边缘检测原理,通过U-net分割网络快速提取地图中已知区域与未知区域的边界,有助于提高边界提取的速度,获得最优目标点,有效解决了目标点的信息量小和区域局限性问题。1. Based on the edge-detection principle, the present disclosure quickly extracts the boundary between the known and unknown regions of the map through a U-net segmentation network, which speeds up boundary extraction, helps obtain the optimal target point, and effectively solves the problems of information-poor target points and regional limitation.

2、本公开在机器人坐标点和目标点之间建立了随机树,可同时解决全局路径规划与选取目标点的问题。2. The present disclosure builds a random tree between the robot's coordinate point and the target point, solving global path planning and target point selection at the same time.

3、本公开采用RRT算法选取目标点,有效解决了目标点距离障碍物过近,从而导致机器人无法到达的问题。3. The present disclosure uses the RRT algorithm to select the target point, effectively preventing the target point from lying so close to an obstacle that the robot cannot reach it.

本发明附加方面的优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本发明的实践了解到。Advantages of additional aspects of the invention will be set forth in part in the following description, will in part become apparent from it, or may be learned through practice of the invention.

附图说明Description of drawings

构成本申请的一部分的说明书附图用来提供对本申请的进一步理解,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。The accompanying drawings that form a part of the present application are used to provide further understanding of the present application, and the schematic embodiments and descriptions of the present application are used to explain the present application and do not constitute improper limitations on the present application.

图1为实施例1公开方法的框架图;Figure 1 is a framework diagram of the method disclosed in Embodiment 1;

图2为实施例1公开的中心为边界单元的地图栅格单元图;Figure 2 is a diagram of map grid cells whose center is a boundary cell, disclosed in Embodiment 1;

图3为实施例1公开的中心为非边界单元的地图栅格单元图;Figure 3 is a diagram of map grid cells whose center is a non-boundary cell, disclosed in Embodiment 1;

图4为实施例1公开的U-net深度学习网络结构图;Figure 4 is a structural diagram of the U-net deep learning network disclosed in Embodiment 1;

图5为实施例1公开的RRT算法流程图;Figure 5 is a flow chart of the RRT algorithm disclosed in Embodiment 1;

图6为实施例1公开的机器人选择目标点示意图;Figure 6 is a schematic diagram of the robot selecting a target point, disclosed in Embodiment 1;

图7为实施例1公开的机器人相邻时刻运动示意图;Figure 7 is a schematic diagram of the robot's motion at adjacent moments, disclosed in Embodiment 1;

图8为实施例1公开的机器人速度采样示意图;Figure 8 is a schematic diagram of the robot's velocity sampling, disclosed in Embodiment 1;

图9为实施例1公开的Cartographer-SLAM算法架构图。Figure 9 is an architecture diagram of the Cartographer SLAM algorithm disclosed in Embodiment 1.

其中:1、占据单元,2、空闲单元,3、未知单元,4、边界单元,5、非边界单元,6、全连接层,7、池化层,8、卷积层,9、地图数据流,10、障碍物,11、障碍物的膨胀区,12、机器人底座,13、目标点,14、候选点,15、全局路径,16、机器人。In the figures: 1, occupied cell; 2, free cell; 3, unknown cell; 4, boundary cell; 5, non-boundary cell; 6, fully connected layer; 7, pooling layer; 8, convolutional layer; 9, map data flow; 10, obstacle; 11, obstacle inflation zone; 12, robot base; 13, target point; 14, candidate point; 15, global path; 16, robot.

具体实施方式:Detailed Description:

下面结合附图与实施例对本公开作进一步说明。The present disclosure will be further described below with reference to the accompanying drawings and embodiments.

应该指出,以下详细说明都是例示性的,旨在对本申请提供进一步的说明。除非另有指明,本文使用的所有技术和科学术语具有与本申请所属技术领域的普通技术人员通常理解的相同含义。It should be noted that the following detailed description is exemplary and intended to provide further explanation of the application. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.

需要注意的是,这里所使用的术语仅是为了描述具体实施方式,而非意图限制根据本申请的示例性实施方式。如在这里所使用的,除非上下文另外明确指出,否则单数形式也意图包括复数形式,此外,还应当理解的是,当在本说明书中使用术语“包含”和/或“包括”时,其指明存在特征、步骤、操作、器件、组件和/或它们的组合。It should be noted that the terminology used herein is only for describing specific embodiments and is not intended to limit the exemplary embodiments of the present application. As used herein, unless the context clearly dictates otherwise, the singular forms are intended to include the plural forms as well; furthermore, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.

在本公开中,术语如“上”、“下”、“左”、“右”、“前”、“后”、“竖直”、“水平”、“侧”、“底”等指示的方位或位置关系为基于附图所示的方位或位置关系,只是为了便于叙述本公开各部件或元件结构关系而确定的关系词,并非特指本公开中任一部件或元件,不能理解为对本公开的限制。In the present disclosure, terms such as "upper", "lower", "left", "right", "front", "rear", "vertical", "horizontal", "side", and "bottom" indicate orientations or positional relationships based on those shown in the drawings. They are relational terms chosen only for convenience in describing the structural relationships among the components or elements of the present disclosure, do not refer to any specific component or element, and should not be construed as limiting the present disclosure.

本公开中,术语如“固接”、“相连”、“连接”等应做广义理解,表示可以是固定连接,也可以是一体地连接或可拆卸连接;可以是直接相连,也可以通过中间媒介间接相连。对于本领域的相关科研或技术人员,可以根据具体情况确定上述术语在本公开中的具体含义,不能理解为对本公开的限制。In the present disclosure, terms such as "fixedly attached", "connected", and "coupled" should be understood broadly: a connection may be fixed, integral, or detachable, and may be direct or indirect through an intermediate medium. Those skilled in the art can determine the specific meanings of these terms in the present disclosure according to the specific circumstances, and they should not be construed as limiting the present disclosure.

实施例1Example 1

为实现高精度的室内场景建图,在该实施例中公开了基于深度学习的机器人自主探索方法,包括:In order to achieve high-precision indoor scene mapping, a deep learning-based robot autonomous exploration method is disclosed in this embodiment, including:

获取机器人坐标点和构建的当前室内场景图;Obtain the robot's coordinate point and the currently constructed indoor scene map;

通过深度学习网络确定当前室内场景图的边界;Determine the boundary of the current indoor scene map through a deep learning network;

选择距离机器人坐标点最近的边界中心为候选点;Select the boundary center closest to the robot's coordinate point as the candidate point;

通过快速生长随机树算法在机器人坐标点与候选点之间生成树状路径;Generate a tree-shaped path between the robot's coordinate point and the candidate point through the rapidly-exploring random tree algorithm;

选取树状路径上的倒数第二个节点为目标点;Select the penultimate node on the tree-shaped path as the target point;

根据机器人坐标点与目标点生成全局路径;Generate a global path from the robot's coordinate point and the target point;

以设定速度驱动机器人沿全局路径移动,获取新的激光点云数据;Drive the robot along the global path at a set speed and acquire new laser point cloud data;

通过新的激光点云数据对当前室内场景图进行更新,完成室内场景建图。Update the current indoor scene map with the new laser point cloud data to complete the indoor scene mapping.

进一步的,构建当前室内场景图的具体过程为:Further, the specific process of constructing the current indoor scene map is:

获取当前时刻的激光点云数据和机器人的位姿数据;Acquire the laser point cloud data and the robot's pose data at the current moment;

根据机器人的位姿数据构建子图;Build submaps from the robot's pose data;

通过当前时刻的激光点云数据对构建的子图进行更新,获得当前室内场景图。Update the constructed submaps with the laser point cloud data at the current moment to obtain the current indoor scene map.

进一步的,机器人的位姿数据包括机器人坐标点、里程数据、姿态数据和加速度。Further, the robot's pose data include the robot's coordinate point, odometry data, attitude data, and acceleration.

进一步的,构建的当前室内场景图为二维占据栅格地图,根据栅格值将地图栅格单元划分为:空闲单元、未知单元和占据单元,当一个单元为空闲单元,且该单元的八个邻近单元中至少有一个未知单元时,该单元为边界单元,所有的边界单元组成边界。Further, the constructed current indoor scene map is a two-dimensional occupancy grid map whose cells are divided by grid value into free cells, unknown cells, and occupied cells. When a cell is free and at least one of its eight neighboring cells is unknown, the cell is a boundary cell; all boundary cells together form the boundary.

进一步的,通过快速生长随机树算法在机器人坐标点与候选点之间生成树状路径的具体过程为:Further, the specific process of generating a tree-shaped path between the robot's coordinate point and the candidate point through the rapidly-exploring random tree algorithm is:

在当前室内场景图中随机选取一点为采样点;Randomly select a point in the current indoor scene map as the sampling point;

选取距离采样点最近的节点为生长节点;Select the tree node closest to the sampling point as the growth node;

由生长节点向着采样点的方向生成一个新节点;Generate a new node from the growth node in the direction of the sampling point;

将符合条件的新节点加入树中,生成树状路径。Add qualifying new nodes to the tree to generate the tree-shaped path.

进一步的,机器人设定速度的选取过程为:Further, the robot's set speed is selected as follows:

获取机器人的运动速度区间;Obtain the robot's feasible velocity interval;

在运动速度区间内采样多组速度;Sample multiple velocity pairs within that interval;

计算每组速度下的机器人运动轨迹;Compute the robot's trajectory for each sampled velocity;

通过评价函数从机器人运动轨迹中选取最优运动轨迹;Select the optimal trajectory from these trajectories with an evaluation function;

该最优运动轨迹所对应的速度为驱动机器人运动的设定速度。The velocity corresponding to the optimal trajectory is the set speed that drives the robot.

进一步的,将机器人运动简化为直线运动,构建机器人的运动模型;Further, the motion of the robot is simplified to linear motion, and the motion model of the robot is constructed;

根据机器人的运动模型计算每组速度下的机器人运动轨迹。According to the motion model of the robot, the trajectory of the robot at each speed is calculated.

对本实施例公开的基于深度学习的机器人自主探索方法进行详细论述。The deep learning-based robot autonomous exploration method disclosed in this embodiment is discussed in detail.

基于深度学习的机器人自主探索方法,如图1所示,包括:The robot autonomous exploration method based on deep learning, as shown in Figure 1, includes:

S1:获取机器人坐标点和构建的当前室内场景图。S1: Obtain the robot's coordinate point and the currently constructed indoor scene map.

在具体实施时,如图9所示,采用Cartographer-SLAM建图算法构建室内场景图,该SLAM算法是一种基于图优化的SLAM算法,算法主要由Local SLAM和Global SLAM两部分构成。其中,Local SLAM通过接收滤波后的相邻几帧点云数据建立子图(Submap),每当获得一帧点云数据后,便与最近建立的子图进行匹配,将当前帧的点云数据插入到子图中(即scan_matching过程),通过不断插入新点云数据帧,实现对子图的更新。通过点云匹配所建立的子图及位姿估计在短时间内是可靠的,但是长时间会产生较大的累计误差,因此需要通过回环检测和后端优化(Global SLAM)进一步确定机器人的全局位姿和全局地图。In a specific implementation, as shown in Figure 9, the Cartographer SLAM algorithm is used to construct the indoor scene map. Cartographer is a graph-optimization-based SLAM algorithm consisting of two parts, Local SLAM and Global SLAM. Local SLAM builds submaps from several filtered adjacent frames of point cloud data: whenever a new frame of point cloud data is obtained, it is matched against the most recently built submap and inserted into it (the scan matching process), so the submap is updated as new point cloud frames are continuously inserted. The submaps and pose estimates obtained by point cloud matching are reliable over short periods but accumulate large errors over time, so the robot's global pose and the global map must be further refined through loop closure detection and back-end optimization (Global SLAM).

其中,构建当前室内场景图的具体过程为:The specific process of constructing the current indoor scene map is:

获取当前时刻的激光点云数据和机器人的位姿数据,其中机器人的位姿数据包括机器人坐标点、里程数据、姿态数据和加速度等感知数据;Acquire the laser point cloud data and the robot's pose data at the current moment, where the pose data include the robot's coordinate point, odometry data, attitude data, acceleration, and other sensory data;

根据机器人的位姿数据构建子图;Build submaps from the robot's pose data;

通过当前时刻的激光点云数据对构建的子图进行更新,获得当前室内场景图。Update the constructed submaps with the laser point cloud data at the current moment to obtain the current indoor scene map.

本实施例构建的室内场景图采用为二维占据栅格地图。The indoor scene map constructed in this embodiment is a two-dimensional occupancy grid map.

S2:确定当前室内场景图的边界,选择距离机器人坐标点最近的边界中心为候选点;通过快速生长随机树算法在机器人坐标点与候选点之间生成树状路径;选取树状路径上的倒数第二个节点为目标点。S2: Determine the boundary of the current indoor scene map and select the boundary center closest to the robot's coordinate point as the candidate point; generate a tree-shaped path between the robot's coordinate point and the candidate point through the rapidly-exploring random tree algorithm; select the penultimate node on the tree-shaped path as the target point.

在具体实施时,目标选择模块包含边界检测器(Frontier Detector)和全局随机树规划器(Global Planner),负责提取地图中的边界点和从边界点中选取一个最优的目标点,该目标点用于后续构建机器人的全局路径。In a specific implementation, the target selection module consists of a frontier detector and a global random-tree planner, which extract the boundary points of the map and select an optimal target point from them; this target point is then used to construct the robot's global path.

边界检测器用于快速确定当前室内场景图的边界,由于本实施例构建的室内场景图采用为二维占据栅格地图,占据栅格地图(Occupied Grid Map)是一种栅格地图,每个栅格都有一个栅格值,即为该栅格被占据的概率值,故通过在不同地图值的栅格填充不同颜色,即可得到表征环境特征的地图,如图2所示,根据栅格值的不同,可将地图栅格单元划分为空闲单元2(Free)、未知单元3(Unknown)、占据单元1(Occupied),本实施例将边界单元定义为符合以下条件的栅格集合:条件1:如图3所示,该边界单元4为空闲单元;条件2:如图2所示,该边界单元4的八个邻近栅格中至少有一个未知单元3。边界为一组边界单元的集合。通过使用U-net深度学习网络对地图中的边界进行快速提取。之所以要将地图的边界提取出来,是因为在地图的边界区域,包含大量的未知信息,前往这些未知区域可以获得更大信息增益。The frontier detector quickly determines the boundary of the current indoor scene map. The indoor scene map constructed in this embodiment is a two-dimensional occupancy grid map, in which each grid cell holds a value giving the probability that the cell is occupied, so filling cells with different colors according to their values yields a map representing the environment. As shown in Figure 2, grid cells are divided by value into free cells 2 (Free), unknown cells 3 (Unknown), and occupied cells 1 (Occupied). This embodiment defines boundary cells as the set of cells satisfying the following conditions. Condition 1: as shown in Figure 3, the boundary cell 4 is a free cell. Condition 2: as shown in Figure 2, at least one of the eight cells neighboring the boundary cell 4 is an unknown cell 3. A boundary is a set of boundary cells. The boundaries of the map are extracted quickly using a U-net deep learning network. The map's boundaries are extracted because the boundary regions contain a large amount of unknown information, and traveling to these unknown regions yields a greater information gain.
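The boundary-cell rule above can be sketched directly on a grid. The following is a minimal illustration, not the patent's U-net extractor (the network learns to produce this labeling quickly; here the rule is simply checked exhaustively), and the numeric encoding 0 = free, 1 = occupied, -1 = unknown is an illustrative assumption:

```python
# Minimal frontier-cell check for an occupancy grid stored as a list of
# lists. Encoding assumed for illustration: 0 = free, 1 = occupied,
# -1 = unknown (the patent does not fix numeric values).
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def is_boundary_cell(grid, r, c):
    """Condition 1: the cell is free. Condition 2: at least one of its
    eight neighbouring cells is unknown."""
    if grid[r][c] != FREE:
        return False
    rows, cols = len(grid), len(grid[0])
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == UNKNOWN:
                return True
    return False

def boundary(grid):
    """The boundary is the set of all boundary cells."""
    return [(r, c) for r in range(len(grid))
            for c in range(len(grid[0])) if is_boundary_cell(grid, r, c)]
```

For large maps this exhaustive scan is exactly the costly step that the embodiment replaces with a single U-net segmentation pass.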

U-net深度学习网络的结构如图4所示,包括池化层7、卷积层8和全连接层6,地图数据流9在池化层7、卷积层8和全连接层6中流转,确定地图的边界。The structure of the U-net deep learning network is shown in Figure 4; it consists of pooling layers 7, convolutional layers 8, and fully connected layers 6, through which the map data stream 9 flows to determine the boundaries of the map.

全局规划器(Global Planner)用于选取机器人当前要前往的目标点和生成机器人坐标点到目标点的全局路径。The global planner selects the target point the robot should currently head to and generates the global path from the robot's coordinate point to that target point.

其中,确定目标点的具体过程为:Among them, the specific process of determining the target point is as follows:

选择距离机器人坐标点最近的边界中心作为机器人当前要去的候选点;Select the boundary center closest to the robot coordinate point as the candidate point that the robot is currently going to;

通过快速生长随机树算法在机器人坐标点与候选点之间生成树状路径;Generate tree-like paths between robot coordinate points and candidate points through fast growing random tree algorithm;

选取树状路径上的倒数第二个节点为目标点。Select the penultimate node on the tree path as the target point.

快速生长随机树算法(RRT算法)是一种基于采样的树状算法,如图4所示,通过随机采样地图上的点为采样点,选取距离采样点最近的节点作为生长节点,然后由生长节点向着采样点的方向生成一个新节点,并将符合一定条件的新节点加入树的一部分。The rapidly-exploring random tree algorithm (RRT) is a sampling-based tree algorithm, as shown in Figure 4: a point on the map is randomly sampled, the tree node closest to the sampling point is selected as the growth node, a new node is then generated from the growth node in the direction of the sampling point, and new nodes meeting certain conditions are added to the tree.

如图5所示,RRT随机采样点的规则为:随机在规定空间内取一点和取目标点作为采样点交替进行,举例来说,假如本次RRT的采样点是在有限空间内随机取得的,下一次RRT的采样点将选在候选点处,如此反复迭代,这样做可以保证RRT最终会收敛至候选点14。As shown in Figure 5, the RRT sampling rule alternates between taking a random point within the specified space and taking the target point as the sample. For example, if the current RRT sample is taken at random within the bounded space, the next RRT sample is placed at the candidate point, and so on; this guarantees that the RRT eventually converges to the candidate point 14.

选取树状路径上倒数第二个节点为目标点13,进而根据该目标点确定全局路径15时,可以避免机器人16走入障碍物的膨胀区11,防止机器人底座12距离障碍物10过近。By selecting the penultimate node on the tree-shaped path as the target point 13 and then determining the global path 15 from that target point, the robot 16 is kept out of the obstacle inflation zone 11, preventing the robot base 12 from getting too close to the obstacle 10.
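A compact sketch of this procedure follows. It is a simplification under stated assumptions: sampling strictly alternates between a uniform random point and the candidate point (the alternation described above), the tree grows by a fixed step toward each sample, collision checking against the occupancy grid is omitted, and the step size and map bounds are illustrative:

```python
import math
import random

def rrt_target_point(start, candidate, bounds, step=0.5, max_iters=2000):
    """Grow a goal-biased RRT from `start` toward `candidate` and return
    the penultimate node of the resulting path as the target point.
    Collision checks are omitted in this sketch."""
    nodes = [start]
    parent = {start: None}
    for i in range(max_iters):
        # Alternate samples: a uniform random point, then the candidate point.
        if i % 2 == 0:
            sample = (random.uniform(0, bounds[0]), random.uniform(0, bounds[1]))
        else:
            sample = candidate
        # Growth node: the existing node nearest to the sample.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d < 1e-9:
            continue
        # New node: one bounded step from the growth node toward the sample.
        t = min(step / d, 1.0)
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        parent[new] = near
        nodes.append(new)
        if math.dist(new, candidate) <= step:   # tree has reached the candidate
            if new != candidate:
                parent[candidate] = new
            path, n = [], candidate
            while n is not None:                # walk back to the start
                path.append(n)
                n = parent[n]
            path.reverse()                      # start -> ... -> candidate
            return path[-2]                     # penultimate node = target point
    return None
```

Because the returned node is one step short of the candidate, it stays outside the neighborhood of the boundary where the candidate might sit inside an obstacle's inflation zone.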

S3:根据机器人坐标点与目标点生成全局路径。S3: Generate a global path based on robot coordinate points and target points.

在具体实施时,通过路径规划模块生成全局路径,路径规划模块包括全局规划器(Global Planner)和局部规划器(Local Planner)。In a specific implementation, the global path is generated by the path planning module, which comprises a global planner and a local planner.

其中,全局规划器在选择好最终的目标点后,根据机器人坐标点和目标点确定全局路径,局部规划期采用基于动态窗口的局部规划算法(DWA算法),使机器人沿着全局路径移动的同时,可以实时避开附近的障碍物,DWA算法的主要思想是通过在机器人的速度空间(v,w)中采样多组速度,然后模拟机器人在这些速度下一定时间间隔(sim_period)内的运动轨迹,得到多组运动轨迹后,通过评价函数对这些轨迹进行评价,选取一个最优轨迹所对应的速度(v_best,w_best)驱动机器人运动。After the final target point is chosen, the global planner determines the global path from the robot's coordinate point and the target point. The local planner uses a dynamic-window local planning algorithm (DWA), so that the robot can avoid nearby obstacles in real time while moving along the global path. The main idea of DWA is to sample multiple velocity pairs in the robot's velocity space (v, w), simulate the robot's trajectory at each sampled velocity over a fixed interval (sim_period), evaluate the resulting trajectories with an evaluation function, and select the velocity (v_best, w_best) corresponding to the optimal trajectory to drive the robot.

计算机器人运动轨迹的过程为:假设机器人在一个平面内运动,且机器人不是全向移动的,则相邻时刻内,机器人的运动可简化为一个直线运动。其中,机器人经过一段时间后的坐标和偏航角的运动模型为:The robot's trajectory is computed as follows: assuming the robot moves in a plane and is not omnidirectional, its motion between adjacent instants can be simplified to a straight line. The motion model giving the robot's coordinates and yaw angle after an interval is:

x_t = x + vΔt·cosα
y_t = y + vΔt·sinα
θ_t = θ + wΔt    (1)

其中,xt,yt,θt分别为机器人运动一段时间后的位置坐标和偏航角,x,y,θ分别为机器人当前的位置坐标和偏航角,v,w分别为机器人当前的线速度和角速度,Δt为运动时间间隔,α为机器人运动前后位置向量与x轴正半轴的夹角,如图7所示。where x_t, y_t, and θ_t are the robot's position coordinates and yaw angle after moving for the interval; x, y, and θ are its current position coordinates and yaw angle; v and w are its current linear and angular velocities; Δt is the motion time interval; and α is the angle between the robot's displacement vector over the interval and the positive x-axis, as shown in Figure 7.
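As a hedged illustration of this motion model, the sketch below advances the pose over one interval and rolls out a full trajectory by holding (v, w) constant, as the velocity-sampling step requires. Taking α equal to the current heading θ is an assumption made here for simplicity; the text only defines α geometrically as the angle of the displacement vector:

```python
import math

def step_pose(x, y, theta, v, w, dt):
    """One interval of the simplified straight-line motion model.
    alpha (the displacement direction) is assumed equal to the current
    heading theta in this sketch."""
    alpha = theta
    x_t = x + v * dt * math.cos(alpha)
    y_t = y + v * dt * math.sin(alpha)
    theta_t = theta + w * dt
    return x_t, y_t, theta_t

def rollout(pose, v, w, dt, steps):
    """Predict a trajectory by holding (v, w) fixed over `steps` intervals,
    as done when scoring sampled velocities."""
    traj = [pose]
    for _ in range(steps):
        pose = step_pose(*pose, v, w, dt)
        traj.append(pose)
    return traj
```

Each sampled velocity pair produces one such predicted trajectory, which is then scored by the evaluation function.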

如图8所示，机器人在实际运动过程中，其速度会受到多种约束，如电机最大转速、电机扭矩、最大制动距离以及与障碍物的安全距离等，基于以上约束，可以得到在该时刻下，机器人运动速度的区间，然后在此区间内，均匀采样多组速度，在一定时间间隔内(sim_period)机器人的速度保持不变。As shown in Figure 8, the robot's velocity during actual motion is subject to various constraints, such as the maximum motor speed, motor torque, maximum braking distance, and the safe distance to obstacles. Based on these constraints, the interval of admissible robot velocities at the current moment can be obtained; multiple sets of velocities are then sampled uniformly within this interval, with each sampled velocity held constant over the simulation interval (sim_period).
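The velocity-window construction described above can be sketched as follows, under the assumption that the constraints reduce to absolute speed limits plus acceleration limits over one control period; the `cfg` keys and the sample counts are hypothetical names chosen for illustration, not details from the patent.

```python
def dynamic_window(v, w, cfg):
    """Intersect the robot's absolute speed limits with the speeds the
    motors can reach within one control period (acceleration limits),
    giving the admissible window (v_min, v_max, w_min, w_max)."""
    # Hard limits of the platform.
    vs = (cfg["min_v"], cfg["max_v"], -cfg["max_w"], cfg["max_w"])
    # Reachable speeds from the current (v, w) within dt.
    vd = (v - cfg["max_acc_v"] * cfg["dt"],
          v + cfg["max_acc_v"] * cfg["dt"],
          w - cfg["max_acc_w"] * cfg["dt"],
          w + cfg["max_acc_w"] * cfg["dt"])
    return (max(vs[0], vd[0]), min(vs[1], vd[1]),
            max(vs[2], vd[2]), min(vs[3], vd[3]))

def sample_velocities(window, n_v=5, n_w=5):
    """Uniformly sample (v, w) pairs from the admissible window."""
    v_min, v_max, w_min, w_max = window
    samples = []
    for i in range(n_v):
        for j in range(n_w):
            v = v_min + (v_max - v_min) * i / max(n_v - 1, 1)
            w = w_min + (w_max - w_min) * j / max(n_w - 1, 1)
            samples.append((v, w))
    return samples
```

In a full implementation the window would additionally be pruned by the braking-distance and obstacle-clearance constraints mentioned above.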

通过速度采样获得一组速度及其对应的预测的运动轨迹后，采用以下评价函数对多条运动轨迹进行评价：After a set of velocities and their corresponding predicted motion trajectories are obtained through velocity sampling, the following evaluation function is used to evaluate the candidate trajectories:

G(v,w)=w1Δθ(Δt)+w2Dnearest+w3v (2)G(v, w)=w 1 Δθ(Δt)+w 2 D nearest +w 3 v (2)

其中，wi为权重系数，Δθ(Δt)为机器人在一段时间内的偏航角变化量，Dnearest为机器人与最近障碍物的距离，v为机器人当前的速度。通过评价函数筛选出一条使G(v,w)最大的路径，为最优运动路径，并选取对应的速度驱动机器人，完成局部路径规划。Here w_i are weight coefficients, Δθ(Δt) is the change in the robot's yaw angle over the time interval, D_nearest is the distance from the robot to the nearest obstacle, and v is the robot's current speed. The evaluation function selects the path that maximizes G(v, w) as the optimal motion path, and the corresponding velocity is used to drive the robot, completing the local path planning.
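Scoring the sampled trajectories with Eq. (2) and picking (v_best, w_best) can be sketched as below; each candidate carries the precomputed Δθ(Δt) and D_nearest terms for its trajectory, and the equal weights are an illustrative default, not values from the patent.

```python
def select_best_velocity(candidates, weights=(1.0, 1.0, 1.0)):
    """Score each candidate with G(v, w) from Eq. (2) and return the
    (v, w) pair that maximises it.

    Each candidate is a tuple (v, w, d_theta, d_nearest):
      d_theta   -- yaw-angle change term Δθ(Δt) for that trajectory
      d_nearest -- clearance to the nearest obstacle along it
    """
    w1, w2, w3 = weights
    best = max(candidates,
               key=lambda c: w1 * c[2] + w2 * c[3] + w3 * c[0])
    return best[0], best[1]  # (v_best, w_best)
```

The selected pair is then sent to the velocity controller for the next control period, after which the window is recomputed and the cycle repeats.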

S4:通过选定的最优运动轨迹所对应的速度驱动机器人沿全局路径移动，获取新的激光点云数据；通过新的激光点云数据对当前室内场景图进行更新，完成室内场景建图。S4: Drive the robot along the global path at the velocity corresponding to the selected optimal motion trajectory, and acquire new laser point cloud data; update the current indoor scene graph with the new laser point cloud data to complete the indoor scene mapping.

本实施例公开方法能够通过深度学习的方法快速提取地图中已知区域与未知区域的边界，并选取候选点，然后通过快速生长随机树算法在机器人坐标点与候选点之间生成一种树状路径，进一步选定树状路径上的倒数第二个节点为目标点，最终在机器人和该目标点间生成全局路径，有效解决了目标点距离障碍物过近，从而导致机器人无法到达的问题，通过在机器人坐标与目标点之间建立随机树，可同时解决全局路径规划与选取目标点问题，根据边缘检测原理，可提取地图中已知区域与未知区域的边界，有助于获得最优目标点，有效解决了目标点的信息量小和区域局限性问题。采用的Cartographer-SLAM建图算法，通过激光点云与子图之间的匹配进行回环检测，可以实现高精度的室内场景建图。The method disclosed in this embodiment can quickly extract the boundary between the known and unknown areas of the map via deep learning and select candidate points; it then generates a tree-shaped path between the robot's coordinate point and a candidate point using the rapidly-exploring random tree algorithm, selects the penultimate node on that tree-shaped path as the target point, and finally generates a global path between the robot and the target point. This effectively solves the problem of the robot being unable to reach a target point that lies too close to an obstacle. By building a random tree between the robot's coordinates and the target point, global path planning and target-point selection are solved simultaneously. Following the edge-detection principle, the boundary between known and unknown areas of the map can be extracted, which helps to obtain the optimal target point and effectively addresses the problems of low information content and local bias in target points. The adopted Cartographer SLAM mapping algorithm performs loop-closure detection by matching laser point clouds against submaps, achieving high-precision indoor scene mapping.
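The tree growth and penultimate-node selection summarized above can be sketched with a basic rapidly-exploring random tree. This is a minimal sketch only: the goal bias, step size, sampling bounds and the `is_free` collision check against the occupancy grid are all illustrative assumptions, not details taken from the patent.

```python
import math
import random

def grow_rrt(start, goal, is_free, step=0.5, max_iters=2000, goal_tol=0.5):
    """Grow a rapidly-exploring random tree from the robot position
    toward a frontier candidate point and return the penultimate node
    on the branch that reaches it as the exploration target."""
    parents = {start: None}
    nodes = [start]
    for _ in range(max_iters):
        # Occasionally sample the candidate point itself (goal bias).
        sample = goal if random.random() < 0.1 else (
            random.uniform(-10.0, 10.0), random.uniform(-10.0, 10.0))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Step a fixed distance from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if new in parents or not is_free(new):
            continue  # rejected: duplicate or collides with an obstacle
        parents[new] = near
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Reconstruct the branch start -> ... -> new.
            branch = []
            n = new
            while n is not None:
                branch.append(n)
                n = parents[n]
            branch.reverse()
            # The penultimate node keeps the target away from obstacles
            # hugging the frontier.
            return branch[-2] if len(branch) >= 2 else branch[-1]
    return None
```

The returned node then serves as the target point handed to the global planner.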

实施例2Example 2

在该实施例中,公开了基于深度学习的机器人自主探索系统,包括:In this embodiment, a robot autonomous exploration system based on deep learning is disclosed, including:

数据获取模块,用于获取机器人坐标点和构建的当前室内场景图;The data acquisition module is used to acquire the coordinate points of the robot and the current indoor scene graph constructed;

目标选择模块，通过深度学习网络确定当前室内场景图的边界；选择距离机器人坐标点最近的边界中心为候选点；通过快速生长随机树算法在机器人坐标点与候选点之间生成树状路径；选取树状路径上的倒数第二个节点为目标点；The target selection module determines the boundary of the current indoor scene graph through a deep learning network, selects the boundary center closest to the robot's coordinate point as the candidate point, generates a tree-shaped path between the robot's coordinate point and the candidate point through the rapidly-exploring random tree algorithm, and selects the penultimate node on the tree-shaped path as the target point;

路径规划模块,用于根据机器人坐标点与目标点生成全局路径;The path planning module is used to generate a global path according to the robot coordinate points and target points;

机器人驱动模块,用于以设定速度驱动机器人沿全局路径移动,获取新的激光点云数据;The robot drive module is used to drive the robot to move along the global path at a set speed to obtain new laser point cloud data;

建图模块,用于通过新的激光点云数据对当前室内场景图进行更新,完成室内场景建图。The mapping module is used to update the current indoor scene graph through the new laser point cloud data to complete the indoor scene mapping.

实施例3Example 3

在该实施例中，公开了一种电子设备，包括存储器和处理器以及存储在存储器上并在处理器上运行的计算机指令，所述计算机指令被处理器运行时，完成实施例1公开的基于深度学习的机器人自主探索方法所述的步骤。In this embodiment, an electronic device is disclosed, comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the steps of the deep-learning-based robot autonomous exploration method disclosed in Embodiment 1 are completed.

实施例4Example 4

在该实施例中，公开了一种计算机可读存储介质，用于存储计算机指令，所述计算机指令被处理器执行时，完成实施例1公开的基于深度学习的机器人自主探索方法所述的步骤。In this embodiment, a computer-readable storage medium is disclosed for storing computer instructions; when the computer instructions are executed by a processor, the steps of the deep-learning-based robot autonomous exploration method disclosed in Embodiment 1 are completed.

最后应当说明的是：以上实施例仅用以说明本发明的技术方案而非对其限制，尽管参照上述实施例对本发明进行了详细的说明，所属领域的普通技术人员应当理解：依然可以对本发明的具体实施方式进行修改或者等同替换，而未脱离本发明精神和范围的任何修改或者等同替换，其均应涵盖在本发明的权利要求保护范围之内。Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that the specific embodiments of the present invention may still be modified or equivalently replaced; any modification or equivalent replacement that does not depart from the spirit and scope of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. The robot autonomous exploration method based on deep learning is characterized by comprising the following steps:
acquiring a coordinate point of the robot and a constructed current indoor scene graph;
determining the boundary of the current indoor scene graph through a deep learning network;
selecting a boundary center closest to a coordinate point of the robot as a candidate point;
generating a tree-shaped path between the robot coordinate point and the candidate point by a fast growing random tree algorithm;
selecting the penultimate node on the tree-shaped path as a target point;
generating a global path according to the robot coordinate points and the target point;
driving the robot to move along the global path at a set speed to acquire new laser point cloud data;
and updating the current indoor scene graph through the new laser point cloud data to complete indoor scene graph building.
2. The deep learning-based robot autonomous exploration method according to claim 1, wherein a specific process of constructing a current indoor scene graph is as follows:
acquiring laser point cloud data of the current moment and pose data of the robot;
constructing subgraphs according to the pose data of the robot;
and updating the constructed subgraph through the laser point cloud data at the current moment to obtain a current indoor scene graph.
3. The deep learning-based robot autonomous exploration method according to claim 2, characterized in that the pose data of the robot comprises robot coordinate points, odometry data, attitude data and acceleration.
4. The deep learning-based robot autonomous exploration method according to claim 1, characterized in that the constructed current indoor scene graph is a two-dimensional occupancy grid map, and the grid cells of the map are divided into: the cell comprises a free cell, an unknown cell and an occupied cell, wherein when one cell is the known cell and at least one unknown cell is in eight adjacent cells of the cell, the cell is a boundary cell, and all boundary cells form a boundary.
5. The deep learning-based robot autonomous exploration method according to claim 1, characterized in that the specific process of generating a tree-like path between a robot coordinate point and a candidate point by a fast growing random tree algorithm is as follows:
randomly selecting a point in a current indoor scene graph as a sampling point;
selecting a node closest to the sampling point as a growth node;
generating a new node from the growing node towards the direction of the sampling point;
and adding the new nodes meeting the conditions into the tree to generate a tree-shaped path.
6. The deep learning-based robot autonomous exploration method according to claim 1, wherein the robot set speed is selected by the following process:
acquiring a motion speed interval of the robot;
sampling a plurality of groups of speeds in a movement speed interval;
calculating the motion track of the robot at each group of speeds;
selecting an optimal motion track from the motion tracks of the robot through an evaluation function;
the speed corresponding to the optimal motion track is the set speed for driving the robot to move.
7. The deep learning-based robot autonomous exploration method according to claim 6, characterized in that the robot motion is simplified into linear motion and a motion model of the robot is constructed;
and the motion trajectory of the robot at each group of speeds is calculated according to the motion model of the robot.
8. The robot autonomous exploration system based on deep learning is characterized by comprising:
the data acquisition module is used for acquiring coordinate points of the robot and a constructed current indoor scene graph;
the target selection module is used for determining the boundary of the current indoor scene graph through a deep learning network; selecting a boundary center closest to a coordinate point of the robot as a candidate point; generating a tree-shaped path between the robot coordinate point and the candidate point by a fast growing random tree algorithm; selecting the penultimate node on the tree-shaped path as a target point;
the path planning module is used for generating a global path according to the robot coordinate points and the target point;
the robot driving module is used for driving the robot to move along the global path at a set speed to acquire new laser point cloud data;
and the mapping module is used for updating the current indoor scene map through the new laser point cloud data to complete indoor scene mapping.
9. An electronic device comprising a memory and a processor, and computer instructions stored on the memory and executed on the processor, the computer instructions, when executed by the processor, performing the steps of the deep learning based robotic autonomous exploration method of any of claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the deep learning based robotic autonomous exploration method of any of claims 1-7.
CN202210504426.5A 2022-05-10 2022-05-10 Robot autonomous exploration method and system based on deep learning Active CN114740866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210504426.5A CN114740866B (en) 2022-05-10 2022-05-10 Robot autonomous exploration method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210504426.5A CN114740866B (en) 2022-05-10 2022-05-10 Robot autonomous exploration method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN114740866A true CN114740866A (en) 2022-07-12
CN114740866B CN114740866B (en) 2025-05-02

Family

ID=82284794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210504426.5A Active CN114740866B (en) 2022-05-10 2022-05-10 Robot autonomous exploration method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN114740866B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109341707A (en) * 2018-12-03 2019-02-15 南开大学 Construction method of 3D map for mobile robot in unknown environment
CN112325884A (en) * 2020-10-29 2021-02-05 广西科技大学 A DWA-based local path planning method for ROS robot
CN113110522A (en) * 2021-05-27 2021-07-13 福州大学 Robot autonomous exploration method based on composite boundary detection
CN113110457A (en) * 2021-04-19 2021-07-13 杭州视熵科技有限公司 Autonomous coverage inspection method for intelligent robot in indoor complex dynamic environment
CN113467456A (en) * 2021-07-07 2021-10-01 中国科学院合肥物质科学研究院 Path planning method for specific target search in unknown environment
CN113485375A (en) * 2021-08-13 2021-10-08 苏州大学 Indoor environment robot exploration method based on heuristic bias sampling
CN113625721A (en) * 2021-08-19 2021-11-09 东北大学 Autonomous exploration planning method for unknown space
CN113741523A (en) * 2021-09-08 2021-12-03 北京航空航天大学 Hybrid unmanned aerial vehicle autonomous detection method based on boundary and sampling
CN113805590A (en) * 2021-09-23 2021-12-17 云南民族大学 Indoor robot autonomous exploration method and system based on boundary driving
CN114119920A (en) * 2021-10-29 2022-03-01 北京航空航天大学杭州创新研究院 Three-dimensional point cloud map construction method and system
CN114137955A (en) * 2021-10-26 2022-03-04 中国人民解放军军事科学院国防科技创新研究院 Multi-robot rapid collaborative map building method based on improved market method

Non-Patent Citations (1)

Title
张澳等: "基于差分轮AGV的激光SLAM导航算法研究", 《机电技术》, 28 February 2022 (2022-02-28), pages 4 *

Also Published As

Publication number Publication date
CN114740866B (en) 2025-05-02

Similar Documents

Publication Publication Date Title
CN114384920B (en) Dynamic obstacle avoidance method based on real-time construction of local grid map
Xuexi et al. SLAM algorithm analysis of mobile robot based on lidar
Zhou et al. Survey on path and view planning for UAVs
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
CN103278170B (en) Based on mobile robot's cascade map creating method that remarkable scene point detects
CN111610786B (en) Path Planning Method for Mobile Robot Based on Improved RRT Algorithm
CN110084272A (en) A kind of cluster map creating method and based on cluster map and the matched method for relocating of location expression
Respall et al. Fast sampling-based next-best-view exploration algorithm for a MAV
CN106959700B (en) A UAV Swarm Cooperative Patrol Tracking Track Planning Method Based on Upper Confidence Interval Algorithm
CN113110455B (en) Multi-robot collaborative exploration method, device and system for unknown initial state
CN113587933A (en) Indoor mobile robot positioning method based on branch-and-bound algorithm
CN110320919B (en) Method for optimizing path of mobile robot in unknown geographic environment
CN113741523B (en) A hybrid UAV autonomous detection method based on boundary and sampling
CN115639823A (en) Terrain sensing and movement control method and system for robot under rugged and undulating terrain
CN112857370A (en) Robot map-free navigation method based on time sequence information modeling
CN117170406B (en) A fast autonomous search method for UAV based on hierarchical planning
Li et al. Object-aware view planning for autonomous 3-D model reconstruction of buildings using a mobile robot
CN118640910A (en) A ROS unmanned vehicle mapping and navigation method in unstructured environments
CN114578824A (en) Unknown environment autonomous exploration method suitable for air-ground dual-purpose robot
CN111123953A (en) Particle-based mobile robot group under artificial intelligence big data and control method thereof
Zhang et al. FGIP: A frontier-guided informative planner for UAV exploration and reconstruction
Zhao et al. A study of the global topological map construction algorithm based on grid map representation for multirobot
CN109885082A (en) A mission-driven approach to UAV trajectory planning
CN115690343A (en) Robot laser radar scanning and mapping method based on visual following
CN110717260A (en) Unmanned aerial vehicle maneuvering capability model establishing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant