
CN111461023B - Method for quadruped robot to automatically follow pilot based on three-dimensional laser radar - Google Patents


Info

Publication number
CN111461023B
Authority
CN
China
Prior art keywords
point cloud
point
pilot
cluster
navigator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010252915.7A
Other languages
Chinese (zh)
Other versions
CN111461023A (en)
Inventor
李贻斌
另佳
柴汇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202010252915.7A priority Critical patent/CN111461023B/en
Publication of CN111461023A publication Critical patent/CN111461023A/en
Application granted granted Critical
Publication of CN111461023B publication Critical patent/CN111461023B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A method for a quadruped robot to autonomously follow a navigator based on three-dimensional lidar, comprising the following steps: (1) acquire raw three-dimensional point cloud data of the environment with a 3D lidar; (2) preprocess the raw point cloud; (3) perform Euclidean-distance-based segmentation and clustering on the extracted non-ground points to obtain a list of point cloud clusters of objects on the ground; (4) screen the segmented clusters for candidate human-body target clusters, and determine the position of the navigator, who wears a reflective marker, according to a set laser reflection intensity threshold; (5) after obtaining the coordinates of the navigator's position at each moment, output speed and heading commands to the quadruped robot so that it follows the navigator autonomously. The omnidirectional three-dimensional laser point cloud acquired by the lidar contains rich environmental information and is not easily disturbed by lighting; in field environments the method accurately identifies the navigator and lets the quadruped robot follow autonomously, with high real-time performance and a low misjudgment rate.

Description

Method for a quadruped robot to autonomously follow a navigator based on three-dimensional laser radar

Technical Field

The present invention provides a method for realizing autonomous following of a navigator by a quadruped robot. It belongs to the technical field of mobile robot environment perception, and its main technology involves the processing of laser point cloud data.

Background Art

Structurally, mobile robots fall into three categories: wheeled, legged, and tracked. Legged robots need only discrete footholds during locomotion, so they adapt better to rugged unstructured environments than mobile robots of other structures and offer superior mobility and environmental adaptability.

As the most common legged robot, the quadruped robot has a walking structure close to that of most mammals in nature. Based on these structural characteristics, many research institutions have studied the motion control of quadruped robots, but as application scenarios expand, further research has focused on their environment perception and autonomous behavior. Large quadruped robots are often intended for military use: in complex field environments, a load-carrying quadruped robot that can identify and track a navigator reduces the operator's control burden and the robot's own path-planning requirements, allowing it to walk around obstacles and reach a designated destination.

Regarding sensor selection, infrared cameras, depth cameras, ToF cameras and other optical sensors can all serve as the main equipment for robot environment perception. Among them, the rise of three-dimensional optical ranging systems, represented by lidar, has opened many new research possibilities in robotics. The omnidirectional 3D lidars now common on the market offer strong anti-interference capability, small size and low power consumption, and acquire rich three-dimensional point cloud information through high-frequency scanning.

The premise for autonomous behavior by a quadruped robot, including autonomously following a navigator, is the ability to identify objects in a dynamically changing environment effectively and in real time. Sensors based on visual image processing can also acquire environmental information, but the images such devices produce are easily affected by lighting changes and are nearly unusable in dark scenes.

Summary of the Invention

To address the shortcomings of the prior art, the present invention provides a method for a quadruped robot to autonomously follow a navigator based on three-dimensional lidar, so that the robot can follow a designated navigator fully autonomously and thereby avoid obstacles, trenches and other unstructured terrain. The invention uses a 16-line 3D lidar as the data acquisition device, processes the laser point cloud to accurately identify the navigator, and issues motion control commands so that the quadruped robot follows the navigator target safely and smoothly.

The method of the present invention for a quadruped robot to autonomously follow a navigator based on three-dimensional lidar comprises the following steps:

(1) Use a 16-line 3D lidar as the sensor to obtain raw three-dimensional point cloud data of the environment.

(2) Preprocess the raw point cloud with filtering operations; preprocessing also removes the ground point cloud so as to retain the non-ground points.

(3) Perform Euclidean-distance-based segmentation and clustering on the extracted non-ground points to obtain a list of point cloud clusters of objects on the ground.

(4) Screen the segmented clusters with an appearance classifier to obtain candidate human-body target clusters, and determine the position of the navigator, who wears a reflective marker, according to a set laser reflection intensity threshold.

(5) Use a Kalman filter to associate moving objects at adjacent moments so as to keep accurate track of the navigator; after obtaining the coordinates of the navigator's position at each moment, output speed and heading commands to the quadruped robot so that it follows the navigator autonomously.

In step (2), the point cloud preprocessing is as follows:

Assume the point cloud cluster D_T follows a normal distribution N(μ_f, σ_f). The filtering stage traverses the cloud twice: the first pass computes, for every point, the distribution parameters μ_f (mean) and σ_f (variance) of the neighbour distances, and the second pass filters out the points that do not satisfy

Ω_F = { P_i ∈ D_T | d_f ≤ μ_f + K_f·σ_f },

where Ω_F is the filtered point cloud, P_i^n is a neighbour of P_i, d_f is the distance from P_i^n to P_i, and K_f is a given coefficient. For any point in the cloud, if its original reflection intensity is η(P_i), a new intensity value η′(P_i) is computed by bilateral filtering over the intensities of its neighbouring points.

After filtering, noise is removed. The result is then processed in segments: in each segmented point set, the average of the lowest-height points is taken as the lowest point representative (LPR), which serves as the initial seed for plane model estimation. The plane is estimated with the simplest linear model ax + by + cz + d = 0, and the normal n is solved from the covariance matrix computed over the seed point set S,

C = (1/|S|) Σ (s_i − ŝ)(s_i − ŝ)^T,

where ŝ is the mean of the seed points; n is the eigenvector corresponding to the smallest eigenvalue of C.

In step (3), the Euclidean-distance-based point cloud clustering is as follows:

If the clustering threshold is set to a very small value, a single real object is split into several smaller objects; if it is set too large, several real objects are merged into one.

An adaptive distance threshold S_d is set as a function of the point (or cluster-centre) coordinates, with a, b and c as adjustment parameters whose values are chosen according to the angular accuracy of the lidar: the higher the angular accuracy, the smaller they should be. S_d varies with the coordinates of the data points; in other words, whenever the Euclidean distance between a data point or cluster and its neighbouring points or clusters is computed, S_d adjusts its value according to the centre coordinates of that point or cluster.

The appearance classifier in step (4) is implemented as follows:

The appearance-based classifier assigns each object of interest to one of two categories, human or non-human. To this end, each non-ground point cloud cluster C_k (obtained after removing the ground points) is first converted into a compact six-dimensional feature vector x_k = [h_k, ω_k, d_k, ν_k, λ_k, ρ_k]^T.

① Human-body contour feature extraction:

This part extracts three features related to the human-body contour: cluster height, aspect ratio, and distribution symmetry. Screening the clusters on these three features yields the clusters with human-contour characteristics.

While the navigator is moving, partial occlusion of the torso or sensor interference can leave the navigator's cluster suspended above the ground in the captured point cloud. The height h_k of cluster C_k is therefore determined as

h_k = max{ z_i : P_i ∈ C_k },

i.e. the height of the highest point of the cluster above the ground plane.

The aspect ratio r_k = h_k/ω_k of a cluster is one feature for identifying the navigator, but real scenes may contain interfering non-ground objects such as fire hydrants. The skewness

γ = (1/n) Σ_{i=1..n} ((z_i − μ)/σ)^3

is therefore introduced to describe the asymmetry of the distribution, where μ and σ are respectively the mean and standard deviation of the distribution of cluster C_k.

② Pose estimation by principal component analysis:

The previous step preliminarily identifies human-body target clusters through the new height computation, the aspect ratio and the introduced skewness. Note also that the navigator stays upright while walking, so a navigator cluster produces a principal eigenvector that is almost vertical. Principal component analysis (PCA) is therefore used to estimate the pose of each candidate cluster and screen the navigator target cluster more precisely. The covariance matrix C of a point p and its neighbourhood points p_i (i = 1..n, with mean p̄) is computed as

C = (1/n) Σ_{i=1..n} (p_i − p̄)(p_i − p̄)^T,

and is factorized by eigenvalue decomposition C = V Λ V^T. The eigenvalues λ and eigenvectors ν are sorted; the eigenvector ν_min corresponding to the smallest eigenvalue λ_min is the normal vector at point p.

The appearance-based classifier can already extract the navigator target fairly reliably, but environmental interference and sensor measurement noise may still leave a few misjudged clusters at the end of this step. The navigator therefore wears a reflective marker at the waist; using the laser intensity returned by the sensor, the cluster of maximum intensity is selected among the targets above the intensity threshold, giving the precise navigator position.

The lidar used in the present invention is small and consumes little power, so it is easily integrated on a quadruped robot platform. The omnidirectional 3D laser point cloud it acquires contains rich environmental information and is not easily disturbed by lighting. With the above scheme, the navigator can be accurately identified in field environments and the quadruped robot can follow autonomously, with high real-time performance and a low misjudgment rate.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flow chart of the method of the present invention for a quadruped robot to autonomously follow a navigator based on three-dimensional lidar.

FIG. 2 is a flow chart of identifying human targets by point cloud clustering in the present invention.

DETAILED DESCRIPTION

The method selects a VLP-16 16-line lidar, a lidar sensor providing three-dimensional coordinate information, as the point cloud acquisition device, and on this basis provides an autonomous navigator-following method suited to a quadruped robot platform. In robot environment perception research based on laser scan data, some institutions use several 32-line or 64-line lidars to obtain dense, rich point clouds for target recognition and, from that, motion control strategies. A quadruped robot platform, however, involves no high-speed motion scenarios, so a 16-line lidar fully meets the real-time requirements while lowering the cost of the environment perception system.

The 16-line lidar is mounted on the head of the quadruped robot about 1 m above the ground; the navigator wears a reflective marker at the waist and walks in front of the robot. The point cloud input module obtains raw point cloud data after coordinate transformation and parameter calibration; the preprocessing stage filters the cloud and segments out the ground points; the target recognition and tracking stage clusters the non-ground points and extracts features to identify the exact navigator position. Motion control commands issued from that position let the quadruped robot follow the navigator smoothly, safely and autonomously, so that it can carry out combat missions in field environments.

With reference to the flow chart of FIG. 1, the method is divided into four modules: point cloud input, point cloud preprocessing, target recognition and tracking, and robot motion control. The technical solution of a specific embodiment of the invention is described clearly and completely below.

1. Point cloud data acquisition

The lidar system consists of a laser ranging and scanning unit and a positioning system (POS) comprising an integrated differential global positioning system (DGPS) and an inertial navigation system (INS). The ranging unit estimates the distance from the sensor to the measured surface from the time delay between emission of a laser pulse and its detection. The 16-line lidar chosen here has fewer beams than 32- or 64-line lidars, but its resolution and sampling frequency fully satisfy the real-time requirements of this method, at a far lower cost.

Extrinsic calibration is required before the raw point cloud acquired by the lidar can be used. According to the field-of-view requirements of the experimental platform, the lidar is mounted on the head of the quadruped robot about 1 m above the ground. Extrinsic calibration determines the translation and rotation from the lidar coordinate system to the robot coordinate system: given the mounting position, the offset in each direction can be measured directly within the allowable error, and the rotation matrix is obtained as the orthonormal matrix of the three-dimensional mounting attitude, as shown in the sketch below.
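
A minimal Python (NumPy) sketch of this extrinsic transform, assuming measured roll/pitch/yaw mounting angles and translation offsets; the function names and example values are illustrative, not taken from the patent:

```python
import numpy as np

def extrinsic_matrix(roll, pitch, yaw, tx, ty, tz):
    """Build the 4x4 lidar-to-robot transform from measured mounting
    angles (rad) and offsets (m). Rotation = Rz @ Ry @ Rx."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx          # orthonormal rotation matrix
    T[:3, 3] = [tx, ty, tz]           # directly measured offsets
    return T

def to_robot_frame(points, T):
    """points: (N,3) array in the lidar frame -> (N,3) in the robot frame."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homo.T).T[:, :3]

# Example: lidar mounted level on the head, ~1 m above the ground plane
T = extrinsic_matrix(0.0, 0.0, 0.0, 0.3, 0.0, 1.0)
```

With a level mounting, the rotation reduces to the identity and only the measured offsets matter.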

2. Point cloud preprocessing

Point cloud filtering is the basis of preprocessing; here, statistics-based filtering is used while the reflection intensity information needed by later stages is preserved. Assume the point cloud cluster D_T follows a normal distribution N(μ_f, σ_f). The filtering stage traverses the cloud twice. The first pass computes, for every point, the distribution parameters μ_f (mean) and σ_f (variance) of the neighbour distances. The second pass filters out the points that do not satisfy

Ω_F = { P_i ∈ D_T | d_f ≤ μ_f + K_f·σ_f },

where Ω_F is the filtered point cloud, P_i^n is a neighbour of P_i, d_f is the distance from P_i^n to P_i, and K_f is a given coefficient.

For the filtering scenario of this invention, not only the spatial position of each point but also its reflection intensity must be considered, so a bilateral filtering algorithm is applied. For any point P_i in the cloud with original reflection intensity η(P_i), its new intensity η′(P_i) is the bilaterally weighted average over its neighbours P_j:

η′(P_i) = Σ_j w_s·w_r·η(P_j) / Σ_j w_s·w_r,

where w_s = w_s(‖P_i − P_j‖) weighs the spatial distance and w_r = w_r(|η(P_i) − η(P_j)|) weighs the intensity difference. A sketch combining the statistical rejection pass with this intensity update follows.
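
A sketch of the two-pass statistical rejection combined with the bilateral intensity update, assuming Gaussian spatial and range kernels; the neighbourhood size k, the coefficient Kf and the kernel widths are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_cloud(pts, intensity, k=8, Kf=1.0, sigma_s=0.3, sigma_r=10.0):
    """pts: (N,3), intensity: (N,). Pass 1: per-point mean neighbour
    distance and its global statistics; pass 2: drop points with
    d_f > mu_f + Kf*sigma_f. Kept points get a bilaterally smoothed
    intensity (Gaussian kernels are an assumed choice)."""
    tree = cKDTree(pts)
    dists, idx = tree.query(pts, k=k + 1)   # first neighbour is the point itself
    d_f = dists[:, 1:].mean(axis=1)         # per-point mean neighbour distance
    mu_f, sigma_f = d_f.mean(), d_f.std()   # pass 1: distribution parameters
    keep = d_f <= mu_f + Kf * sigma_f       # pass 2: statistical rejection

    new_int = intensity.copy()
    for i in np.flatnonzero(keep):
        nbr = idx[i, 1:]
        w_s = np.exp(-dists[i, 1:] ** 2 / (2 * sigma_s ** 2))   # spatial kernel
        w_r = np.exp(-(intensity[nbr] - intensity[i]) ** 2
                     / (2 * sigma_r ** 2))                      # intensity kernel
        w = w_s * w_r
        new_int[i] = (w * intensity[nbr]).sum() / w.sum()
    return pts[keep], new_int[keep]
```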

Filtering yields a laser point cloud with noise removed but most environmental and reflection intensity information preserved. The result is then processed in segments: in each segmented point set, the average of the lowest-height points is taken as the lowest point representative (LPR) and used as the initial seed for plane model estimation. The plane is estimated with the simplest linear model ax + by + cz + d = 0, and the normal n is solved from the covariance matrix computed over the seed point set S,

C = (1/|S|) Σ (s_i − ŝ)(s_i − ŝ)^T,

where ŝ is the seed mean; n is the eigenvector of the smallest eigenvalue of C. A sketch of this ground extraction is given below.
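
A sketch of the LPR-seeded ground-plane extraction under the linear model above; the seed margin, distance threshold and iteration count are assumed values:

```python
import numpy as np

def extract_ground(pts, n_lpr=20, seed_margin=0.2, dist_thresh=0.1, iters=3):
    """Ground-plane fit in the spirit of the LPR/seed scheme described
    above; n_lpr, seed_margin and dist_thresh are illustrative."""
    z = pts[:, 2]
    lpr = np.sort(z)[:n_lpr].mean()            # lowest point representative
    seeds = pts[z < lpr + seed_margin]         # initial seed set
    for _ in range(iters):
        mean = seeds.mean(axis=0)
        cov = (seeds - mean).T @ (seeds - mean) / len(seeds)  # seed covariance
        eigval, eigvec = np.linalg.eigh(cov)   # ascending eigenvalues
        n = eigvec[:, 0]                       # normal = smallest-eigenvalue vector
        d = -n @ mean                          # plane: n.x + d = 0
        dist = np.abs(pts @ n + d)
        seeds = pts[dist < dist_thresh]        # refine the seed set
    ground = np.abs(pts @ n + d) < dist_thresh
    return pts[~ground], pts[ground]           # non-ground kept for clustering
```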

3. Target recognition and tracking

The ground points must be removed first to eliminate their influence on the clustering of the non-ground points; target recognition proceeds on this basis. FIG. 2 shows the specific recognition process. The first step is point cloud clustering, for which a Euclidean-distance-based method is used.

If the Euclidean distance threshold of a traditional clustering algorithm is chosen poorly, neighbourhood detection errors or missed detections at long range result: a threshold that is too small splits a real object into several smaller objects, while a threshold that is too large merges several real objects into one.

In the present invention, to raise the clustering rate, reduce the misjudgment rate and save computing resources, an adaptive distance threshold S_d is set as a function of the point (or cluster-centre) coordinates, with a, b and c as adjustment parameters whose values are chosen according to the angular accuracy of the lidar: the higher the angular accuracy, the smaller they should be. S_d varies with the coordinates of the data points; in other words, whenever the Euclidean distance between a data point or cluster and its neighbouring points or clusters is computed, S_d adjusts its value according to the centre coordinates of that point or cluster. A sketch of clustering with such a threshold follows.
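
A sketch of region-growing Euclidean clustering with a per-point threshold; since the text gives S_d only as a function of the coordinates with parameters a, b and c, the quadratic-in-range form below is an assumption for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_threshold(p, a=0.02, b=0.04, c=0.3):
    """Illustrative stand-in for S_d: grows with range, since point
    spacing grows with distance; a, b, c play the role of the
    adjustment parameters in the text (assumed functional form)."""
    d = np.linalg.norm(p[:2])               # horizontal range of the point
    return a * d * d + b * d + c

def euclidean_cluster(pts, min_size=10):
    """Region-growing Euclidean clustering with a per-point threshold."""
    tree = cKDTree(pts)
    labels = -np.ones(len(pts), dtype=int)
    cid = 0
    for i in range(len(pts)):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = cid
        while stack:
            j = stack.pop()
            for nb in tree.query_ball_point(pts[j], adaptive_threshold(pts[j])):
                if labels[nb] == -1:
                    labels[nb] = cid
                    stack.append(nb)
        cid += 1
    return [np.flatnonzero(labels == k) for k in range(cid)
            if (labels == k).sum() >= min_size]
```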

After the clustering result is obtained, the appearance-based classifier and the enhanced reflection intensity feature are used to screen the correct human target out of the cluster list.

The appearance-based classifier assigns each object of interest to one of two categories, human or non-human. To this end, each non-ground point cloud cluster C_k (obtained after removing the ground points) is first converted into a compact six-dimensional feature vector x_k = [h_k, ω_k, d_k, ν_k, λ_k, ρ_k]^T.

Human-body contour features are extracted first. This part extracts three contour-related features: cluster height, aspect ratio, and distribution symmetry; screening the clusters on these three features yields the clusters with human-contour characteristics.

While the navigator is moving, partial occlusion of the torso or sensor interference can leave the navigator's cluster suspended above the ground in the captured cloud. Earlier approaches used the bounding-box height as the cluster height, which halves the apparent height of a floating torso and makes it far from a person's true height. In the present invention the height h_k of cluster C_k is determined as

h_k = max{ z_i : P_i ∈ C_k },

so that the height of a suspended human cluster is defined as the height of a scan of the whole person above the ground.

The aspect ratio r_k = h_k/ω_k of a cluster is used as one feature for identifying the navigator, but real scenes may contain interfering non-ground objects such as fire hydrants. The skewness

γ = (1/n) Σ_{i=1..n} ((z_i − μ)/σ)^3

is therefore introduced to describe the asymmetry of the distribution, where μ and σ are respectively the mean and standard deviation of the distribution of cluster C_k. A sketch computing these three contour features follows.
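
A sketch computing the three contour features for one cluster, assuming the ground plane sits at z = 0 after ground removal:

```python
import numpy as np

def contour_features(cluster, ground_z=0.0):
    """Height, aspect ratio and skewness for one cluster (N,3).
    Height is measured from the ground plane, so a torso occluded
    near the legs still reports a person-like height."""
    z = cluster[:, 2]
    h = z.max() - ground_z                        # h_k: top of cluster above ground
    width = np.ptp(cluster[:, :2], axis=0).max()  # ω_k: widest horizontal extent (assumed)
    r = h / width                                 # aspect ratio r_k = h_k / ω_k
    mu, sigma = z.mean(), z.std()
    gamma = np.mean(((z - mu) / sigma) ** 3)      # skewness of the height distribution
    return h, r, gamma
```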

The previous step preliminarily identifies human-body target clusters through the new height computation, the aspect ratio and the introduced skewness. Note also that the navigator stays upright while walking, so a navigator cluster produces a principal eigenvector that is almost vertical. Principal component analysis (PCA) is therefore used to estimate the pose of each candidate cluster and screen the navigator target cluster more precisely. The covariance matrix C of a point p and its neighbourhood points p_i (i = 1..n, with mean p̄) is computed as

C = (1/n) Σ_{i=1..n} (p_i − p̄)(p_i − p̄)^T,

and is factorized by eigenvalue decomposition C = V Λ V^T. The eigenvalues λ and eigenvectors ν are sorted; the eigenvector ν_min corresponding to the smallest eigenvalue λ_min is the normal vector at point p. A sketch of the verticality test follows.
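
A sketch of the PCA verticality test; the 20° cone used to accept a cluster as upright is an assumed value:

```python
import numpy as np

def principal_direction(cluster):
    """PCA over the cluster: the eigenvector of the largest eigenvalue
    is the main axis; a walking person yields a near-vertical axis."""
    centered = cluster - cluster.mean(axis=0)
    cov = centered.T @ centered / len(cluster)   # covariance matrix C
    eigval, eigvec = np.linalg.eigh(cov)         # C = V Λ V^T, ascending order
    main_axis = eigvec[:, -1]                    # largest-eigenvalue eigenvector
    verticality = abs(main_axis[2])              # cos of angle to the z axis
    return main_axis, verticality

# Keep clusters whose main axis is within ~20 degrees of vertical (illustrative):
# main_axis, verticality = principal_direction(cluster)
# is_upright = verticality > np.cos(np.deg2rad(20))
```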

The appearance-based classifier can already extract the navigator target fairly reliably, but environmental interference and sensor measurement noise may still leave a few misjudged clusters at the end of this step. The solution is for the navigator to wear a reflective marker at the waist: using the laser intensity returned by the sensor, the cluster of maximum intensity is selected among the targets above the intensity threshold, giving the precise navigator position (see the sketch below).
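
A sketch of the reflective-marker selection; the intensity threshold is an assumed value and in practice depends on the marker material and the sensor:

```python
import numpy as np

def pick_navigator(clusters, intensities, intensity_thresh=200.0):
    """clusters: list of (N,3) candidate person clusters; intensities:
    matching list of (N,) smoothed reflectivities. Among candidates whose
    peak reflectivity exceeds the threshold (reflective waist marker),
    return the centroid of the strongest one, or None."""
    best, best_peak = None, intensity_thresh
    for pts, inten in zip(clusters, intensities):
        peak = inten.max()
        if peak >= best_peak:
            best, best_peak = pts, peak
    return None if best is None else best.mean(axis=0)  # navigator position
```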

4. Robot motion control

A Kalman filter associates the moving objects of adjacent scans so that the navigator is tracked accurately. After the coordinates of the navigator's position at each moment are obtained, speed and heading commands are output to the quadruped robot so that it follows the navigator autonomously. Before a command is issued, a simple motion-strategy check is performed, because a sudden jump in the navigator's position would otherwise cause the quadruped robot to turn sharply and lose its smooth gait. In addition, when the distance between the navigator and the robot falls below a set safety threshold, a zero-speed command is issued to avoid collision. A sketch of the tracker and command generation follows.
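
A sketch of a constant-velocity Kalman tracker and the speed/heading command with the safety stop; all gains, noise covariances and the safety distance are assumed values:

```python
import numpy as np

class NavigatorTracker:
    """Constant-velocity Kalman filter on (x, y); associates the detection
    at each scan with the predicted navigator position. Noise levels are
    illustrative."""
    def __init__(self, dt=0.1):
        self.x = np.zeros(4)                       # state [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                      # observe position only
        self.Q = 0.01 * np.eye(4)
        self.R = 0.05 * np.eye(2)

    def step(self, z):
        self.x = self.F @ self.x                   # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x += K @ (z - self.H @ self.x)        # update with detection
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                          # filtered navigator position

def follow_command(target_xy, safe_dist=1.5, v_max=1.0, k_v=0.5, k_w=1.0):
    """Map the tracked position (robot frame) to speed/heading commands;
    command zero speed inside the safety distance to avoid collision."""
    dist = np.linalg.norm(target_xy)
    if dist < safe_dist:
        return 0.0, 0.0                            # stop: too close
    v = min(v_max, k_v * (dist - safe_dist))       # forward speed
    w = k_w * np.arctan2(target_xy[1], target_xy[0])  # turn toward target
    return v, w
```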

Based on the laser point cloud data acquired by the lidar, the present invention accurately identifies and tracks the navigator target and enables the quadruped robot to follow it autonomously, effectively and smoothly, meeting the real-time and accuracy requirements of a robot autonomously following a person.

Claims (3)

1. A method for a quadruped robot to autonomously follow a pilot based on a three-dimensional laser radar is characterized by comprising the following specific steps:
(1) Using a 16-line three-dimensional laser radar as a sensor to obtain original environment three-dimensional point cloud data;
(2) Using a filtering operation to preprocess the original point cloud, wherein the preprocessing process also comprises the step of rejecting ground point cloud so as to keep non-ground point cloud data;
(3) Carrying out segmentation clustering based on Euclidean distance in the extracted non-ground point cloud data to obtain a point cloud cluster list of objects on the ground;
(4) Screening candidate human body target point cloud clusters from the segmented point cloud clusters by using an appearance classifier, and judging the target position of a pilot wearing a reflective label according to a set laser reflection intensity threshold;
(5) Using Kalman filtering to correlate the moving objects at adjacent moments so as to keep accurate tracking of the pilot; after the coordinates of the target position of the pilot at each moment are obtained, instructions of the traveling speed and the traveling direction are output to the quadruped robot, so that the robot autonomously follows the pilot;
the specific implementation process of the appearance classifier in the step (4) is as follows:
appearance-based classifiers assign an object of interest to one of two categories: human or non-human; for this purpose, the non-ground point cloud cluster C_k obtained after the ground point cloud is removed is first converted to a compact six-dimensional feature vector x_k = [h_k, ω_k, d_k, ν_k, λ_k, ρ_k]^T;
(1) Human body contour feature extraction:
three characteristics related to the human body contour are extracted in this part, including the cluster height, the height-to-width ratio and the distribution symmetry of the point cloud clusters, and the point cloud clusters with human body contour characteristics are obtained after the calculation results of the three characteristics are screened;
in the process of movement of the pilot, when the trunk of the human body is partially shielded or the sensor is interfered with, the object cluster of the pilot in the collected point cloud is suspended above the ground; the height h_k of a point cloud cluster C_k is determined by h_k = max{ z_i : P_i ∈ C_k };
the aspect ratio r_k = h_k/ω_k of the point cloud cluster is used as one of the characteristics for identifying the pilot, and the skewness γ = (1/n) Σ_{i=1..n} ((z_i − μ)/σ)^3 is introduced to describe the asymmetry of the distribution, where μ and σ are respectively the mean and standard deviation of the distribution of the point cloud cluster C_k;
(2) Estimating the attitude by a principal component analysis method:
in the previous step, a human body target point cloud cluster is preliminarily judged through the new height calculation mode, the height-width ratio and the introduced skewness; it should be noted that the pilot always keeps an upright state when walking, so that the point cloud cluster generates a main characteristic vector in an almost vertical direction; attitude estimation is performed on the target point cloud cluster by principal component analysis, so that the pilot target point cloud cluster is screened out more accurately; the covariance matrix C of the point cloud p and its neighbourhood points p_i (with mean p̄) is calculated as C = (1/n) Σ_{i=1..n} (p_i − p̄)(p_i − p̄)^T; eigenvalue decomposition C = V Λ V^T is applied to the covariance matrix C, the obtained eigenvalues λ and eigenvectors ν are sorted, and the eigenvector ν_min corresponding to the minimum eigenvalue λ_min is the point normal vector of the solution point p;
the pilot wears a reflective sign at the waist, and the accurate position of the pilot is obtained by selecting, from the targets whose intensity exceeds the threshold, the cluster with the maximum laser intensity returned by the sensor.
2. The method for the quadruped robot to autonomously follow the pilot based on the three-dimensional laser radar as claimed in claim 1, wherein the specific process of point cloud data preprocessing in the step (2) is as follows:
assume the point cloud cluster D_T satisfies the normal distribution N(μ_f, σ_f); the cloud needs to be traversed twice in the filtering stage: for every point, the first pass calculates the average distribution distance μ_f and variance σ_f, and the second pass filters out the points that do not satisfy Ω_F = { P_i ∈ D_T | d_f ≤ μ_f + K_f·σ_f }, where Ω_F is the filtered point cloud cluster, P_i^n is a neighbouring point of P_i, d_f is the distance from P_i^n to P_i, and K_f is a given coefficient; for any point in the point cloud cluster, if the original reflection intensity value is η(P_i), its new reflection intensity value η′(P_i) is obtained by bilateral filtering over the intensities of its neighbouring points;
noise is removed after filtering; a segmentation method is adopted for the result, the point given by the average of the lowest-height points in a segmented point cloud set is selected as the lowest point representative and taken as the initial seed of plane model estimation; the simplest linear model ax + by + cz + d = 0 is used for plane estimation, and the normal n is solved from the covariance matrix C = (1/|S|) Σ (s_i − ŝ)(s_i − ŝ)^T calculated over the seed point set S.
3. The method for the quadruped robot to autonomously follow the pilot based on the three-dimensional laser radar as claimed in claim 1, wherein the specific process of the Euclidean distance-based point cloud clustering in the step (3) is as follows:
an adaptive distance threshold S_d is set as a function of the point or cluster-centre coordinates, where a, b and c are adjustment parameters whose values typically vary with the angular accuracy of the lidar; the higher the angular accuracy, the smaller their values should be, and S_d varies with the coordinates of the data points.
CN202010252915.7A 2020-04-02 2020-04-02 Method for quadruped robot to automatically follow pilot based on three-dimensional laser radar Active CN111461023B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010252915.7A CN111461023B (en) 2020-04-02 2020-04-02 Method for quadruped robot to automatically follow pilot based on three-dimensional laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010252915.7A CN111461023B (en) 2020-04-02 2020-04-02 Method for quadruped robot to automatically follow pilot based on three-dimensional laser radar

Publications (2)

Publication Number Publication Date
CN111461023A CN111461023A (en) 2020-07-28
CN111461023B (en) 2023-04-18

Family

ID=71681299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010252915.7A Active CN111461023B (en) 2020-04-02 2020-04-02 Method for quadruped robot to automatically follow pilot based on three-dimensional laser radar

Country Status (1)

Country Link
CN (1) CN111461023B (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114092898A (en) * 2020-07-31 2022-02-25 华为技术有限公司 Target object sensing method and device
CN114051628B (en) * 2020-10-30 2023-04-04 华为技术有限公司 Method and device for determining target object point cloud set
CN112346460B (en) * 2020-11-05 2022-08-09 泉州装备制造研究所 Automatic following method of mobile robot suitable for multi-person scene
CN112733922A (en) * 2021-01-04 2021-04-30 上海高仙自动化科技发展有限公司 Method and device for determining forbidden area, robot and storage medium
CN112733923A (en) * 2021-01-04 2021-04-30 上海高仙自动化科技发展有限公司 System and robot for determining forbidden area
CN112819958B (en) * 2021-03-01 2022-09-23 徐一帆 Engineering geological mapping method and system based on three-dimensional laser scanning
CN113778078B (en) * 2021-03-05 2025-04-18 北京京东乾石科技有限公司 Positioning information generation method, device, electronic device and computer readable medium
CN113298833A (en) * 2021-05-20 2021-08-24 山东大学 Target object point cloud characteristic line and surface extraction method and system
CN113343840B (en) * 2021-06-02 2022-03-08 合肥泰瑞数创科技有限公司 Object identification method and device based on three-dimensional point cloud
CN113313654B (en) * 2021-06-23 2023-12-15 上海西井科技股份有限公司 Laser point cloud filtering denoising method, system, equipment and storage medium
CN113447953B (en) * 2021-06-29 2022-08-26 山东高速建设管理集团有限公司 Background filtering method based on road traffic point cloud data
CN113705617B (en) * 2021-07-30 2024-06-21 武汉万集光电技术有限公司 Point cloud data processing method, device, computer equipment and storage medium
CN113484875B (en) * 2021-07-30 2022-05-24 燕山大学 A Hierarchical Recognition Method of LiDAR Point Cloud Targets Based on Mixture Gaussian Sorting
CN115705064B (en) * 2021-08-03 2024-05-24 北京小米移动软件有限公司 Following control method and device of foot-type robot and robot
CN113485405A (en) * 2021-08-05 2021-10-08 Oppo广东移动通信有限公司 Attitude acquisition method, robot and readable storage medium
CN113917917B (en) * 2021-09-24 2023-09-15 四川启睿克科技有限公司 Obstacle avoidance method and device for indoor bionic multi-legged robot and computer readable medium
CN113920134B (en) * 2021-09-27 2022-06-07 山东大学 A method and system for segmentation of slope ground point cloud based on multi-line lidar
CN114104139B (en) * 2021-09-28 2022-10-11 北京炎凌嘉业机电设备有限公司 A bionic-footed robot foot platform integrated obstacle-surmounting and autonomous following system
CN113917454A (en) * 2021-10-11 2022-01-11 上海大学 A method and system for fusion detection of unmanned boat surface targets
CN113885510B (en) * 2021-10-21 2023-11-10 齐鲁工业大学 Four-foot robot obstacle avoidance and pilot following method and system
CN114358140B (en) * 2021-12-13 2025-05-09 南京莱斯信息技术股份有限公司 A method for fast capture of sparse point clouds by aircraft in low visibility
CN114995135B (en) * 2022-05-27 2025-06-17 重庆览辉信息技术有限公司 Quadruped robot dog-assisted posture correction method and system for cable tunnel inspection
CN115797397B (en) * 2022-09-09 2024-04-05 北京科技大学 A method and system for a robot to autonomously follow a target person around the clock
CN115712298B (en) * 2022-10-25 2023-05-09 太原理工大学 Autonomous navigation method for robots driving in passageways based on single-line lidar
CN116342899A (en) * 2022-12-19 2023-06-27 北京东土科技股份有限公司 Target detection positioning method, device, equipment and storage medium
CN116660920B (en) * 2023-05-08 2025-06-24 郑州恒达智控科技股份有限公司 A method for detecting the opening and closing angle of the hydraulic support frame guard plate of a laser radar
CN119057768B (en) * 2023-05-31 2025-06-20 深圳市普渡科技有限公司 Robot motion control method, device, robot and storage medium
CN118429426B (en) * 2024-06-26 2024-09-13 广汽埃安新能源汽车股份有限公司 SLAM method and device for rescue unmanned vehicle, electronic equipment and storage medium
CN118887419A (en) * 2024-09-30 2024-11-01 山东港口烟台港集团有限公司 A fast identification method for commodity vehicles in yard based on feature evaluation function

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103575272A (en) * 2013-11-15 2014-02-12 南开大学 Method for extracting natural landmarks for mobile robot in forest environment
CN104574303A (en) * 2014-12-26 2015-04-29 河海大学 Airborne LiDAR point cloud ground filtering method based on spatial clustering
CN105354811A (en) * 2015-10-30 2016-02-24 北京自动化控制设备研究所 A Filtering Method for Ground-Used Multi-Line 3D LiDAR Point Cloud Data
CN110647835A (en) * 2019-09-18 2020-01-03 合肥中科智驰科技有限公司 Target detection and classification method and system based on 3D point cloud data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110243360B (en) * 2018-03-08 2022-02-22 深圳市优必选科技有限公司 Method for constructing and positioning map of robot in motion area


Also Published As

Publication number Publication date
CN111461023A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111461023B (en) Method for quadruped robot to automatically follow pilot based on three-dimensional laser radar
CN113359810A (en) Unmanned aerial vehicle landing area identification method based on multiple sensors
CN108805906A (en) A kind of moving obstacle detection and localization method based on depth map
Lin et al. A robust real-time embedded vision system on an unmanned rotorcraft for ground target following
CN111968128B (en) A Method for Resolving UAV's Visual Pose and Position Based on Image Marking
CN103149939B (en) A kind of unmanned plane dynamic target tracking of view-based access control model and localization method
CN110782481A (en) Unmanned ship intelligent decision method and system
US12099141B1 (en) Laser detection method for port machinery equipment
Arora et al. Infrastructure-free shipdeck tracking for autonomous landing
CN110865650A (en) Adaptive estimation method of UAV pose and attitude based on active vision
CN113536959A (en) Dynamic obstacle detection method based on stereoscopic vision
CN114549549B (en) Dynamic target modeling tracking method based on instance segmentation in dynamic environment
CN110824495B (en) Drosophila vision-inspired 3D moving object detection method based on lidar
Fan et al. Vision algorithms for fixed-wing unmanned aerial vehicle landing system
Lu et al. Perception and avoidance of multiple small fast moving objects for quadrotors with only low-cost RGBD camera
CN116753937A (en) Unmanned plane laser radar and vision SLAM-based real-time map building fusion method
Wang et al. Target detection for construction machinery based on deep learning and multisource data fusion
CN113253289A (en) Unmanned aerial vehicle detection tracking system implementation method based on combination of laser radar and vision
Ji et al. Adaptive denoising-enhanced LiDAR odometry for degeneration resilience in diverse terrains
Li et al. Feature assessment and enhanced vertical constraint lidar odometry and mapping on quadruped robot
Gao et al. Design and implementation of autonomous mapping system for UGV based on LiDAR
CN112731335B (en) A multi-UAV collaborative positioning method based on full-area laser scanning
CN117665805A (en) A fine-grained multi-scale human posture estimation method based on radio frequency signals
Lu et al. Research on unmanned surface vessel perception algorithm based on multi-sensor fusion
Zhang et al. Vision-based uav positioning method assisted by relative attitude classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant