
CN111522022A - Dynamic target detection method of robot based on laser radar - Google Patents


Info

Publication number
CN111522022A
CN111522022A (application CN202010310030.8A)
Authority
CN
China
Prior art keywords
lidar
radar
point cloud
pose
simulated
Prior art date
Legal status
Granted
Application number
CN202010310030.8A
Other languages
Chinese (zh)
Other versions
CN111522022B (en)
Inventor
孙伟 (Sun Wei)
杜川 (Du Chuan)
林旭 (Lin Xu)
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202010310030.8A
Publication of CN111522022A
Application granted
Publication of CN111522022B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to dynamic target detection, in particular to a lidar-based dynamic target detection method for a robot. The method corrects the robot's localization by non-gradient optimization, which reduces the localization error and improves the detection accuracy to centimeter level; because the radar scans the full plane, the detection range is wide. A lidar-based dynamic multi-target detection method for a robot is proposed to reduce the localization error and improve the detection accuracy.

Description

Dynamic target detection method for a robot based on lidar

Technical Field

The invention relates to a dynamic target detection method, in particular to a lidar-based dynamic target detection method for a robot.

Background Art

In recent years, robotics has become a representative strategic goal of high-tech fields. The emergence and development of robot technology not only fundamentally changes traditional industrial production but will also have a profound impact on social life. An autonomous mobile robot can move effectively and safely only if it accurately knows its own position, the positions of obstacles in its workspace, and how those obstacles move, so target detection and localization are particularly important for autonomous mobile robots. A great deal of research has been done in this field at home and abroad. Current work mainly collects and processes environmental information from sensors such as GPS, cameras, and inertial navigation to achieve autonomous localization and target detection; the correctness of subsequent decisions therefore rests on sensor performance and on an accurate, robust description of the environment.

Accurate target detection first requires localizing the robot. When the prior art uses a robot for target detection, many sources of uncertainty affect the localization process, for example the robot's own uncertainty, accumulated odometry error, sensor noise, and the complexity and unknown nature of the environment. These uncertainties complicate localization, introduce errors into the environment description, and let erroneous information accumulate in the system; this accumulation further aggravates the deviation of the environment-perception system, so the detection results have low accuracy.

Summary of the Invention

The purpose of the present invention is to address the above shortcomings of the prior art by proposing a lidar-based dynamic multi-target detection method for a robot, so as to reduce the localization error and improve the detection accuracy.

To achieve this purpose, the lidar-based dynamic target detection method for a robot comprises the following steps:

(1) A laser scanning radar mounted at the robot center collects and generates the point cloud set $[\beta_n, l_{1n}]$, and the robot's localization unit gives the robot's pose; here $\beta_n$ is the scanning angle of the lidar, whose range covers the radar's full scanning plane, and $l_{1n}$ is the distance from the robot to the target detected at that angle; a real-radar point cloud data plot is built with $\beta_n$ as the abscissa and $l_{1n}$ as the ordinate;

(2) Build a vector scene map of the scene: represent the obstacle blocks in the scene map and the surrounding edges of the scene as enclosing line segments, and obtain the coordinates of the start and end points of all segments;

(3) Build a simulated lidar model, input the pose of the real lidar, and use the intercept traversal method to find the point cloud set $[\beta_n, l_{2n}]$ formed by the intersections of all rays emitted by the simulated lidar with the map edges and obstacle blocks, obtaining the radar simulation plot;

(4) Correct the robot's pose by non-gradient optimization, and replace the real-radar pose previously fed to the simulation with the corrected pose $p_k$, obtaining a new simulated-lidar point cloud set and point cloud data plot;

(5) Subtract, at each scanning angle, the new simulated-lidar point cloud from the real-lidar point cloud, and compare the difference with a set threshold $\zeta$:

If, for $n$ consecutive points, the difference $l$ between the distances in the real-lidar point cloud plot and those of the corresponding points in the simulated-lidar point cloud plot exceeds the threshold, i.e. $l > \zeta$, the real lidar has detected a target; record the mean of $\beta_n$ over the $n$ adjacent points as $\bar{\beta}$ and the mean real-lidar distance over those points as $\bar{l}_1$; taking the real lidar's location as the origin of a polar coordinate system, the polar coordinates of the target are $(\bar{l}_1, \bar{\beta})$.

Otherwise, no target is detected.

Compared with the prior art, the present invention has the following advantages:

1) The present invention corrects the localization by non-gradient optimization, which reduces the localization error and improves the detection accuracy to centimeter level;

2) The present invention detects with the radar scanning the full plane, so the detection range is wide.

Brief Description of the Drawings

Fig. 1 is the overall flow chart of the implementation of the present invention;

Fig. 2 is the real-radar point cloud data plot in the present invention;

Fig. 3 is the actual scene map used in the present invention;

Fig. 4 is the scene vector map built from the actual scene in the present invention;

Fig. 5 is the sub-flow chart for generating simulated radar point cloud data in the present invention;

Fig. 6 is the simulation plot of the rays emitted by the simulated lidar intersecting the scene edges and the obstacle blocks in the present invention;

Fig. 7 compares the point cloud data of the real radar and the simulated radar at the robot's localization point in the present invention;

Fig. 8 is the sub-flow chart of correcting the robot's pose by non-gradient optimization in the present invention;

Fig. 9 is the point cloud data plot of the simulated radar in the present invention;

Fig. 10 compares the point cloud data of the real radar and the simulated radar at the robot's localization point after correction in the present invention.

Detailed Description of Embodiments

The embodiments and effects of the present invention are described in detail below with reference to the accompanying drawings.

Referring to Fig. 1, the lidar-based dynamic target detection method for a robot comprises the following implementation steps:

Step 1: A laser scanning radar mounted at the robot center collects and generates the point cloud set $[\beta_n, l_{1n}]$; the robot includes a localization unit that gives the robot's pose $(x, y, \alpha)$; here $\beta_n$ is the scanning angle of the lidar, whose range covers the radar's full scanning plane, and $l_{1n}$ is the distance from the robot to the target detected at that angle; a real-radar point cloud data plot is built with $\beta_n$ as the abscissa and $l_{1n}$ as the ordinate.

The laser scanning radar is a Hokuyo UTM-30LX, with scanning angle $\theta$ of 270 degrees, angular resolution $\lambda$ of 0.25 degrees, scanning range of 0.1–30 m, and scanning frequency of 40 Hz. Its point cloud data are $(\beta_n, l_n)$, where $\beta_n$ and $l_n$ are the angle and distance of the n-th ray emitted by the real lidar relative to the polar coordinate system of the lidar body; the laser scanning radar point cloud data plot is built with $\beta_n$ as the abscissa and $l_n$ as the ordinate, as shown in Fig. 2.

The angle $\beta_n$ of the n-th ray emitted by the laser scanning radar relative to the polar coordinate system of the lidar body ranges over $\left[-\frac{\theta}{2}, \frac{\theta}{2}\right]$. Since the radar is mounted at the center of the robot, the robot's pose is the pose of the laser scanning radar; it is produced by Monte Carlo localization or provided by a UWB or GPS system. Denote the pose by $(x, y, \alpha)$, where $x, y$ are the position of the real lidar in the scene coordinate system and $\alpha$ is the angle between the real lidar's center line and the x-axis of the map coordinate system. This example uses a UWB system to produce the robot's pose.

Step 2: Build a vector scene map of the scene: represent the obstacle blocks in the scene map and the surrounding edges of the scene as enclosing line segments, and obtain the coordinates of the start and end points of all segments.

Referring to Fig. 3, the actual scene map used in the present invention consists of obstacle blocks enclosed by line segments and the scene edges; the coordinates of the start and end points of all segments are listed in Table 1:

Table 1: Coordinates of line segments in the map (unit: cm)

(Table 1 is reproduced as an image in the original publication; its coordinate values are not recoverable here.)

Plotting the segment endpoints of Table 1 in the x-o-y rectangular coordinate system yields the vector map of the scene, as shown in Fig. 4. A toy example of this segment representation is sketched below.
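Since Table 1 survives only as an image here, the following Python sketch shows what the segment-based map looks like; the coordinates are purely illustrative and are not the patent's values.

```python
# A toy vector map in the spirit of Table 1: each entry is one line segment
# ((x_start, y_start), (x_end, y_end)) in cm. Values are illustrative only.
scene_edges = [((0, 0), (500, 0)), ((500, 0), (500, 400)),
               ((500, 400), (0, 400)), ((0, 400), (0, 0))]
obstacle_block = [((120, 100), (180, 100)), ((180, 100), (180, 160)),
                  ((180, 160), (120, 160)), ((120, 160), (120, 100))]
segments = scene_edges + obstacle_block  # input to the simulated lidar in Step 3
```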

Step 3: Build a simulated lidar model, input the pose of the real lidar, and use the intercept traversal method to find the point cloud set $[\beta_n, l_{2n}]$ formed by the intersections of all rays emitted by the simulated lidar with the map edges and obstacle blocks, obtaining the radar simulation plot.

3.1) Establish polar coordinates with the origin of the vector map coordinates and the X axis;

3.2) Let the real radar coordinates be $(x_0, y_0)$, the angle between the real radar center line and the x-axis be $\alpha$ degrees, the radar scanning range $\theta$ degrees, and the radar angular resolution $\lambda$ degrees; the simulated radar then emits $n$ rays in total:

$$n = \frac{\theta}{\lambda}$$

These rays are emitted from the radar coordinate point at intervals of $\lambda$ degrees and cover the range $\left[\alpha - \frac{\theta}{2},\ \alpha + \frac{\theta}{2}\right]$ centered on $\alpha$; each ray eventually intersects a segment of an obstacle block or of the scene boundary, and these intersections are the point cloud coordinates $(x_i, y_i)$ obtained by the simulated radar scan;

3.3) Referring to Fig. 5, find each intersection by the vector method:

3.3.1) Convert the ray and the edge contour of the obstacle block or scene boundary into vectors; the angle $\gamma$ between the i-th ray and the horizontal direction satisfies $\gamma = \alpha - \frac{\theta}{2} + (i-1)\lambda$;

3.3.2) Find the intersection $(x_i, y_i)$ of the ray and the edge contour in two-dimensional space from the constraint that the two lines intersect:

From 3.3.1), the direction vector of the i-th ray is $(\cos\gamma, \sin\gamma)$ and the ray starts at $(x_0, y_0)$, so $(x_0 + \cos\gamma,\ y_0 + \sin\gamma)$ is also a point on the ray. Let the start and end of the edge contour segment be $(x_s, y_s)$ and $(x_e, y_e)$.

The intersection then satisfies both line equations:

$$\frac{x_i - x_0}{\cos\gamma} = \frac{y_i - y_0}{\sin\gamma}, \qquad \frac{x_i - x_s}{x_e - x_s} = \frac{y_i - y_s}{y_e - y_s}$$

Solving this system gives the intersection coordinates $(x_i, y_i)$, and the distance from the simulated radar coordinate point to the obstacle is

$$l_\beta = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2}$$

3.4) Record $\gamma$ and $l_\beta$ for all $n$ rays to obtain the simulated lidar point cloud set; the simulation plot is shown in Fig. 6.
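The ray casting of 3.2)–3.4) can be written compactly. The sketch below is a minimal Python rendering of the vector method, not the patent's implementation: rays are cast against the `segments` list from the toy map above, and the nearest intersection per ray becomes one point of the simulated cloud; the parameter defaults follow the UTM-30LX figures quoted in Step 1, while units and function names are assumptions.

```python
import math

def ray_segment_distance(x0, y0, gamma, seg):
    """Distance along the ray from (x0, y0) in direction gamma (radians) to the
    segment seg = ((xs, ys), (xe, ye)), or None if the ray misses it."""
    (xs, ys), (xe, ye) = seg
    dx, dy = math.cos(gamma), math.sin(gamma)
    ex, ey = xe - xs, ye - ys
    denom = dx * ey - dy * ex              # zero when ray and segment are parallel
    if abs(denom) < 1e-12:
        return None
    # Solve (x0, y0) + t*(dx, dy) == (xs, ys) + u*(ex, ey) for t and u
    t = ((xs - x0) * ey - (ys - y0) * ex) / denom
    u = ((xs - x0) * dy - (ys - y0) * dx) / denom
    if t >= 0.0 and 0.0 <= u <= 1.0:       # forward on the ray, inside the segment
        return t
    return None

def simulate_scan(pose, segments, theta=270.0, lam=0.25, max_range=3000.0):
    """Simulated point cloud [(beta_n, l_2n)] for a lidar at pose = (x, y, alpha),
    with alpha in degrees and ranges in the map's units (cm in the toy map)."""
    x0, y0, alpha = pose
    n_rays = int(theta / lam)              # 1080 rays for the UTM-30LX parameters
    cloud = []
    for i in range(n_rays):
        beta = -theta / 2.0 + i * lam      # scan angle in the lidar frame
        gamma = math.radians(alpha + beta) # ray angle in the map frame
        dists = [d for s in segments
                 if (d := ray_segment_distance(x0, y0, gamma, s)) is not None]
        cloud.append((beta, min(dists) if dists else max_range))
    return cloud
```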

Step 4: Correct the robot's pose by non-gradient optimization, and replace the real-radar pose previously fed to the simulation with the corrected pose $p_k$, obtaining a new simulated-lidar point cloud set and point cloud data plot.

The above steps yield the point cloud data of the real radar and of the simulated radar at the robot's localization point. There is, however, a certain deviation between the two point clouds, and this deviation causes false detections, as shown in Fig. 7.

In Fig. 7, the first line A is the real-lidar point cloud, the second line B is the simulated-lidar point cloud obtained in Step 3, and the third line C is their difference. Line C shows that the simulated and real point clouds do not match completely: the black arrow marks an obstacle detected by the real lidar, and the gray arrow marks the position of maximum deviation. The deviation has two sources. The first is the difference between the actual scene map and the simulated scene map, which can be reduced by minimizing the error between the two when building the simulated map. The second is insufficiently accurate robot localization: the difference between the pose used by the simulated radar and the true pose of the lidar shifts the whole point cloud, which appears as the offset between lines B and A in the box of Fig. 7. Such a shift affects the entire point cloud and is especially large at the inflection points of the curve, where it easily causes false target detections. This example therefore eliminates the shift by non-gradient optimization, thereby correcting the robot's pose.

Referring to Fig. 8, this step is implemented as follows:

4.1) Let the true pose of the radar in the map coordinate system be $(x_0, y_0, \alpha_0)$; this pose is unknown but real. The pose the robot obtains from its localization unit or by Monte Carlo self-localization is $(x_1, y_1, \alpha_1)$; because of sensor noise it deviates more or less from the true pose. Define the unbiased rate

$$C = f_C(L, L') = \frac{N_u}{N_t}$$

where $L$ is the real-lidar point cloud set, $L'$ the simulated-lidar point cloud set, $N_t$ the total number of lidar points (in this example $N_t = 1080$), and $N_u$ the number of points whose distance difference satisfies $l \leq \zeta$, with $\zeta$ the set threshold;

4.2) Initialize the unbiased rate $C = -1$. The pose the robot obtains at a given moment is $p_1 = (x_1, y_1, \alpha_1)$, where $x_1, y_1$ are the position of the real lidar in the scene coordinate system and $\alpha_1$ is the angle between the radar center line and the x-axis at that moment; the point cloud set scanned by the real lidar is $L_{1n} = [\beta_n, l_{1n}]$. Taking this pose as the initial value, compute the point cloud of the simulated radar based on point $p_1$, $L_{2n} = [\beta_n, l_{2n}] = f(x_1, y_1, \alpha_1)$, and compute the current unbiased rate $C_1 = f_C(L_{1n}, L_{2n})$ with the formula in 4.1);

4.3) Let the step size be denoted step. Offset $x_1$, $y_1$, and $\alpha_1$ by one step each to obtain the offset simulated-lidar point cloud sets in the three directions:

$$L_x = f(x_1 + \mathrm{step},\ y_1,\ \alpha_1),\quad L_y = f(x_1,\ y_1 + \mathrm{step},\ \alpha_1),\quad L_\alpha = f(x_1,\ y_1,\ \alpha_1 + \mathrm{step})$$

with the corresponding unbiased rates in the three directions

$$C_x = f_C(L_{1n}, L_x),\quad C_y = f_C(L_{1n}, L_y),\quad C_\alpha = f_C(L_{1n}, L_\alpha);$$

4.4) Define the offsets $dx = C_x - C_1$, $dy = C_y - C_1$, $d\alpha = C_\alpha - C_1$, giving the new pose $p_2 = (x_2, y_2, \alpha_2) = (x_1 + dx,\ y_1 + dy,\ \alpha_1 + d\alpha)$;

4.5) Repeat (4.3) and (4.4) 200 times, recording the maximum unbiased rate $C_{max}$ and its corresponding pose $p_k = (x_k, y_k, \alpha_k)$, where $x_k, y_k$ are the position of the simulated radar in the scene coordinate system when $C_{max}$ is reached and $\alpha_k$ is the angle between the radar center line and the x-axis at that time; this $p_k$ is the point closest to the robot's true pose.
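A minimal Python sketch of this step-and-compare loop follows, reusing `simulate_scan` from the Step 3 sketch. It illustrates the scheme in 4.1)–4.5), not the patent's code: the reuse of the rate differences as coordinate offsets and the 200 iterations come from the text, while the step size, threshold, and units are assumptions.

```python
def unbiased_rate(real_cloud, sim_cloud, zeta=5.0):
    """C = N_u / N_t: fraction of scan angles whose range difference is <= zeta."""
    n_u = sum(1 for (_, l1), (_, l2) in zip(real_cloud, sim_cloud)
              if abs(l1 - l2) <= zeta)
    return n_u / len(real_cloud)

def correct_pose(real_cloud, pose, segments, step=1.0, iters=200, zeta=5.0):
    """Non-gradient pose correction (4.1-4.5): probe one step in x, y, and alpha,
    use the change in unbiased rate as the offset, and keep the best pose seen."""
    x, y, a = pose
    best_c, best_pose = -1.0, pose                      # 4.2) C initialized to -1
    for _ in range(iters):
        c1 = unbiased_rate(real_cloud, simulate_scan((x, y, a), segments), zeta)
        if c1 > best_c:
            best_c, best_pose = c1, (x, y, a)           # track C_max and p_k
        cx = unbiased_rate(real_cloud, simulate_scan((x + step, y, a), segments), zeta)
        cy = unbiased_rate(real_cloud, simulate_scan((x, y + step, a), segments), zeta)
        ca = unbiased_rate(real_cloud, simulate_scan((x, y, a + step), segments), zeta)
        # 4.4): the rate changes themselves serve as the coordinate offsets
        x, y, a = x + (cx - c1), y + (cy - c1), a + (ca - c1)
    return best_pose, best_c
```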

Step 5: Use the corrected pose $p_k$ in place of the real-radar pose previously fed to the simulation to obtain a new simulated-lidar point cloud set.

Input the pose $p_k$ obtained in Step 4, the one closest to the robot's true pose, into the simulated lidar model and generate simulated-radar point cloud data based on this pose according to Step 3; the new simulated-radar point cloud data are shown in Fig. 9.

Comparing the real-radar and simulated-radar point cloud data at $p_k$ gives the result in Fig. 10, where line A is the real-lidar point cloud, the second line B is the new simulated-radar point cloud, and the third line C is their difference. Fig. 10 shows that the offset between the two is now greatly reduced, indicating that the robot's pose has been corrected.

Step 6: Judge whether a target exists from the difference, at each scanning angle, between the real-lidar point cloud set and the new simulated-lidar point cloud set.

6.1) Subtract, at each scanning angle, the new simulated-lidar point cloud from the real-lidar point cloud, as shown by line C in Fig. 10, and compare the difference with the set threshold $\zeta$:

If, for $n$ consecutive points, the difference $l$ between the distances in the real-lidar point cloud plot and those of the corresponding points in the simulated-lidar point cloud plot exceeds the set threshold $\zeta$, as at the arrow in Fig. 10, the real lidar has detected a target; go to 6.2);

Otherwise, no target is detected;

6.2) Record the mean of $\beta_n$ over the $n$ adjacent points as $\bar{\beta}$ and the mean distance over those points as $\bar{l}_1$; taking the real lidar's location as the origin of a polar coordinate system, the polar coordinates of the target are $(\bar{l}_1, \bar{\beta})$.
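The detection rule of 6.1)–6.2) reduces to a run-length test on the per-angle range differences. The Python sketch below illustrates it, continuing the earlier sketches; the run length `n_consec` and threshold `zeta` are illustrative assumptions, since the patent leaves $n$ and $\zeta$ as parameters.

```python
def detect_targets(real_cloud, sim_cloud, zeta=5.0, n_consec=5):
    """Flag a target wherever >= n_consec consecutive scan angles differ from the
    simulation by more than zeta; return (mean range, mean angle) per target."""
    targets, run = [], []
    for (beta, l1), (_, l2) in zip(real_cloud, sim_cloud):
        if abs(l1 - l2) > zeta:
            run.append((beta, l1))                  # candidate target point
            continue
        if len(run) >= n_consec:                    # close out a finished run
            betas, dists = zip(*run)
            targets.append((sum(dists) / len(dists), sum(betas) / len(betas)))
        run = []
    if len(run) >= n_consec:                        # run reaching the scan's end
        betas, dists = zip(*run)
        targets.append((sum(dists) / len(dists), sum(betas) / len(betas)))
    return targets  # polar coordinates (l_bar, beta_bar) about the lidar origin
```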

The above description is only one specific example of the present invention and does not constitute any limitation on it. For those skilled in the art, after understanding the content and principles of the present invention, various modifications and changes in form and detail may obviously be made without departing from the principles and structure of the invention, but such modifications and changes based on the idea of the present invention still fall within the protection scope of the claims.

Claims (7)

1. A lidar-based dynamic target detection method for a robot, characterized by comprising the following implementation steps:

Step 1: A laser scanning radar mounted at the robot center collects and generates the point cloud set $[\beta_n, l_{1n}]$; the robot includes a localization unit that gives the robot's pose $(x, y, \alpha)$, where $\beta_n$ is the scanning angle of the lidar, whose range covers the radar's full scanning plane, and $l_{1n}$ is the distance from the robot to the target detected at that angle; a real-radar point cloud data plot is built with $\beta_n$ as the abscissa and $l_{1n}$ as the ordinate;

Step 2: Build a vector scene map of the scene: represent the obstacle blocks in the scene map and the surrounding edges of the scene as enclosing line segments, and obtain the coordinates of the start and end points of all segments;

Step 3: Build a simulated lidar model, input the pose of the real lidar, and use the intercept traversal method to find the point cloud set $[\beta_n, l_{2n}]$ formed by the intersections of all rays emitted by the simulated lidar with the map edges and obstacle blocks, obtaining the radar simulation plot;

Step 4: Correct the robot's pose by non-gradient optimization, and replace the real-radar pose previously fed to the simulation with the corrected pose $p_k$, obtaining a new simulated-lidar point cloud set and point cloud data plot;

Step 5: Use the corrected pose $p_k$ in place of the real-radar pose previously fed to the simulation to obtain a new simulated-lidar point cloud set;

Step 6: Judge whether a target exists from the difference, at each scanning angle, between the real-lidar point cloud set and the new simulated-lidar point cloud set.

2. The lidar-based dynamic target detection method for a robot according to claim 1, characterized in that the laser scanning radar used in Step 1 is a Hokuyo UTM-30LX, with scanning angle $\theta$ of 270 degrees, angular resolution $\lambda$ of 0.25 degrees, scanning range of 0.1–30 m, and scanning frequency of 40 Hz.

3. The lidar-based dynamic target detection method for a robot according to claim 2, characterized in that the point cloud data of the Hokuyo UTM-30LX are $(\beta_n, l_n)$, where $\beta_n$ and $l_n$ are the angle and distance of the n-th ray emitted by the real lidar relative to the polar coordinate system of the lidar body; the angle $\beta_n$ of the n-th ray emitted by the laser scanning radar relative to the polar coordinate system of the lidar body ranges over $\left[-\frac{\theta}{2}, \frac{\theta}{2}\right]$; and, since the radar is mounted at the center of the robot, the robot's pose is the pose of the laser scanning radar, produced by Monte Carlo localization or provided by a UWB or GPS system; denote the pose by $(x, y, \alpha)$, where $x, y$ are the position of the real lidar in the scene coordinate system and $\alpha$ is the angle between the real lidar's center line and the x-axis of the map coordinate system.

4. The lidar-based dynamic target detection method for a robot according to claim 1, characterized in that Step 3 specifically comprises the following steps:

3.1) Establish polar coordinates with the origin of the vector map coordinates and the X axis;

3.2) Let the real radar coordinates be $(x_0, y_0)$, the angle between the real radar center line and the x-axis be $\alpha$ degrees, the radar scanning range $\theta$ degrees, and the radar angular resolution $\lambda$ degrees; the simulated radar then emits $n = \frac{\theta}{\lambda}$ rays in total; these rays are emitted from the radar coordinate point at intervals of $\lambda$ degrees and cover the range $\left[\alpha - \frac{\theta}{2},\ \alpha + \frac{\theta}{2}\right]$ centered on $\alpha$; each ray eventually intersects a segment of an obstacle block or of the scene boundary, and these intersections are the point cloud coordinates $(x_i, y_i)$ obtained by the simulated radar scan;

3.3) Find each intersection by the vector method:

3.3.1) Convert the ray and the edge contour of the obstacle block or scene boundary into vectors; the angle $\gamma$ between the i-th ray and the horizontal direction satisfies $\gamma = \alpha - \frac{\theta}{2} + (i-1)\lambda$;

3.3.2) Find the intersection $(x_i, y_i)$ of the ray and the edge contour in two-dimensional space from the constraint that the two lines intersect: from 3.3.1), the direction vector of the i-th ray is $(\cos\gamma, \sin\gamma)$ and the ray starts at $(x_0, y_0)$, so $(x_0 + \cos\gamma,\ y_0 + \sin\gamma)$ is also a point on the ray; let the start and end of the edge contour segment be $(x_s, y_s)$ and $(x_e, y_e)$; the intersection then satisfies both line equations:

$$\frac{x_i - x_0}{\cos\gamma} = \frac{y_i - y_0}{\sin\gamma}, \qquad \frac{x_i - x_s}{x_e - x_s} = \frac{y_i - y_s}{y_e - y_s}$$

solving this system gives the intersection coordinates $(x_i, y_i)$, and the distance from the simulated radar coordinate point to the obstacle is $l_\beta = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2}$;

3.4) Record $\gamma$ and $l_\beta$ for all $n$ rays to obtain the simulated lidar point cloud set.

5. The lidar-based dynamic target detection method for a robot according to claim 1, characterized in that Step 4 specifically comprises the following steps:

4.1) Let the true pose of the radar in the map coordinate system be $(x_0, y_0, \alpha_0)$; this pose is unknown but real; the pose the robot obtains from its localization unit or by Monte Carlo self-localization is $(x_1, y_1, \alpha_1)$, which deviates more or less from the true pose because of sensor noise; define the unbiased rate $C = f_C(L, L') = \frac{N_u}{N_t}$, where $L$ is the real-lidar point cloud set, $L'$ the simulated-lidar point cloud set, $N_t$ the total number of lidar points (in this example $N_t = 1080$), and $N_u$ the number of points whose distance difference satisfies $l \leq \zeta$, with $\zeta$ the set threshold;

4.2) Initialize the unbiased rate $C = -1$; the pose the robot obtains at a given moment is $p_1 = (x_1, y_1, \alpha_1)$, where $x_1, y_1$ are the position of the real lidar in the scene coordinate system and $\alpha_1$ is the angle between the radar center line and the x-axis at that moment; the point cloud set scanned by the real lidar is $L_{1n} = [\beta_n, l_{1n}]$; taking this pose as the initial value, compute the point cloud of the simulated radar based on point $p_1$, $L_{2n} = [\beta_n, l_{2n}] = f(x_1, y_1, \alpha_1)$, and compute the current unbiased rate $C_1 = f_C(L_{1n}, L_{2n})$ with the formula in 4.1);

4.3) Let the step size be denoted step; offset $x_1$, $y_1$, and $\alpha_1$ by one step each to obtain the offset simulated-lidar point cloud sets in the three directions, $L_x = f(x_1 + \mathrm{step},\ y_1,\ \alpha_1)$, $L_y = f(x_1,\ y_1 + \mathrm{step},\ \alpha_1)$, $L_\alpha = f(x_1,\ y_1,\ \alpha_1 + \mathrm{step})$, with the corresponding unbiased rates $C_x = f_C(L_{1n}, L_x)$, $C_y = f_C(L_{1n}, L_y)$, $C_\alpha = f_C(L_{1n}, L_\alpha)$;

4.4) Define the offsets $dx = C_x - C_1$, $dy = C_y - C_1$, $d\alpha = C_\alpha - C_1$, giving the new pose $p_2 = (x_2, y_2, \alpha_2) = (x_1 + dx,\ y_1 + dy,\ \alpha_1 + d\alpha)$;

4.5) Repeat (4.3) and (4.4) 200 times, recording the maximum unbiased rate $C_{max}$ and its corresponding pose $p_k = (x_k, y_k, \alpha_k)$, where $x_k, y_k$ are the position of the simulated radar in the scene coordinate system when $C_{max}$ is reached and $\alpha_k$ is the angle between the radar center line and the x-axis at that time; this $p_k$ is the point closest to the robot's true pose.

6. The lidar-based dynamic target detection method for a robot according to claim 1, characterized in that Step 5 specifically comprises the following further steps: input the pose $p_k$ obtained in Step 4, the one closest to the robot's true pose, into the simulated lidar model and generate simulated-radar point cloud data based on this pose according to Step 3, yielding new simulated-radar point cloud data; compare the real-radar and simulated-radar point cloud data at $p_k$, where line A is the real-lidar point cloud, the second line B is the new simulated-radar point cloud, and the third line C is their difference; the robot's pose is thereby corrected.

7. The lidar-based dynamic target detection method for a robot according to claim 1, characterized in that Step 6 specifically comprises the following further steps:

6.1) Subtract, at each scanning angle, the new simulated-lidar point cloud from the real-lidar point cloud, and compare the difference with the set threshold $\zeta$: if, for $n$ consecutive points, the difference $l$ between the distances in the laser scanning radar point cloud plot and those of the corresponding points in the simulated-lidar point cloud plot exceeds $\zeta$, the laser scanning radar has detected a target; go to 6.2); otherwise, no target is detected;

6.2) Record the mean of $\beta_n$ over the $n$ adjacent points as $\bar{\beta}$ and the mean distance over those points as $\bar{l}_1$; taking the real lidar's location as the origin of a polar coordinate system, the polar coordinates of the target are $(\bar{l}_1, \bar{\beta})$.
CN202010310030.8A 2020-04-20 2020-04-20 Dynamic target detection method of robot based on laser radar Active CN111522022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010310030.8A CN111522022B (en) 2020-04-20 2020-04-20 Dynamic target detection method of robot based on laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010310030.8A CN111522022B (en) 2020-04-20 2020-04-20 Dynamic target detection method of robot based on laser radar

Publications (2)

Publication Number Publication Date
CN111522022A true CN111522022A (en) 2020-08-11
CN111522022B CN111522022B (en) 2023-03-28

Family

ID=71903379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010310030.8A Active CN111522022B (en) 2020-04-20 2020-04-20 Dynamic target detection method of robot based on laser radar

Country Status (1)

Country Link
CN (1) CN111522022B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112433211A (en) * 2020-11-27 2021-03-02 浙江商汤科技开发有限公司 Pose determination method and device, electronic equipment and storage medium
CN112732849A (en) * 2020-12-14 2021-04-30 北京航空航天大学 High-precision vector map compression method based on polar coordinate system
CN112884900A (en) * 2021-02-10 2021-06-01 广东高德星光智能科技有限公司 Landing positioning method and device for unmanned aerial vehicle, storage medium and unmanned aerial vehicle nest
CN113359151A (en) * 2021-08-09 2021-09-07 浙江华睿科技股份有限公司 Robot task point positioning method and device
CN114252859A (en) * 2020-09-24 2022-03-29 北京万集科技股份有限公司 Method, apparatus, computer equipment and storage medium for determining target area
CN114743449A (en) * 2020-12-23 2022-07-12 北醒(北京)光子科技有限公司 Thing networking teaching aid based on laser radar

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018182538A1 (en) * 2017-03-31 2018-10-04 Agency For Science, Technology And Research Systems and methods that improve alignment of a robotic arm to an object
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018182538A1 (en) * 2017-03-31 2018-10-04 Agency For Science, Technology And Research Systems and methods that improve alignment of a robotic arm to an object
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张艳国 (ZHANG Yanguo) et al., "Lidar point cloud fusion method based on an inertial measurement unit", Journal of System Simulation (《系统仿真学报》) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114252859A (en) * 2020-09-24 2022-03-29 北京万集科技股份有限公司 Method, apparatus, computer equipment and storage medium for determining target area
CN112433211A (en) * 2020-11-27 2021-03-02 浙江商汤科技开发有限公司 Pose determination method and device, electronic equipment and storage medium
WO2022110653A1 (en) * 2020-11-27 2022-06-02 浙江商汤科技开发有限公司 Pose determination method and apparatus, electronic device and computer-readable storage medium
CN112732849A (en) * 2020-12-14 2021-04-30 北京航空航天大学 High-precision vector map compression method based on polar coordinate system
CN114743449A (en) * 2020-12-23 2022-07-12 北醒(北京)光子科技有限公司 Thing networking teaching aid based on laser radar
CN112884900A (en) * 2021-02-10 2021-06-01 广东高德星光智能科技有限公司 Landing positioning method and device for unmanned aerial vehicle, storage medium and unmanned aerial vehicle nest
CN112884900B (en) * 2021-02-10 2025-03-18 广东高德星光智能科技有限公司 Landing positioning method, device, storage medium and drone nest for unmanned aerial vehicle
CN113359151A (en) * 2021-08-09 2021-09-07 浙江华睿科技股份有限公司 Robot task point positioning method and device

Also Published As

Publication number Publication date
CN111522022B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN111522022A (en) Dynamic target detection method of robot based on laser radar
CN110645974B (en) An indoor map construction method for mobile robots based on fusion of multi-sensors
CN107239076B (en) AGV laser SLAM method based on virtual scanning and distance measurement matching
CN109597864B (en) Method and system for real-time positioning and map construction of ellipsoid boundary Kalman filtering
CN111380573B (en) Method for calibrating the orientation of a moving object sensor
CN110142805A (en) A kind of robot end's calibration method based on laser radar
CN112882053B (en) Method for actively calibrating external parameters of laser radar and encoder
CN104062973A (en) Mobile robot SLAM method based on image marker identification
CN112904358B (en) Laser positioning method based on geometric information
CN110736456B (en) Two-dimensional laser real-time positioning method based on feature extraction in sparse environment
CN115639547A (en) Multi-line laser radar and GNSS-INS combined calibration method, system and medium
CN111862214B (en) Computer equipment positioning method, device, computer equipment and storage medium
CN110986956A (en) Autonomous learning global positioning method based on improved Monte Carlo algorithm
CN111207753A (en) A method for simultaneous positioning and mapping in a multi-glass partition environment
Xue et al. Visual-marker-based localization for flat-variation scene
CN115047485A (en) Laser radar motion distortion compensation method based on NDT registration and wheel type odometer
CN110554405A (en) normal scanning registration method and system based on cluster combination
CN111735443B (en) Dense target track correlation method based on assignment matrix
CN113902828A (en) Construction method of indoor two-dimensional semantic map with corner as key feature
CN118565494A (en) Vehicle positioning method and system
CN110007680B (en) Robot obstacle avoidance algorithm based on topological relation
Deusch et al. Improving localization in digital maps with grid maps
CN114216465B (en) Submarine topography navigation parallel matching method
CN116858219A (en) Multi-sensor fusion-based pipe robot map building and navigation method
CN116255981A (en) Laser radar positioning algorithm based on preset line feature extraction and matching and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant