CN109633688B - LiDAR obstacle recognition method and device - Google Patents
LiDAR obstacle recognition method and device
- Publication number
- CN109633688B (application CN201811533572.0A)
- Authority
- CN
- China
- Prior art keywords
- obstacle
- identified
- frame
- point cloud
- blind area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/4802—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- Traffic Control Systems (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The present application provides a LiDAR obstacle recognition method and device. The method includes: obtaining information on an obstacle to be recognized in N+1 consecutive frames scanned by a LiDAR around an unmanned vehicle; determining whether the obstacle to be recognized enters the LiDAR blind zone in the (N+1)th frame; truncating the laser point cloud of the obstacle in the Nth frame according to the length by which the obstacle enters the blind zone; matching the truncated point cloud against the point cloud of the obstacle in the (N+1)th frame; completing the point cloud of the obstacle in the (N+1)th frame; and performing obstacle recognition based on the completed point cloud. The method can recognize obstacles that have partially entered the LiDAR blind zone, avoiding the collision risk caused by misestimating their length and distance, and effectively improving the driving safety of unmanned vehicles.
Description
[Technical Field]
The present application relates to the field of automatic control, and in particular to a LiDAR obstacle recognition method and device.
[Background Art]
An unmanned vehicle integrates multiple types of sensors: a GPS-IMU (Inertial Measurement Unit) integrated navigation module, cameras, LiDAR, millimeter-wave radar, and so on.
While an unmanned vehicle is driving, it relies mainly on LiDAR to detect obstacles, and LiDAR can perceive obstacle shapes accurately. However, because the LiDAR is mounted at the front center of the vehicle roof, its vertical field of view is only plus or minus 15 degrees. Occluded by the vehicle body, it has a blind zone, generally within a range of 6-7 m. Consequently, existing unmanned vehicles can only detect obstacles that appear within the LiDAR's field of view. As the vehicle moves, an obstacle is likely to enter the LiDAR blind zone, causing it to be recognized incorrectly or lost. For example, when the obstacle is a bus and part of it has entered the blind zone, the LiDAR may misestimate its length and its distance to the vehicle and recognize it as a small car located outside the blind zone. This affects the decisions of the vehicle's decision and control module and in turn creates a collision risk.
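One way to see the order of magnitude of this blind zone is the geometry of the downward-looking beam: with a vertical field of view of plus or minus 15 degrees, the lowest beam only reaches the ground some distance ahead of the sensor. The sketch below works through this arithmetic; the roof mounting height is an assumed example value, not taken from the patent, and the vehicle-body occlusion named above would add to the result.

```python
import math

fov_lower_deg = 15.0   # downward half of the vertical field of view, per the description
mount_height = 1.8     # assumed roof mounting height in metres (example value, not from the patent)

# Ground distance at which the lowest beam first reaches the road surface:
blind_radius = mount_height / math.tan(math.radians(fov_lower_deg))
print(round(blind_radius, 1))  # ≈ 6.7 m, on the order of the stated 6-7 m blind zone
```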
[Summary of the Invention]
Aspects of the present application provide a LiDAR obstacle recognition method and device, which are used to solve the problem of obstacle recognition errors caused by an obstacle partially entering the LiDAR blind zone.
In one aspect, the present application provides a LiDAR obstacle recognition method, including:
obtaining information on an obstacle to be recognized in N+1 consecutive frames scanned by a LiDAR around an unmanned vehicle;
determining, according to the information on the obstacle to be recognized in each of the first N of the N+1 frames, whether the obstacle enters the LiDAR blind zone in the (N+1)th frame; and, if it enters the blind zone, truncating the laser point cloud of the obstacle in the Nth frame according to the length by which the obstacle enters the blind zone;
matching the truncated laser point cloud of the obstacle against the laser point cloud of the obstacle in the (N+1)th frame; and, if the matching succeeds, completing the laser point cloud of the obstacle in the (N+1)th frame; and
performing obstacle recognition according to the completed laser point cloud of the obstacle to be recognized.
In the above aspect and any possible implementation, an implementation is further provided, in which obtaining the information on the obstacle to be recognized in N+1 consecutive frames scanned by the LiDAR around the unmanned vehicle includes:
obtaining the point cloud of the obstacle to be recognized in each of the first N of the N+1 frames, and matching the point clouds of the obstacle across these frames to determine the correspondence of the obstacle between frames.
In the above aspect and any possible implementation, an implementation is further provided, in which determining, according to the information on the obstacle in each of the first N of the N+1 frames, whether the obstacle enters the LiDAR blind zone in the (N+1)th frame includes:
determining, from the point clouds of the obstacle in each of the first N frames, the position, moving speed, and moving direction of the obstacle in the Nth frame as well as the extent of the LiDAR blind zone, and thereby determining the length by which the obstacle enters the blind zone in the (N+1)th frame.
In the above aspect and any possible implementation, an implementation is further provided, in which truncating the laser point cloud of the obstacle in the Nth frame according to the length by which the obstacle enters the LiDAR blind zone includes:
truncating the laser point cloud of the obstacle in the Nth frame according to a three-dimensional model of the blind zone and the length by which the obstacle enters the blind zone.
In the above aspect and any possible implementation, an implementation is further provided, in which completing the laser point cloud of the obstacle in the (N+1)th frame includes:
if the obstacle has not entered the LiDAR blind zone in the Nth frame, completing, according to the laser point cloud of the obstacle in the Nth frame, the part of the laser point cloud of the obstacle in the (N+1)th frame that has entered the blind zone; and
if the obstacle has already entered the LiDAR blind zone in the Nth frame, completing, according to the completed laser point cloud of the obstacle in the Nth frame, the part of the laser point cloud of the obstacle in the (N+1)th frame that has entered the blind zone.
In another aspect, the present application provides a LiDAR obstacle recognition device, including:
an obtaining module, configured to obtain information on an obstacle to be recognized in N+1 consecutive frames scanned by a LiDAR around an unmanned vehicle;
a truncation module, configured to determine, according to the information on the obstacle in each of the first N of the N+1 frames, whether the obstacle enters the LiDAR blind zone in the (N+1)th frame, and, if it enters the blind zone, to truncate the laser point cloud of the obstacle in the Nth frame according to the length by which the obstacle enters the blind zone;
a completion module, configured to match the truncated laser point cloud of the obstacle against the laser point cloud of the obstacle in the (N+1)th frame, and, if the matching succeeds, to complete the laser point cloud of the obstacle in the (N+1)th frame; and
a recognition module, configured to perform obstacle recognition according to the completed laser point cloud of the obstacle to be recognized.
In the above aspect and any possible implementation, an implementation is further provided, in which the obtaining module is specifically configured to:
obtain the point cloud of the obstacle to be recognized in each of the first N of the N+1 frames, and match the point clouds of the obstacle across these frames to determine the correspondence of the obstacle between frames.
In the above aspect and any possible implementation, an implementation is further provided, in which the truncation module is specifically configured to:
determine, from the point clouds of the obstacle in each of the first N frames, the position, moving speed, and moving direction of the obstacle in the Nth frame as well as the extent of the LiDAR blind zone, and thereby determine the length by which the obstacle enters the blind zone in the (N+1)th frame.
In the above aspect and any possible implementation, an implementation is further provided, in which the truncation module is further specifically configured to:
truncate the laser point cloud of the obstacle in the Nth frame according to a three-dimensional model of the blind zone and the length by which the obstacle enters the blind zone.
In the above aspect and any possible implementation, an implementation is further provided, in which the completion module is specifically configured to:
if the obstacle has not entered the LiDAR blind zone in the Nth frame, complete, according to the laser point cloud of the obstacle in the Nth frame, the part of the laser point cloud of the obstacle in the (N+1)th frame that has entered the blind zone; and
if the obstacle has already entered the LiDAR blind zone in the Nth frame, complete, according to the completed laser point cloud of the obstacle in the Nth frame, the part of the laser point cloud of the obstacle in the (N+1)th frame that has entered the blind zone.
In another aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method described above.
It can be seen from the above technical solutions that the embodiments of the present application can recognize obstacles that have partially entered the LiDAR blind zone, avoiding the collision risk caused by misestimating their length and distance.
[Brief Description of the Drawings]
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a LiDAR obstacle recognition method provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a LiDAR obstacle recognition device provided by an embodiment of the present application;
FIG. 3 is a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the present invention.
[Detailed Description]
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
FIG. 1 is a schematic diagram of a LiDAR obstacle recognition method provided by an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step S11: obtain information on an obstacle to be recognized in N+1 consecutive frames scanned by a LiDAR around an unmanned vehicle.
Step S12: determine, according to the information on the obstacle in each of the first N of the N+1 frames, whether the obstacle enters the LiDAR blind zone in the (N+1)th frame; if it enters the blind zone, truncate the laser point cloud of the obstacle in the Nth frame according to the length by which the obstacle enters the blind zone.
Step S13: match the truncated laser point cloud of the obstacle against the laser point cloud of the obstacle in the (N+1)th frame; if the matching succeeds, complete the laser point cloud of the obstacle in the (N+1)th frame.
Step S14: perform obstacle recognition according to the completed laser point cloud of the obstacle to be recognized.
In a preferred implementation of step S11:
The obstacle recognition method of this embodiment is applied in the technical field of unmanned driving. In unmanned driving, the unmanned vehicle needs to be able to automatically recognize obstacles on the road so that decisions and control can be made in time while the vehicle is driving, facilitating safe driving. The obstacle recognition method of this embodiment may be executed by an obstacle recognition device, which may be obtained by integrating multiple modules and may be arranged in the unmanned vehicle to control it.
The information on the obstacle to be recognized in this embodiment may be obtained by LiDAR scanning. The LiDAR may have 16, 32, or 64 lines, and so on; the higher the number of lines, the greater the energy density per unit area of the LiDAR. In this embodiment, the LiDAR mounted on the current vehicle rotates 360 times per second, and one full scan of the surroundings of the current vehicle constitutes one frame of information on the obstacles to be recognized. The information on an obstacle to be recognized in this embodiment may include the obstacle's point cloud and its reflection values. There may be one or more obstacles to be recognized around the current vehicle. After the LiDAR scan, the centroid of the current vehicle may be taken as the origin of a coordinate system, with the two directions parallel to the horizontal plane taken as the x and y directions (the length and width directions) and the direction perpendicular to the ground as the z direction (the height direction). The obstacle can then be located in this coordinate system according to the relative position and distance between each point of its point cloud and the origin.
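As a minimal illustration of this vehicle-centered representation, the sketch below shifts raw LiDAR returns (given relative to the sensor) into the frame whose origin is the vehicle centroid. The sensor offset and the assumption that the two frames share the same axis orientation are illustrative choices, not values from the patent.

```python
import numpy as np

# Hypothetical sensor offset: LiDAR at the front center of the roof, expressed in the
# vehicle-centroid frame (x forward, y left, z up). Example value only.
SENSOR_OFFSET = np.array([1.2, 0.0, 1.8])

def to_vehicle_frame(points_sensor: np.ndarray) -> np.ndarray:
    """Shift an (M, 3) array of LiDAR points from the sensor frame to the
    vehicle-centroid frame (same axis orientation assumed)."""
    return points_sensor + SENSOR_OFFSET

# Example: three returns on an obstacle ahead of the vehicle.
scan = np.array([[10.0, -0.5, 0.2],
                 [10.4,  0.0, 0.9],
                 [10.8,  0.5, 1.4]])
cloud = to_vehicle_frame(scan)
print(cloud)                                   # points relative to the vehicle centroid
print(np.linalg.norm(cloud[:, :2], axis=1))    # planar distance of each point to the origin
```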
Preferably, the obstacles to be recognized in each of the first N of the N+1 frames are matched to determine the correspondence of the obstacles between frames, and the obstacles in each frame are recognized. The matching operation avoids the problem that, in complex scenes (for example, with multiple obstacles), tracking of an obstacle fails and the same obstacle in different frames is recognized as different obstacles.
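The patent does not fix a particular matching criterion for establishing this correspondence. Below is a minimal sketch that stands in for it, assuming a nearest-centroid association between the obstacle clouds of consecutive frames with a distance gate to reject implausible matches; the function name and the gate value are illustrative.

```python
import numpy as np

def match_obstacles(prev_clouds, curr_clouds, max_dist=2.0):
    """Associate each obstacle cloud in the current frame with the closest obstacle
    cloud (by centroid distance) in the previous frame.
    Returns a dict {current_index: previous_index or None}."""
    prev_centroids = [c.mean(axis=0) for c in prev_clouds]
    matches = {}
    for j, cloud in enumerate(curr_clouds):
        centroid = cloud.mean(axis=0)
        dists = [np.linalg.norm(centroid - p) for p in prev_centroids]
        best = int(np.argmin(dists)) if dists else None
        matches[j] = best if best is not None and dists[best] <= max_dist else None
    return matches

# Example: two obstacles tracked between two consecutive frames.
prev = [np.random.rand(50, 3) + [10, 0, 0], np.random.rand(50, 3) + [5, 6, 0]]
curr = [np.random.rand(50, 3) + [9.7, 0, 0], np.random.rand(50, 3) + [5, 5.8, 0]]
print(match_obstacles(prev, curr))   # e.g. {0: 0, 1: 1}
```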
Preferably, the recognition is performed separately after the information on the obstacles to be recognized in each frame scanned by the LiDAR around the unmanned vehicle is obtained.
In this way, in each frame, the point cloud of each obstacle to be recognized can be obtained according to the relative position between each of its points and the current vehicle. In addition, the LiDAR can also detect the reflection value of each point of each obstacle. In practical applications, the coordinate system may also take the centroid of the LiDAR as the origin, with the other directions unchanged. The value of N in this embodiment may be chosen according to actual requirements.
In a preferred implementation of step S12:
Determine, according to the information on the obstacle in each of the first N of the N+1 frames, whether the obstacle enters the LiDAR blind zone in the (N+1)th frame; if the obstacle enters the blind zone in the (N+1)th frame, truncate the laser point cloud of the obstacle in the Nth frame according to the length by which the obstacle enters the blind zone.
Preferably, in each of the first N frames, the point cloud of each obstacle is obtained and matched against the point cloud of the obstacle in the previous frame, so as to determine the point cloud of the same obstacle in each of the first N frames.
From the point clouds of the same obstacle in each of the first N frames, the position, moving speed, and moving direction of the obstacle in the Nth frame are determined in a coordinate system whose origin is the centroid of the current unmanned vehicle.
It is then determined whether the obstacle enters the LiDAR blind zone in the (N+1)th frame. In this embodiment, the LiDAR blind zone is the region within 8 m of the origin, i.e., the centroid of the current unmanned vehicle.
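A minimal check for this 8 m blind zone, in the vehicle-centroid frame used above, might look as follows. The planar-radius criterion (ignoring z) is an assumption; the patent only states the 8 m range.

```python
import numpy as np

BLIND_RADIUS = 8.0  # metres, as stated in this embodiment

def points_in_blind_zone(cloud: np.ndarray, radius: float = BLIND_RADIUS) -> np.ndarray:
    """Boolean mask of points whose planar distance to the vehicle centroid is
    below the blind-zone radius (z is ignored here as a simplification)."""
    return np.linalg.norm(cloud[:, :2], axis=1) < radius

cloud = np.array([[7.5, 0.0, 0.5], [9.0, 0.0, 0.5], [12.0, 1.0, 0.5]])
print(points_in_blind_zone(cloud))   # [ True False False ]
```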
If, in the Nth frame, the obstacle is already located at the edge of the LiDAR blind zone and the relative speed between the unmanned vehicle and the obstacle is positive, then the obstacle enters the blind zone in the (N+1)th frame. Because the LiDAR scan period is short, for example 360 scans per second, within 1/360 of a second the obstacle can only partially enter the blind zone, so the LiDAR can only obtain part of the obstacle's laser point cloud. Since the distribution of the obstacle's partial point cloud in the (N+1)th frame differs from that of its point cloud in the Nth frame, a direct match would fail, and the obstacle in the (N+1)th frame would be judged to be a new obstacle, rather than the obstacle of the Nth frame having partially entered the blind zone.
Therefore, the length by which the obstacle enters the LiDAR blind zone in the (N+1)th frame is determined from the position, moving speed, and moving direction of the obstacle in the Nth frame in the coordinate system whose origin is the centroid of the current unmanned vehicle, together with the extent of the blind zone.
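Below is a minimal sketch of this prediction for the simplified case of an obstacle directly ahead of the vehicle and approaching along the x axis; the one-dimensional geometry and the variable names are assumptions made for illustration, not the patent's general formulation.

```python
import numpy as np

BLIND_RADIUS = 8.0  # metres (blind-zone extent in this embodiment)

def blind_zone_entry_length(cloud_n: np.ndarray,
                            rel_speed: float,
                            dt: float,
                            blind_radius: float = BLIND_RADIUS) -> float:
    """Predict how far (in metres) the obstacle will protrude into the blind zone in
    frame N+1, assuming it approaches head-on along +x at rel_speed (m/s, positive
    when closing) over one frame interval dt (s)."""
    near_edge_n = cloud_n[:, 0].min()              # closest point of the obstacle in frame N
    near_edge_n1 = near_edge_n - rel_speed * dt    # predicted closest point in frame N+1
    return max(0.0, blind_radius - near_edge_n1)

# Example: a 12 m long obstacle whose near edge sits right at the blind-zone boundary.
cloud_n = np.column_stack([np.linspace(8.0, 20.0, 200),
                           np.zeros(200), np.full(200, 1.0)])
print(blind_zone_entry_length(cloud_n, rel_speed=5.0, dt=1.0 / 360))  # ≈ 0.014 m
```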
The laser point cloud of the obstacle in the Nth frame is truncated according to the length by which the obstacle enters the blind zone in the (N+1)th frame, so that it can be matched against the laser point cloud of the obstacle that has partially entered the blind zone in the (N+1)th frame.
Preferably, the portion of the Nth-frame laser point cloud corresponding to the length that enters the blind zone is cut off, and the cut takes into account a three-dimensional model of how the LiDAR is occluded by the body of the unmanned vehicle, rather than being a simple vertical cut.
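The patent's cut follows a 3D occlusion model of the vehicle body; the sketch below instead uses a deliberately simplified vertical cut along x, just to show where the truncation step sits in the pipeline. The plane-cut simplification and the parameter names are assumptions.

```python
import numpy as np

def truncate_cloud(cloud_n: np.ndarray, entry_length: float):
    """Split the Nth-frame obstacle cloud into the part expected to stay visible in
    frame N+1 and the part expected to fall inside the blind zone.
    Simplification: a vertical cut at (nearest x of the cloud + entry_length),
    instead of the patent's 3D occlusion model of the vehicle body."""
    cut_x = cloud_n[:, 0].min() + entry_length     # points below this x are predicted occluded
    visible = cloud_n[cloud_n[:, 0] >= cut_x]      # predicted (N+1)th-frame visible part
    hidden = cloud_n[cloud_n[:, 0] < cut_x]        # part predicted to be occluded
    return visible, hidden

cloud_n = np.column_stack([np.linspace(8.0, 20.0, 200),
                           np.zeros(200), np.full(200, 1.0)])
visible, hidden = truncate_cloud(cloud_n, entry_length=0.5)
print(len(visible), len(hidden))   # most points kept, a thin slice cut off
```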
In a preferred implementation of step S13:
The truncated laser point cloud of the obstacle is matched against the laser point cloud of the obstacle in the (N+1)th frame.
Here, the laser point cloud obtained by truncating the obstacle's point cloud in the Nth frame is a prediction of the obstacle's point cloud in the (N+1)th frame; matching it against the actual point cloud of the obstacle in the (N+1)th frame improves the matching accuracy and success rate.
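The patent does not specify the matching metric. The sketch below compares the predicted (truncated) cloud with the observed (N+1)th-frame cloud by centroid distance and axis-aligned extent, one simple stand-in for the matching step; the tolerances are illustrative values.

```python
import numpy as np

def clouds_match(predicted: np.ndarray, observed: np.ndarray,
                 centroid_tol: float = 0.5, extent_tol: float = 0.5) -> bool:
    """Rudimentary match test: centroids and axis-aligned extents of the predicted
    (truncated) cloud and the observed (N+1)th-frame cloud must agree within the
    given tolerances (metres)."""
    if len(predicted) == 0 or len(observed) == 0:
        return False
    centroid_ok = np.linalg.norm(predicted.mean(axis=0) - observed.mean(axis=0)) < centroid_tol
    extent_pred = predicted.max(axis=0) - predicted.min(axis=0)
    extent_obs = observed.max(axis=0) - observed.min(axis=0)
    extent_ok = np.all(np.abs(extent_pred - extent_obs) < extent_tol)
    return bool(centroid_ok and extent_ok)

predicted = np.random.rand(100, 3) + [10, 0, 0]
observed = predicted + np.random.normal(0.0, 0.02, predicted.shape)
print(clouds_match(predicted, observed))   # True
```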
If the matching succeeds, the laser point cloud of the obstacle in the (N+1)th frame is completed; that is, the part of the obstacle's laser point cloud that has entered the LiDAR blind zone in the (N+1)th frame is filled in.
Preferably, the completion fills in the part of the (N+1)th-frame point cloud that has entered the blind zone according to the obstacle's laser point cloud in the Nth frame. Because the completion is performed on the basis of the obstacle's laser point cloud in the (N+1)th frame, the completed point cloud better matches the actual shape of the obstacle.
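One way this completion might be done, under the same one-dimensional assumptions as above, is to shift the points cut away from the Nth-frame cloud by the obstacle's predicted motion and append them to the observed (N+1)th-frame cloud. The function and variable names are illustrative; when the Nth-frame cloud was itself completed earlier, its completed version would be used as the reference, as described above.

```python
import numpy as np

def complete_cloud(observed_n1: np.ndarray, hidden_from_n: np.ndarray,
                   rel_speed: float, dt: float) -> np.ndarray:
    """Append the occluded part of the Nth-frame cloud (shifted by the obstacle's
    predicted motion over one frame) to the observed (N+1)th-frame cloud.
    Pass the hidden part of the previously completed Nth-frame cloud when the
    obstacle was already partly in the blind zone, so completion chains across frames."""
    shift = np.array([-rel_speed * dt, 0.0, 0.0])   # head-on approach along +x assumed
    return np.vstack([observed_n1, hidden_from_n + shift])

# Example: an observed partial cloud plus a thin occluded slice recovered from frame N.
observed_n1 = np.column_stack([np.linspace(8.0, 19.9, 190), np.zeros(190), np.ones(190)])
hidden = np.column_stack([np.linspace(8.0, 8.1, 10), np.zeros(10), np.ones(10)])
completed = complete_cloud(observed_n1, hidden, rel_speed=5.0, dt=1.0 / 360)
print(completed[:, 0].min(), completed[:, 0].max())   # full extent recovered
```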
In a preferred implementation of step S14:
Obstacle recognition is performed according to the completed laser point cloud of the obstacle to be recognized.
Preferably, accurate parameters such as the length of the obstacle and its distance can be obtained from the completed laser point cloud, which provides a more accurate point cloud for recognizing the obstacle and ensures the accuracy of the recognition.
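As a minimal illustration of this final step, the sketch below reads the obstacle's length and its distance to the vehicle centroid off an axis-aligned bounding box of the completed cloud. The bounding-box approach is an assumption, not prescribed by the patent.

```python
import numpy as np

def describe_obstacle(completed: np.ndarray) -> dict:
    """Length, width, and closest planar distance of the obstacle, derived from an
    axis-aligned bounding box of the completed cloud (vehicle-centroid frame)."""
    mins, maxs = completed.min(axis=0), completed.max(axis=0)
    return {
        "length_m": float(maxs[0] - mins[0]),
        "width_m": float(maxs[1] - mins[1]),
        "distance_m": float(np.linalg.norm(completed[:, :2], axis=1).min()),
    }

completed = np.column_stack([np.linspace(7.9, 19.9, 200), np.zeros(200), np.ones(200)])
print(describe_obstacle(completed))
# e.g. {'length_m': 12.0, 'width_m': 0.0, 'distance_m': 7.9}
```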
In another preferred implementation of this embodiment, if in the Nth frame the obstacle has already partially entered the LiDAR blind zone and the relative speed between the unmanned vehicle and the obstacle is positive, the obstacle moves further into the blind zone in the (N+1)th frame. Because the LiDAR scan period is short, for example 360 scans per second, within 1/360 of a second the obstacle can only partially enter the blind zone, so the LiDAR can only obtain part of the obstacle's laser point cloud. Since the distribution of the obstacle's partial point cloud in the (N+1)th frame differs from that in the Nth frame, a direct match would fail, and the obstacle in the (N+1)th frame would be judged to be a new obstacle rather than the obstacle of the Nth frame moving further into the blind zone. Truncation, matching, and completion are performed in the same manner as in the above embodiment of the present application, except that the laser point cloud of the obstacle in the Nth frame is the point cloud obtained after completing the obstacle's laser point cloud in the Nth frame.
In another preferred implementation of this embodiment, if in the Nth frame the obstacle has already partially entered the LiDAR blind zone and the relative speed between the unmanned vehicle and the obstacle is negative, then in the (N+1)th frame the obstacle still partially occupies the blind zone (a smaller part of it is inside the blind zone, or it has left the blind zone entirely). Because the LiDAR scan period is short, for example 360 scans per second, within 1/360 of a second the obstacle can only be partially inside the blind zone, so the LiDAR can only obtain part of the obstacle's laser point cloud. Since the distribution of the obstacle's partial point cloud in the (N+1)th frame differs from that in the Nth frame, a direct match would fail, and the obstacle in the (N+1)th frame would be judged to be a new obstacle rather than the same obstacle. In the same manner as in the above embodiment, the laser point cloud of the obstacle in the Nth frame is completed and matched against the laser point cloud of the obstacle in the (N+1)th frame; after the matching succeeds, the laser point cloud of the obstacle in the (N+1)th frame is completed, obstacle recognition is performed according to the completed point cloud, and it is determined whether the obstacle affects the driving of the unmanned vehicle. Here, the laser point cloud of the obstacle in the Nth frame is the point cloud obtained after completing the obstacle's laser point cloud in the Nth frame.
With the technical solutions provided by the above embodiments, obstacles that have partially entered the LiDAR blind zone can be recognized, avoiding the collision risk caused by misestimating their length and distance and effectively improving the driving safety of unmanned vehicles.
It should be noted that, for the sake of brevity, the foregoing method embodiments are described as a series of action combinations, but a person skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Furthermore, a person skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
The above describes the method embodiments; the solution of the present invention is further described below through device embodiments.
FIG. 2 is a schematic structural diagram of a LiDAR obstacle recognition device provided by an embodiment of the present application. As shown in FIG. 2, the device includes:
an obtaining module 21, configured to obtain information on an obstacle to be recognized in N+1 consecutive frames scanned by a LiDAR around an unmanned vehicle;
a truncation module 22, configured to determine, according to the information on the obstacle in each of the first N of the N+1 frames, whether the obstacle enters the LiDAR blind zone in the (N+1)th frame, and, if it enters the blind zone, to truncate the laser point cloud of the obstacle in the Nth frame according to the length by which the obstacle enters the blind zone;
a completion module 23, configured to match the truncated laser point cloud of the obstacle against the laser point cloud of the obstacle in the (N+1)th frame, and, if the matching succeeds, to complete the laser point cloud of the obstacle in the (N+1)th frame; and
a recognition module 24, configured to perform obstacle recognition according to the completed laser point cloud of the obstacle to be recognized.
In a preferred implementation of the obtaining module 21:
The obstacle recognition method of this embodiment is applied in the technical field of unmanned driving. In unmanned driving, the unmanned vehicle needs to be able to automatically recognize obstacles on the road so that decisions and control can be made in time while the vehicle is driving, facilitating safe driving. The obstacle recognition method of this embodiment may be executed by an obstacle recognition device, which may be obtained by integrating multiple modules and may be arranged in the unmanned vehicle to control it.
The information on the obstacle to be recognized in this embodiment may be obtained by LiDAR scanning. The LiDAR may have 16, 32, or 64 lines, and so on; the higher the number of lines, the greater the energy density per unit area of the LiDAR. In this embodiment, the LiDAR mounted on the current vehicle rotates 360 times per second, and one full scan of the surroundings of the current vehicle constitutes one frame of information on the obstacles to be recognized. The information on an obstacle to be recognized in this embodiment may include the obstacle's point cloud and its reflection values. There may be one or more obstacles to be recognized around the current vehicle. After the LiDAR scan, the centroid of the current vehicle may be taken as the origin of a coordinate system, with the two directions parallel to the horizontal plane taken as the x and y directions (the length and width directions) and the direction perpendicular to the ground as the z direction (the height direction). The obstacle can then be located in this coordinate system according to the relative position and distance between each point of its point cloud and the origin.
Preferably, the obstacles to be recognized in each of the first N of the N+1 frames are matched to determine the correspondence of the obstacles between frames, and the obstacles in each frame are recognized. The matching operation avoids the problem that, in complex scenes (for example, with multiple obstacles), tracking of an obstacle fails and the same obstacle in different frames is recognized as different obstacles.
Preferably, the recognition is performed separately after the information on the obstacles to be recognized in each frame scanned by the LiDAR around the unmanned vehicle is obtained.
In this way, in each frame, the point cloud of each obstacle to be recognized can be obtained according to the relative position between each of its points and the current vehicle. In addition, the LiDAR can also detect the reflection value of each point of each obstacle. In practical applications, the coordinate system may also take the centroid of the LiDAR as the origin, with the other directions unchanged. The value of N in this embodiment may be chosen according to actual requirements.
In a preferred implementation of the truncation module 22:
Determine, according to the information on the obstacle in each of the first N of the N+1 frames, whether the obstacle enters the LiDAR blind zone in the (N+1)th frame; if the obstacle enters the blind zone in the (N+1)th frame, truncate the laser point cloud of the obstacle in the Nth frame according to the length by which the obstacle enters the blind zone.
Preferably, in each of the first N frames, the point cloud of each obstacle is obtained and matched against the point cloud of the obstacle in the previous frame, so as to determine the point cloud of the same obstacle in each of the first N frames.
From the point clouds of the same obstacle in each of the first N frames, the position, moving speed, and moving direction of the obstacle in the Nth frame are determined in a coordinate system whose origin is the centroid of the current unmanned vehicle.
It is then determined whether the obstacle enters the LiDAR blind zone in the (N+1)th frame. In this embodiment, the LiDAR blind zone is the region within 8 m of the origin, i.e., the centroid of the current unmanned vehicle.
If, in the Nth frame, the obstacle is already located at the edge of the LiDAR blind zone and the relative speed between the unmanned vehicle and the obstacle is positive, then the obstacle enters the blind zone in the (N+1)th frame. Because the LiDAR scan period is short, for example 360 scans per second, within 1/360 of a second the obstacle can only partially enter the blind zone, so the LiDAR can only obtain part of the obstacle's laser point cloud. Since the distribution of the obstacle's partial point cloud in the (N+1)th frame differs from that of its point cloud in the Nth frame, a direct match would fail, and the obstacle in the (N+1)th frame would be judged to be a new obstacle, rather than the obstacle of the Nth frame having partially entered the blind zone.
Therefore, the length by which the obstacle enters the LiDAR blind zone in the (N+1)th frame is determined from the position, moving speed, and moving direction of the obstacle in the Nth frame in the coordinate system whose origin is the centroid of the current unmanned vehicle, together with the extent of the blind zone.
The laser point cloud of the obstacle in the Nth frame is truncated according to the length by which the obstacle enters the blind zone in the (N+1)th frame, so that it can be matched against the laser point cloud of the obstacle that has partially entered the blind zone in the (N+1)th frame.
Preferably, the portion of the Nth-frame laser point cloud corresponding to the length that enters the blind zone is cut off, and the cut takes into account a three-dimensional model of how the LiDAR is occluded by the body of the unmanned vehicle, rather than being a simple vertical cut.
In a preferred implementation of the completion module 23:
The truncated laser point cloud of the obstacle is matched against the laser point cloud of the obstacle in the (N+1)th frame.
Here, the laser point cloud obtained by truncating the obstacle's point cloud in the Nth frame is a prediction of the obstacle's point cloud in the (N+1)th frame; matching it against the actual point cloud of the obstacle in the (N+1)th frame improves the matching accuracy and success rate.
If the matching succeeds, the laser point cloud of the obstacle in the (N+1)th frame is completed; that is, the part of the obstacle's laser point cloud that has entered the LiDAR blind zone in the (N+1)th frame is filled in.
Preferably, the completion fills in the part of the (N+1)th-frame point cloud that has entered the blind zone according to the obstacle's laser point cloud in the Nth frame. Because the completion is performed on the basis of the obstacle's laser point cloud in the (N+1)th frame, the completed point cloud better matches the actual shape of the obstacle.
In a preferred implementation of the recognition module 24:
Obstacle recognition is performed according to the completed laser point cloud of the obstacle to be recognized.
Preferably, accurate parameters such as the length of the obstacle and its distance can be obtained from the completed laser point cloud, which provides a more accurate point cloud for recognizing the obstacle and ensures the accuracy of the recognition.
In another preferred implementation of this embodiment, if in the Nth frame the obstacle has already partially entered the LiDAR blind zone and the relative speed between the unmanned vehicle and the obstacle is positive, the obstacle moves further into the blind zone in the (N+1)th frame. Because the LiDAR scan period is short, for example 360 scans per second, within 1/360 of a second the obstacle can only partially enter the blind zone, so the LiDAR can only obtain part of the obstacle's laser point cloud. Since the distribution of the obstacle's partial point cloud in the (N+1)th frame differs from that in the Nth frame, a direct match would fail, and the obstacle in the (N+1)th frame would be judged to be a new obstacle rather than the obstacle of the Nth frame moving further into the blind zone. Truncation, matching, and completion are performed in the same manner as in the above embodiment of the present application, except that the laser point cloud of the obstacle in the Nth frame is the point cloud obtained after completing the obstacle's laser point cloud in the Nth frame.
In another preferred implementation of this embodiment, if in the Nth frame the obstacle has already partially entered the LiDAR blind zone and the relative speed between the unmanned vehicle and the obstacle is negative, then in the (N+1)th frame the obstacle still partially occupies the blind zone (a smaller part of it is inside the blind zone, or it has left the blind zone entirely). Because the LiDAR scan period is short, for example 360 scans per second, within 1/360 of a second the obstacle can only be partially inside the blind zone, so the LiDAR can only obtain part of the obstacle's laser point cloud. Since the distribution of the obstacle's partial point cloud in the (N+1)th frame differs from that in the Nth frame, a direct match would fail, and the obstacle in the (N+1)th frame would be judged to be a new obstacle rather than the same obstacle. In the same manner as in the above embodiment, the laser point cloud of the obstacle in the Nth frame is completed and matched against the laser point cloud of the obstacle in the (N+1)th frame; after the matching succeeds, the laser point cloud of the obstacle in the (N+1)th frame is completed, obstacle recognition is performed according to the completed point cloud, and it is determined whether the obstacle affects the driving of the unmanned vehicle. Here, the laser point cloud of the obstacle in the Nth frame is the point cloud obtained after completing the obstacle's laser point cloud in the Nth frame.
With the technical solutions provided by the above embodiments, obstacles that have partially entered the LiDAR blind zone can be recognized, avoiding the collision risk caused by misestimating their length and distance and effectively improving the driving safety of unmanned vehicles.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed method and device may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is merely a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
FIG. 3 shows a block diagram of an exemplary computer system/server 012 suitable for implementing embodiments of the present invention. The computer system/server 012 shown in FIG. 3 is merely an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present invention.
As shown in FIG. 3, the computer system/server 012 takes the form of a general-purpose computing device. The components of the computer system/server 012 may include, but are not limited to: one or more processors or processing units 016, a system memory 028, and a bus 018 connecting the different system components (including the system memory 028 and the processing unit 016).
The bus 018 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer system/server 012 typically includes a variety of computer-system-readable media. These media may be any available media accessible by the computer system/server 012, including volatile and non-volatile media, and removable and non-removable media.
The system memory 028 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 030 and/or a cache memory 032. The computer system/server 012 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, a storage system 034 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in FIG. 3, commonly referred to as a "hard disk drive"). Although not shown in FIG. 3, a disk drive for reading from and writing to a removable non-volatile magnetic disk (for example, a "floppy disk") and an optical disk drive for reading from and writing to a removable non-volatile optical disk (for example, a CD-ROM, a DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 018 through one or more data-media interfaces. The memory 028 may include at least one program product having a set (for example, at least one) of program modules configured to perform the functions of the embodiments of the present invention.
A program/utility 040 having a set (at least one) of program modules 042 may be stored, for example, in the memory 028. Such program modules 042 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 042 generally carry out the functions and/or methods of the embodiments described in the present invention.
The computer system/server 012 may also communicate with one or more external devices 014 (e.g., a keyboard, a pointing device, a display 024, etc.). In the present invention, the computer system/server 012 communicates with an external radar device, and may also communicate with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (e.g., a network card, a modem, etc.) that enables the computer system/server 012 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 022. Furthermore, the computer system/server 012 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 020. As shown in FIG. 3, the network adapter 020 communicates with the other modules of the computer system/server 012 through the bus 018. It should be understood that, although not shown in FIG. 3, other hardware and/or software modules may be used in conjunction with the computer system/server 012, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
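By way of a non-limiting illustration only: many LiDAR units stream their scans as fixed-size UDP packets over the kind of network interface described above. The minimal Python sketch below shows how such raw packets could be collected through a network adapter; the port number, packet size, and function name are assumptions made for this example and are not taken from the patent.

```python
# Hypothetical sketch only: the port and packet size are assumptions,
# not values specified by this patent.
import socket

LIDAR_PORT = 2368      # assumed UDP port, purely illustrative
PACKET_SIZE = 1206     # assumed packet length, purely illustrative

def receive_packets(num_packets=10):
    """Collect a few raw packets from an external LiDAR on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", LIDAR_PORT))
    packets = []
    try:
        for _ in range(num_packets):
            data, _addr = sock.recvfrom(PACKET_SIZE)
            packets.append(data)
    finally:
        sock.close()
    return packets

if __name__ == "__main__":
    print("received", len(receive_packets(5)), "packets")
```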
The processing unit 016 performs the functions and/or methods of the embodiments described in the present invention by running the programs stored in the system memory 028.
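Purely as an illustrative sketch of how a stored program module might be driven by the processing unit 016, the snippet below iterates over consecutive frames and calls a per-frame recognition routine. The names frame_source and recognize_obstacles, and the point representation, are hypothetical placeholders and not part of the claimed method.

```python
# Hypothetical sketch of a program module executed from system memory;
# all names and data shapes here are illustrative assumptions.
from typing import Iterable, List, Tuple

Point = Tuple[float, float, float]  # assumed (x, y, z) representation in metres

def recognize_obstacles(frame: List[Point]) -> List[str]:
    """Placeholder per-frame recognition step; a real module would be one of
    the program modules 042 loaded from memory 028."""
    return ["obstacle"] if frame else []

def run(frame_source: Iterable[List[Point]]) -> None:
    # Iterate over consecutive frames delivered by the sensor interface
    for i, frame in enumerate(frame_source):
        labels = recognize_obstacles(frame)
        print(f"frame {i}: {len(labels)} obstacle(s)")

if __name__ == "__main__":
    # Two tiny synthetic frames stand in for real scans
    run([[(1.0, 0.5, 0.2)], []])
```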
The above computer program may be provided in a computer storage medium; that is, the computer storage medium is encoded with a computer program which, when executed by one or more computers, causes the one or more computers to perform the method flows and/or apparatus operations shown in the above embodiments of the present invention.
With the development of time and technology, the meaning of "medium" has become increasingly broad, and the distribution of computer programs is no longer limited to tangible media; programs may also be downloaded directly from a network, among other channels. Any combination of one or more computer-readable media may be used. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out the operations of the present invention may be written in one or more programming languages, or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
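As a hedged illustration of the last case (code running partly on the user's computer and partly on a remote computer), the sketch below sends one serialized frame to a remote service over the network and prints the reply, using only the Python standard library. The endpoint address and payload format are assumptions made for the example, not features of this patent.

```python
# Hypothetical client-side sketch: the URL and JSON payload layout are
# illustrative assumptions only.
import json
import urllib.request

REMOTE_URL = "http://192.0.2.10:8000/recognize"   # placeholder address

def recognize_remotely(frame):
    """POST a frame (list of [x, y, z] points) and return the server's answer."""
    payload = json.dumps({"points": frame}).encode("utf-8")
    req = urllib.request.Request(
        REMOTE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(recognize_remotely([[1.0, 0.5, 0.2]]))
```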
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (12)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811533572.0A CN109633688B (en) | 2018-12-14 | 2018-12-14 | LiDAR obstacle recognition method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811533572.0A CN109633688B (en) | 2018-12-14 | 2018-12-14 | LiDAR obstacle recognition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109633688A CN109633688A (en) | 2019-04-16 |
CN109633688B true CN109633688B (en) | 2019-12-24 |
Family
ID=66073985
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811533572.0A Active CN109633688B (en) | 2018-12-14 | 2018-12-14 | LiDAR obstacle recognition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109633688B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110111374B (en) * | 2019-04-29 | 2020-11-17 | 上海电机学院 | Laser point cloud matching method based on grouped stepped threshold judgment |
CN110281838B (en) * | 2019-06-28 | 2022-11-18 | 上海理工大学 | An automatic conversion method of automobile headlights based on two-dimensional laser radar |
CN110262504B (en) * | 2019-07-02 | 2021-06-01 | 吉林大学 | Multi-laser radar coupling system with adjustable structure and control method thereof |
CN110428661A (en) * | 2019-08-12 | 2019-11-08 | 深圳成谷科技有限公司 | Method, apparatus and device for protecting pedestrians crossing zebra crossings |
CN112528711B (en) * | 2019-09-19 | 2024-06-25 | 阿波罗智能技术(北京)有限公司 | Method and device for processing information |
CN112630799B (en) * | 2019-09-24 | 2022-11-29 | 阿波罗智能技术(北京)有限公司 | Method and apparatus for outputting information |
CN112558035B (en) * | 2019-09-24 | 2023-06-09 | 北京百度网讯科技有限公司 | Method and device for estimating the ground |
CN113424079A (en) * | 2019-12-30 | 2021-09-21 | 深圳元戎启行科技有限公司 | Obstacle detection method, obstacle detection device, computer device, and storage medium |
CN111326005B (en) * | 2020-03-04 | 2021-12-28 | 北京百度网讯科技有限公司 | Image processing method, apparatus, computer device and medium for automatic driving |
CN111798700B (en) * | 2020-06-30 | 2022-02-25 | 北京行易道科技有限公司 | Blind area monitoring alarm method and device |
CN113859228B (en) * | 2020-06-30 | 2023-07-25 | 上海商汤智能科技有限公司 | Vehicle control method and device, electronic equipment and storage medium |
CN112330702A (en) * | 2020-11-02 | 2021-02-05 | 蘑菇车联信息科技有限公司 | Point cloud completion method and device, electronic equipment and storage medium |
CN112731449B (en) * | 2020-12-23 | 2023-04-14 | 深圳砺剑天眼科技有限公司 | A laser radar obstacle recognition method and system |
CN116184992A (en) * | 2021-11-29 | 2023-05-30 | 上海商汤临港智能科技有限公司 | Vehicle control method, device, electronic equipment and storage medium |
CN114715024A (en) * | 2022-04-15 | 2022-07-08 | 东风柳州汽车有限公司 | Automobile acousto-optic reminding method, automobile acousto-optic reminding equipment, storage medium and automobile acousto-optic reminding device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4753765B2 (en) * | 2006-03-30 | 2011-08-24 | ダイハツ工業株式会社 | Obstacle recognition method |
CA2635155C (en) * | 2007-06-18 | 2015-11-24 | Institut National D'optique | Method for detecting objects with visible light |
CN104899855A (en) * | 2014-03-06 | 2015-09-09 | 株式会社日立制作所 | Three-dimensional obstacle detection method and apparatus |
CN106774306B (en) * | 2016-11-30 | 2018-09-11 | 百度在线网络技术(北京)有限公司 | Startup detection method, device and system applied to automatic driving vehicle |
CN108318895B (en) * | 2017-12-19 | 2020-02-07 | 深圳市海梁科技有限公司 | Obstacle identification method and device for unmanned vehicle and terminal equipment |
2018
- 2018-12-14 CN CN201811533572.0A patent/CN109633688B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN109633688A (en) | 2019-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109633688B (en) | LiDAR obstacle recognition method and device | |
US11042762B2 (en) | Sensor calibration method and device, computer device, medium, and vehicle | |
WO2020147486A1 (en) | Vehicle control method, apparatus, device, and computer storage medium | |
CN109116374B (en) | Method, device and equipment for determining distance of obstacle and storage medium | |
WO2020147485A1 (en) | Information processing method, system and equipment, and computer storage medium | |
US11080539B2 (en) | Traffic light state recognizing method and apparatus, computer device and readable medium | |
CN106951847B (en) | Obstacle detection method, apparatus, device and storage medium | |
US11599120B2 (en) | Obstacle detecting method, apparatus, device and computer storage medium | |
JP2022507995A (en) | Obstacle avoidance methods and equipment for unmanned vehicles | |
CN112650300B (en) | Unmanned aerial vehicle obstacle avoidance method and device | |
CN109840454B (en) | Target positioning method, device, storage medium and equipment | |
CN112613424A (en) | Rail obstacle detection method, rail obstacle detection device, electronic apparatus, and storage medium | |
CN114662600B (en) | Lane line detection method, device and storage medium | |
CN114312836A (en) | Method, device, device, and storage medium for autonomous vehicle yielding to pedestrians | |
JP2025522178A (en) | Method, device, vehicle and storage medium for determining angle of course for oblique parking space | |
CN115339453B (en) | Method, device, device and computer medium for generating vehicle lane-changing decision information | |
WO2023025007A1 (en) | Vehicle avoidance method and apparatus, vehicle-mounted device, and storage medium | |
US9301722B1 (en) | Guiding computational perception through a shared auditory space | |
CN114852068B (en) | Pedestrian collision avoidance method, device, equipment and storage medium | |
CN115019511A (en) | Method and device for identifying illegal lane changes of motor vehicles based on autonomous vehicles | |
CN115447606A (en) | Automatic driving vehicle control method and device based on blind area recognition | |
CN114842431A (en) | A method, device, device and storage medium for identifying road guardrails | |
CN118535459A (en) | Road testing method and system for autonomous driving | |
CN110497906A (en) | Vehicle control method, device, device and medium | |
CN117636307A (en) | Object detection method and device based on semantic information and automatic driving vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2021-10-19
Address after: 105/F, Building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085
Patentee after: Apollo Intelligent Technology (Beijing) Co., Ltd.
Address before: 2/F, Baidu Building, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085
Patentee before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co., Ltd.