CN115077537A - High-precision map perception container design method and device, storage medium and terminal - Google Patents
- Publication number
- CN115077537A (application CN202110262648.6A)
- Authority
- CN
- China
- Prior art keywords: information, perception, map, vehicle, sensorized
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
Description
Technical Field
The present invention relates to the field of autonomous-vehicle perception, and in particular to a high-precision map perception container design method and apparatus, a storage medium, and a terminal for multi-vehicle joint perception.
Background Art
Single-vehicle environment perception suffers from bottlenecks such as blind zones that are difficult to eliminate and limited perception accuracy. V2X (Vehicle to Everything) communication enables vehicle-to-vehicle, vehicle-to-pedestrian, vehicle-to-roadside, and vehicle-to-cloud links, allowing information to be shared and moving autonomous driving beyond the era of stand-alone single-vehicle intelligence.
In recent years, with the development of V2X technology, the range of information available to a single vehicle has grown greatly. Fusing the perception information of other vehicles on the network can effectively eliminate occlusions and beyond-field-of-view blind zones and improve the perception capability of the single vehicle; in addition, prior information about the road environment can be obtained from an electronic map, which provides a technical basis for further improving single-vehicle perception by fusing external perception resources.
However, existing V2X-based perception technology for autonomous vehicles mostly uses the local coordinate system of the host vehicle as the information fusion reference. It depends on the degree of overlap between the fields of view of multiple vehicles and fails to form an effective constraint on vehicle pose; blind zones therefore remain in single-vehicle perception and are difficult to eliminate using the field-of-view data of other connected vehicles.
In the prior art, multi-vehicle joint perception based on a Global Navigation Satellite System (GNSS) reference has been studied. In complex scenes, however, and especially in small-scale joint perception scenarios, the GNSS coordinate frame is unstable and often drifts as a whole along some direction, degrading the accuracy of networked fusion perception.
A high-precision map perception container design method for multi-vehicle joint perception is therefore urgently needed: one that provides a unified reference for the information fusion of multi-vehicle joint perception, a reference that does not drift as the GNSS signal degrades and that gives the joint perception result a stable spatial datum, effectively eliminating perception blind zones and improving perception accuracy.
Summary of the Invention
The technical problem solved by the present invention is to provide a high-precision map perception container design method and apparatus, a storage medium, and a terminal for multi-vehicle joint perception that can effectively eliminate perception blind zones and improve perception accuracy.
To solve the above technical problem, an embodiment of the present invention provides a high-precision map perception container design method for multi-vehicle joint perception, comprising the following steps: acquiring perception-target indication information, the perception-target indication information being used to indicate the environment state information that the autonomous vehicle needs to perceive; acquiring a high-precision map and establishing a map perception container that performs feature extraction on at least some perception targets in the map to obtain sensorized information; and extracting from the high-precision map data the non-sensorized information other than the sensorized information, superimposing it with ordinary-sensor perception information and map-sensor perception information, performing fusion perception under a unified map spatio-temporal reference, and solving a state-estimation problem to obtain the environment perception result of the autonomous vehicle; wherein the high-precision map is a map whose positioning error is within a preset error range.
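The three claimed steps can be sketched in code. The following is a minimal, illustrative sketch only: the function name, dictionary layout, and field names are our assumptions, not part of the claim.

```python
def map_container_pipeline(indication, hd_map, sensor_obs):
    # Step 1: the indication names the kinds of targets to perceive
    targets = [t for t in hd_map["targets"] if t["kind"] in indication]
    # Step 2: "sensorize" them -- emit position plus a covariance derived
    # from the map accuracy, in the same form as an ordinary sensor reading
    var = hd_map["accuracy_m"] ** 2
    sensorized = [{"pos": t["pos"], "cov": var} for t in targets]
    # Step 3: fuse sensorized map data with ordinary sensor data, then
    # overlay the remaining (non-sensorized) map information directly
    fused = sensorized + list(sensor_obs)
    return {"fused": fused, "overlay": hd_map["non_sensorized"]}
```

The sketch only shows the data flow; the actual fusion in step 3 is the state-estimation problem described below.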
Optionally, the at least some perception targets include connected vehicles and obstacles other than the connected vehicles; the sensorized information includes connected-vehicle detection information obtained by detecting the connected vehicles and non-connected-vehicle detection information obtained by detecting the obstacles other than the connected vehicles.

Optionally, the non-connected-vehicle detection information includes one or more of the following: dynamic-obstacle information obtained by detecting dynamic obstacles other than the connected vehicles; and map detection information obtained by detecting static obstacles on the map.

Optionally, using a map sensor to perform feature extraction on the at least some perception targets in the map to obtain the sensorized information includes: determining, according to the perception-target indication information, the position information of the perception targets to be perceived; and computing, according to the accuracy of the map, the covariance of the position information of the at least some perception targets as the sensorized information.

Optionally, extracting the non-sensorized information other than the sensorized information from the high-precision map data, superimposing it with the ordinary-sensor perception information and the map-sensor perception information, performing fusion perception under the unified map spatio-temporal reference, and solving the state-estimation problem to obtain the environment perception result of the autonomous vehicle includes: acquiring sensor information in which one or more sensors detect the at least some perception targets; fusing the sensor information of the at least some perception targets with the sensorized information to obtain fused information for each of the at least some perception targets; determining, according to the fused information, an observation set for each of the at least some perception targets; and determining, with a maximum-likelihood estimation algorithm and according to the observation set of each perception target, a state-quantity estimate for each perception target.

Optionally, the above further includes: superimposing the state-quantity estimate of each of the at least some perception targets with the non-sensorized information to obtain the perception result of the autonomous vehicle.
Optionally, the following formula is used to determine, according to the observation set of each of the at least some perception targets, the state-quantity estimate of each perception target:

X̂_D = arg max_{X_D} P(Z_p | X_D)

where X̂_D denotes the estimate, in the maximum-likelihood sense, of the state parameters of the driving environment space; X_D denotes the set of state quantities within a certain time depth of the driving environment space; Z_p denotes the set of measurements in the asynchronous, heterogeneous perception space available to the intelligent vehicle; and P(Z_p | X_D) is the conditional probability of the observation set Z_p given the state parameters X_D.
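For a scalar state component observed with independent Gaussian noise, the arg max above reduces to the inverse-variance weighted mean of the observations. The Gaussian noise model is our assumption for illustration; the claim leaves the likelihood open.

```python
def mle_state_estimate(observations):
    """observations: list of (z, var) pairs for one target's scalar state.
    Maximizing P(Z_p | X_D) under independent Gaussian noise gives the
    inverse-variance weighted mean; the fused variance falls out as well."""
    weights = [1.0 / var for _, var in observations]
    x_hat = sum(z / var for z, var in observations) / sum(weights)
    var_hat = 1.0 / sum(weights)
    return x_hat, var_hat
```

Note how a precise map observation (small variance) dominates a noisy on-board one, which is exactly why the map-sensor signal carries a covariance.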
To solve the above technical problem, an embodiment of the present invention provides a high-precision map perception container design apparatus for multi-vehicle joint perception, including: an information acquisition module configured to acquire perception-target indication information, the perception-target indication information being used to indicate the environment state information that the autonomous vehicle needs to perceive; a sensorized-information determination module configured to acquire a high-precision map and establish a map perception container that performs feature extraction on at least some perception targets in the map to obtain sensorized information; and a perception-result solving module configured to extract from the high-precision map data the non-sensorized information other than the sensorized information, superimpose it with ordinary-sensor perception information and map-sensor perception information, perform fusion perception under a unified map spatio-temporal reference, and solve a state-estimation problem to obtain the environment perception result of the autonomous vehicle; wherein the high-precision map is a map whose positioning error is within a preset error range.

To solve the above technical problem, an embodiment of the present invention provides a storage medium on which a computer program is stored, the computer program, when run by a processor, executing the steps of the above high-precision map perception container design method for multi-vehicle joint perception.

To solve the above technical problem, an embodiment of the present invention provides a terminal including a memory and a processor, the memory storing a computer program that can run on the processor, wherein the processor, when running the computer program, executes the steps of the above high-precision map perception container design method for multi-vehicle joint perception.
Compared with the prior art, the technical solutions of the embodiments of the present invention have the following beneficial effects:

In the embodiments of the present invention, perception-target indication information is acquired, the perception-target indication information indicating the environment state information that the autonomous vehicle needs to perceive; a high-precision map is acquired, and a map perception container is established to perform feature extraction on at least some perception targets in the map to obtain sensorized information, so that signals in the same format as those of ordinary sensors are generated from the raw map data. In addition, because some information in the high-precision map data has a confidence far exceeding that of on-board sensors, it is superimposed directly onto the perception result. Finally, the multi-vehicle joint perception information (including ordinary-sensor perception information, map-sensor perception information, and the other non-sensorized data of the high-precision map) is fused under a unified map spatio-temporal reference to determine the perception result of the autonomous vehicle, the positioning error of the map being within a preset error range. The present invention can effectively eliminate perception blind zones and improve perception accuracy.

Further, sensor information in which one or more sensors detect the at least some perception targets is acquired, the fused information and observation set of each of the at least some perception targets are obtained from it, and the state-quantity estimate of each perception target is then determined; the perception result can thus be determined on the basis of the information fusion of the map sensor and the other sensors, further improving perception accuracy.

Further, non-sensorized information other than the sensorized information can also be extracted from the map, and the state-quantity estimate of each of the at least some perception targets is superimposed with this non-sensorized information to obtain the perception result of the autonomous vehicle, so that, on top of the detections of the various sensors, other information of the high-precision map can be added to further improve the overall perception effect.
Brief Description of the Drawings
Fig. 1 is a flowchart of a high-precision map perception container design method for multi-vehicle joint perception in an embodiment of the present invention;

Fig. 2 is a flowchart of a specific implementation of step S13 in Fig. 1;

Fig. 3 is a schematic structural diagram of a high-precision map perception container design apparatus for multi-vehicle joint perception in an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of a perception system based on the high-precision map perception container design method for multi-vehicle joint perception in an embodiment of the present invention.
Detailed Description of Embodiments
As described above, in the prior art, fusing the perception information of other vehicles on the network can effectively eliminate occlusions and beyond-field-of-view blind zones and improve single-vehicle perception. However, existing V2X-based perception technology for autonomous vehicles mostly uses the local coordinate system of the host vehicle as the information fusion reference and depends on the degree of overlap between the fields of view of multiple vehicles, failing to form an effective constraint on vehicle pose; as a result, the joint perception accuracy achievable with production-grade vehicle sensors cannot meet the requirements of intelligent connected vehicles, and the consistency of the fusion results across multiple sensors is insufficient.

The inventors have found through research that, in existing multi-vehicle joint perception based on a GNSS reference, the GNSS coordinate frame is unstable and often drifts as a whole along some direction, degrading the accuracy of networked fusion perception. High-precision map data, by contrast, can provide vehicles with high-accuracy absolute position constraints and, at the same time, contains precise road attributes, topology, and other information that sensors find difficult to perceive.

In the embodiments of the present invention, perception-target indication information is acquired, the perception-target indication information indicating the environment state information that the autonomous vehicle needs to perceive; a high-precision map is acquired, and a map perception container is established to perform feature extraction on at least some perception targets in the map to obtain sensorized information, so that signals in the same format as those of ordinary sensors are generated from the raw map data. In addition, because some information in the high-precision map data has a confidence far exceeding that of on-board sensors, it is superimposed directly onto the perception result. Finally, the multi-vehicle joint perception information (including ordinary-sensor perception information, map-sensor perception information, and the other non-sensorized data of the high-precision map) is fused under a unified map spatio-temporal reference to determine the perception result of the autonomous vehicle, the positioning error of the map being within a preset error range. The present invention can effectively eliminate perception blind zones and improve perception accuracy.

In order to make the above objects, features, and beneficial effects of the present invention more clearly understood, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, Fig. 1 is a flowchart of a high-precision map perception container design method for multi-vehicle joint perception in an embodiment of the present invention. The method may include steps S11 to S13:

Step S11: acquiring perception-target indication information, the perception-target indication information being used to indicate the environment state information that the autonomous vehicle needs to perceive;

Step S12: acquiring a high-precision map and establishing a map perception container that performs feature extraction on at least some perception targets in the map to obtain sensorized information;

Step S13: extracting from the high-precision map data the non-sensorized information other than the sensorized information, superimposing it with ordinary-sensor perception information and map-sensor perception information, performing fusion perception under a unified map spatio-temporal reference, and solving a state-estimation problem to obtain the environment perception result of the autonomous vehicle.

The high-precision map is a map whose positioning error is within a preset error range.
In a specific implementation of step S11, the perception-target indication information may be determined in a preset manner, that is, a preset perception requirement is determined; alternatively it may be entered by a user, so that the user's perception requirement can be determined.

The perception-target indication information indicates the environment state information that the autonomous vehicle needs to perceive. The environment state information may include the spatial range that the autonomous vehicle needs to perceive, and may also specify one or more types of perception targets that the autonomous vehicle needs to perceive, for example buildings, vehicles, and people.

In a specific implementation of step S12, a high-precision map may be used.

Specifically, the positioning error of the map is within a preset error range. It can be understood that the smaller the error value in the preset error range, the higher the accuracy required of the map.

As a non-limiting example, the upper limit of the error value in the preset error range may be selected from the range of 10 cm to 50 cm. For example, the upper limit may be set to 20 cm, that is, the positioning error of the map is within 20 cm.
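The accuracy gate just described can be written as a one-line predicate. The function name and default are ours, taken from the 20 cm example above.

```python
def is_high_precision(positioning_error_m, max_error_m=0.20):
    """A map qualifies as high-precision here when its positioning error
    stays within the preset bound (0.20 m in the example above)."""
    return positioning_error_m <= max_error_m
```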
In a specific implementation, a map perception container may also be established.

Specifically, the map perception container is a theoretical model, built on the high-precision map, for solving the joint perception problem of intelligent vehicles. Based on the information of different on-board sensors in the V2X environment and on the high-precision map data, it constructs a driving environment space under the unified reference of the high-precision map and performs state estimation from multi-source asynchronous redundant information; the non-sensorized information of the high-precision map sensor is then further superimposed, finally completing the estimation of the state quantities in the driving environment space.

Further, the at least some perception targets may include connected vehicles and obstacles other than the connected vehicles; the sensorized information may include connected-vehicle detection information obtained by detecting the connected vehicles and non-connected-vehicle detection information obtained by detecting the other obstacles.

Still further, the non-connected-vehicle detection information may include one or more of the following: dynamic-obstacle information obtained by detecting dynamic obstacles other than the connected vehicles; and map detection information obtained by detecting static obstacles on the map.

A connected vehicle may be a vehicle located in the same V2X network as the autonomous vehicle; a dynamic obstacle may be an obstacle capable of motion, such as a truck, animal, or person not connected to the network; a static obstacle may be an obstacle that cannot move, such as a building or a traffic light.
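The three target classes just defined can be encoded as a small taxonomy. The enum values and the "connected"/"movable" flags below are our hypothetical encoding, not part of the disclosure.

```python
from enum import Enum

class TargetKind(Enum):
    CONNECTED_VEHICLE = "connected_vehicle"  # in the same V2X network as the ego vehicle
    DYNAMIC_OBSTACLE = "dynamic_obstacle"    # movable but not connected (truck, animal, person)
    STATIC_OBSTACLE = "static_obstacle"      # immovable (building, traffic light)

def classify_target(target):
    """Map a target record onto the three classes defined above."""
    if target.get("connected"):
        return TargetKind.CONNECTED_VEHICLE
    if target.get("movable"):
        return TargetKind.DYNAMIC_OBSTACLE
    return TargetKind.STATIC_OBSTACLE
```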
Specifically, in the joint perception system, the raw map data can be selectively loaded according to the perception requirements and the vehicle state, and the map data is given an accuracy description, forming an output consistent in form with common sensor data; this process is called the sensorization of map data.

Specifically, the prior information in the map about other connected vehicles is treated as observation data of the connected vehicles. The observation that the map data about connected vehicle i forms of that vehicle's state is denoted z_{M-V}^{i,t} and called the map-connected-vehicle observation; T_{M-V}^{i} is the time set corresponding to the observation. The connected-vehicle observation set of the high-precision map sensor can be defined as:

Z_{M-V} = { z_{M-V}^{i,t} | t ∈ T_{M-V}^{i} }, taken over all connected vehicles i

The connected-vehicle state represents the state information of the connected vehicle that the autonomous vehicle needs to perceive, for example the pose of the connected vehicle (a 6-degree-of-freedom model comprising the position and attitude of the origin of the connected-vehicle coordinate frame in the geographic projection frame), its size, and so on.

The prior information in the map about dynamic obstacles that are not connected vehicles is treated as observation data of the dynamic obstacles. The observation that the map data about dynamic obstacle o forms of that obstacle's state is denoted z_{M-o}^{o,t} and called the map-dynamic-obstacle observation; T_{M-o}^{o} is the time set corresponding to the observation. The dynamic-obstacle observation set of the high-precision map sensor is then defined as:

Z_{M-o} = { z_{M-o}^{o,t} | t ∈ T_{M-o}^{o} }, taken over all dynamic obstacles o

The dynamic-obstacle state represents the state information of the dynamic obstacle that the autonomous vehicle needs to perceive, for example the position of the dynamic obstacle in the geographic projection frame, its size, its category, and so on.

The prior information in the map about localization features is treated as observation data of static obstacles. The observation that the map data about static obstacle f forms of that feature's state is denoted z_{M-F}^{f} and called the map-static-obstacle observation. The static-obstacle observation set of the high-precision map sensor is defined as:

Z_{M-F} = { z_{M-F}^{f} }, taken over all static obstacles f

The static-obstacle state represents the state information of the static obstacle that the autonomous vehicle needs to perceive, for example the position, size, and category of the static obstacle.

In summary, the observation of the high-precision map sensor is modeled as the union of the map-static-obstacle, map-connected-vehicle, and map-dynamic-obstacle observations:

Z_M = { Z_{M-F}, Z_{M-V}, Z_{M-o} }
Still further, the step of using a map sensor to perform feature extraction on the at least some perception targets in the map to obtain the sensorized information may include: determining, according to the perception-target indication information, the position information of the perception targets to be perceived; and computing, according to the accuracy of the map, the covariance of the position information of the at least some perception targets as the sensorized information.

Specifically, the perception targets to be perceived are determined according to the perception-target indication information; these targets may also be described by a set of road logical breakpoints. Specifically, the set of road logical breakpoints may be derived from the perception demand range input to the map sensor: according to the positioning information and the dynamic and static obstacles, the corresponding positions on the high-precision map are determined (reference points that divide the road structure into its smallest units, such as road-network positions, road segments in the network, and intersections).

Further, the covariance represents the sensor noise, that is, the uncertainty of perception; it may be determined by any suitable conventional calculation, which the embodiments of the present application do not restrict.
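Since the text leaves the covariance calculation open, one simple conventional choice is to treat the map's stated accuracy as a per-axis 1-sigma bound and emit an isotropic covariance. This Gaussian noise model is our assumption, not the patent's.

```python
def position_covariance(map_accuracy_m):
    """Model the map's stated accuracy as a per-axis 1-sigma bound and
    return an isotropic 2x2 position covariance (plain Gaussian model)."""
    s2 = map_accuracy_m ** 2
    return [[s2, 0.0], [0.0, s2]]
```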
In other words, the map data covers all information in the global scope of the map, whereas the map sensor generates sensor signals relevant to the driving task according to the vehicle's sensing demand range; the original map data therefore needs to be organized effectively.
Starting from the reliability of the map data, and according to the sensing demand, the accurate and reliable layered vector high-precision map data is divided into non-sensorized information and sensorized information. The accurate and reliable real-time map data (road-level and lane-level road network information, dynamic traffic flow information, traffic event information and decision assistance information) is output directly as non-sensorized information for use in state estimation in the driving environment space.
The remaining data is sensorized and output in the form of map sensor signals, to be fused with the other sensor data in the multi-vehicle joint perception system. For this part of the data, the set of road logical breakpoints relevant to the driving task is determined from the sensing demand range input to the map sensor. When generating the map sensor signal, all relevant dynamic and static target positions are first indexed in the localization-feature layer and the dynamic-obstacle layer according to the set of logical breakpoints, and the covariance of the data is determined according to the accuracy of the map data, forming the map sensor output signal.
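The partition described above can be sketched as follows (layer names and the dictionary layout are hypothetical; the patent only fixes which kinds of data pass through as non-sensorized and which are sensorized and indexed by road logical breakpoints):

```python
# Hypothetical layer names: road/traffic/decision layers pass through as
# non-sensorized information; localization-feature and dynamic-obstacle
# layers are sensorized.
NON_SENSORIZED = {"road_network", "lane_network", "traffic_flow",
                  "traffic_events", "decision_assist"}
SENSORIZED = {"localization_features", "dynamic_obstacles"}

def split_map_data(map_layers, breakpoints):
    """Return (non_sensorized, sensorized) views of the map data, keeping
    only sensorized targets indexed by the road logical breakpoints that
    are relevant to the current driving task."""
    non_sensorized = {name: layer for name, layer in map_layers.items()
                      if name in NON_SENSORIZED}
    sensorized = {name: [t for t in layer if t["breakpoint"] in breakpoints]
                  for name, layer in map_layers.items() if name in SENSORIZED}
    return non_sensorized, sensorized

layers = {
    "road_network": ["segment-1", "segment-2"],
    "localization_features": [{"id": "sign-7", "breakpoint": "bp-3"},
                              {"id": "pole-9", "breakpoint": "bp-8"}],
}
non_sens, sens = split_map_data(layers, breakpoints={"bp-3"})
```

Only the targets attached to task-relevant breakpoints survive into the map sensor signal, mirroring the indexing step described above.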
In the specific implementation of step S13, the non-sensorized information other than the above sensorized information is extracted from the high-precision map data and superimposed with the ordinary sensor perception information and the map sensor perception information; fusion perception is performed under a unified map space-time reference, and the environment perception result of the autonomous vehicle is obtained by solving a state estimation problem.
Referring to FIG. 2, FIG. 2 is a flowchart of a specific implementation of step S13 in FIG. 1. The step of determining the perception result of the autonomous vehicle according to the sensorized information may include steps S21 to S24, and may further include steps S21 to S26; each step is described below.
In step S21, sensor information obtained by one or more sensors detecting the at least some sensing targets is acquired.
The sensors may include ego-vehicle sensors and other-vehicle sensors. That is, the sensor information may include self-detection information obtained by the ego-vehicle sensors detecting the autonomous vehicle itself, and external detection information obtained by the sensors of each other vehicle detecting objects other than their own vehicle (including the current autonomous vehicle).
Specifically, the sensor information may include integrated navigation observations of the vehicle's own state, as well as external sensor observations of other targets; the externally observed objects may be other intelligent connected vehicles, dynamic obstacles, or targets contained in, or awaiting update in, the map data.
Further, in the step of detecting the autonomous vehicle itself to obtain the self-detection information, the observation data generated by the integrated positioning system of the i-th connected vehicle on its own vehicle state at time ki can be described accordingly; given the set of instants at which the i-th connected vehicle produces integrated positioning data, the integrated positioning observations in the perception space that directly observe the vehicle's own state are modeled as the set of these observation data over all connected vehicles and all corresponding instants.
The perception space denotes the environmental space within which the sensors are able to detect targets.
Further, the external detection information may include one or more of the following: connected-vehicle detection information obtained by detecting other connected vehicles in the Internet of Vehicles; dynamic obstacle information obtained by detecting dynamic obstacles other than the connected vehicles; map detection information obtained by detecting static obstacles and road information recorded on the map; newly added static obstacle detection information obtained by detecting static obstacles not recorded on the map; and newly added road detection information obtained by detecting road information not recorded on the map.
Specifically, the data of the external-target sensors of other vehicles is acquired through the Internet of Vehicles to compensate for the limitations of single-vehicle perception. In practical scenarios, in addition to observations of dynamic obstacles and static non-map targets, the on-board external-target sensor data also contains observations of other connected vehicles, localization features, road information and so on.
Further, in the step of detecting other connected vehicles in the Internet of Vehicles to obtain the connected-vehicle detection information, the observations of other vehicles by a connected vehicle may be called vehicle-vehicle observations: with j indexing the observed connected vehicle, the observation data of the i-th connected vehicle on connected vehicle j at time ki, together with the corresponding set of observation instants, defines the vehicle-vehicle observation set ZV-V.
In the step of detecting dynamic obstacles other than the connected vehicles to obtain the dynamic obstacle information, it can be assumed that a sensor of connected vehicle i makes an observation of dynamic obstacle o at time ki; the observation data, together with the corresponding set of observation instants, defines the vehicle-obstacle observation set ZV-O.
In the step of detecting the static obstacles and road information on the map to obtain the map detection information, the observations of localization features by a connected vehicle may be called vehicle-feature observations: with f indexing the localization feature, the observation data of the i-th connected vehicle on feature f at time ki, together with the corresponding set of observation instants, defines the vehicle-feature observation set ZV-F.
In the step of detecting static obstacles not recorded on the map to obtain the newly added static obstacle detection information, the observations of static non-map targets may be called vehicle-static-non-map-target observations: with s indexing the static non-map target, the observation data of the i-th connected vehicle on target s at time ki, together with the corresponding set of observation instants, defines the vehicle-static-non-map-target observation set ZV-S.
In the step of detecting road information not recorded on the map to obtain the newly added road detection information, the observations of road information by a connected vehicle are called vehicle-road-information observations: with r indexing the road information, the observation data of the i-th connected vehicle on road information r at time ki, together with the corresponding set of observation instants, defines the vehicle-road-information observation set ZV-R.
Combining the above, the external-target sensor observations in the perception space can be modeled as:
ZV = {ZV-O, ZV-S, ZV-V, ZV-F, ZV-R}
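The five observation classes above can be collected in a simple container, sketched below (the class and method names are illustrative, not from the patent; only the five class labels follow ZV):

```python
from collections import defaultdict

class PerceptionSpace:
    """Container for the asynchronous external-target observations
    ZV = {ZV-O, ZV-S, ZV-V, ZV-F, ZV-R}."""
    CLASSES = ("V-O", "V-S", "V-V", "V-F", "V-R")

    def __init__(self):
        self.Z = {c: defaultdict(list) for c in self.CLASSES}

    def add(self, cls, vehicle_i, target, t, data):
        # Record that connected vehicle `vehicle_i` observed `target`
        # at instant `t`; entries accumulate per (vehicle, target) pair,
        # matching the per-pair time sets in the definitions above.
        self.Z[cls][(vehicle_i, target)].append((t, data))

space = PerceptionSpace()
space.add("V-V", vehicle_i=1, target=2, t=0.1, data=(12.3, 4.5))
space.add("V-O", vehicle_i=1, target="obstacle-5", t=0.1, data=(8.0, 1.2))
```

Grouping by (observing vehicle, observed target) keeps the asynchronous observations separable per source, which the later fusion step relies on.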
In step S22, the sensor information of the at least some sensing targets and the sensorized information are fused to obtain the fusion information of each of the at least some sensing targets.
It should be noted that the fusion of the sensor information of the at least some sensing targets with the sensorized information may be carried out using any appropriate conventional technique, and this embodiment of the present application places no restriction on it.
In step S23, an observation set of each of the at least some sensing targets is determined according to the fusion information.
According to the sensing method in this embodiment of the present application, the perception information of dynamic and static targets, as well as the road traffic information, can be determined according to the sensing demand of the intelligent vehicle. The dynamic targets may include connected vehicles and other dynamic obstacles. A state set XH is constructed for the dynamic targets and can be expressed as:
XH = {XV, XO}
Static targets in a traffic scene, such as traffic signs, lane lines and traffic cones, provide reference information for autonomous vehicle localization. In addition, static targets within the road edges need to be considered by the decision system. The static targets in the environment comprise the localization features already included in the map as well as static non-map targets, i.e., targets for which the map awaits updating. A state set XZ is constructed for the static targets and can be expressed as:
XZ = {XF, XS}
The road information state set comprises road-level road network information, lane-level road network information, dynamic traffic information and decision assistance information. A state set XR is constructed for the road information and can be expressed as:
XR = {XWL, Xw, XB, XP}
In summary, the targets in the driving environment space consist of the dynamic targets, the static targets and the road information, with the mathematical definition given below.
The corresponding perception result of the autonomous vehicle, i.e., the observation set, can be expressed as:
XD = {XH, XZ, XR}
In the specific implementation of step S24, a maximum likelihood estimation algorithm may be used to determine the state estimate of each of the at least some sensing targets according to the observation set of each of the at least some sensing targets.
Specifically, among the various perception inputs of the joint perception system, observations of the same target from different sources are common, and contradictions exist between them. To obtain a consistent perception result, maximum likelihood estimation is performed. Since the joint perception problem requires the state to be described under a unified map space-time reference, it reduces to maximizing the likelihood of all the system's observations with respect to the state quantities. According to maximum likelihood estimation (MLE) theory, the driving-environment-space state estimator maximizes the probability of the observations conditioned on the state quantities.
In this embodiment of the present invention, by acquiring the sensor information of one or more sensors detecting the at least some sensing targets, obtaining the fusion information and the observation set of each of the at least some sensing targets, and then determining the state estimate of each of the at least some sensing targets, the perception result can be determined on the basis of the fusion of map sensor and sensor information, further improving the perception accuracy.
Further, the following formula may be used to determine the state estimate of each of the at least some sensing targets according to the observation set of each of the at least some sensing targets:
X̂D = arg max_XD P(Zp | XD)
where X̂D denotes the estimate of the driving-environment-space state parameters in the maximum-likelihood sense, XD denotes the set of quantities within a certain time depth in the driving environment space, Zp denotes the set of asynchronous, heterogeneous measurements in the perception space available to the intelligent vehicle, and P(Zp | XD) is the conditional probability of the measurement set Zp given the state parameters XD.
Assuming that the likelihood probabilities of different sensors and different instants are mutually independent, the conditional probability can further be written as the product of the likelihood probabilities of all observations over the period.
Assuming the noise follows a Gaussian distribution, the likelihood-maximization problem can be transformed into a least-squares optimization problem of the form X̂D = arg min_XD Σk rkᵀ Ωk⁻¹ rk, in which each residual rk is the difference between an actual observation and the prediction of that observation from the system state, and Ωk is the covariance matrix of the corresponding sensor noise. In the state estimator of the perception container, the weighted sum of all classes of observation residuals is minimized by optimizing the values of the state quantities.
This gives the mathematical method by which the state estimator in the map perception container estimates the state XD in the driving environment space. It should be noted that this formulation adjusts the weight of sensor data of different accuracies in the optimization problem through the inverse of each sensor's noise covariance matrix, thereby organically fusing the data of all on-board sensors and map sensors in the joint perception system into a consistent perception result.
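The inverse-covariance weighting can be illustrated on the simplest case, direct position observations of a single target, where the weighted least-squares optimum has a closed form (a sketch of the weighting idea only, not the patent's full estimator over dynamic states and road information):

```python
import numpy as np

def fuse_observations(observations):
    """Minimize sum_k r_k^T W_k r_k with r_k = z_k - x, where W_k is the
    inverse of the k-th sensor's noise covariance; for this linear case
    the optimum is the inverse-covariance-weighted mean."""
    info = np.zeros((2, 2))
    info_vec = np.zeros(2)
    for z, cov in observations:
        W = np.linalg.inv(cov)              # weight = inverse covariance
        info += W
        info_vec += W @ np.asarray(z, float)
    x_hat = np.linalg.solve(info, info_vec)
    fused_cov = np.linalg.inv(info)
    return x_hat, fused_cov

camera = ([10.0, 5.0], np.eye(2) * 1.0)      # ~1 m standard deviation
map_obs = ([10.4, 5.2], np.eye(2) * 1e-4)    # ~1 cm map "sensor"
x_hat, P = fuse_observations([camera, map_obs])
# The high-accuracy map observation dominates the fused estimate.
```

This is exactly the mechanism described above: the small covariance of the map sensor gives it a large weight, so accurate map data pulls the fused result toward itself while noisier on-board sensors contribute less.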
In this embodiment of the present invention, the maximum likelihood estimation algorithm is used to estimate the perception result of the autonomous vehicle, which can effectively improve the accuracy of the estimate.
In the specific implementation of step S25, the state estimates of each of the at least some sensing targets are superimposed with the non-sensorized information to obtain the perception result of the autonomous vehicle.
Further, the non-sensorized information may include one or more of the following: road information, newly added traffic flow information, traffic event information and decision assistance information.
Specifically, the non-sensorized information may include road-level and lane-level road network information, dynamic traffic flow information, traffic event information, decision assistance information and the like.
It should be noted that, in the step of superimposing the state estimates of each of the at least some sensing targets with the non-sensorized information, the non-sensorized map information (road network, traffic information, etc.) and the sensor information (from both physical sensors and the virtual map sensor) are all expressed uniformly in the global coordinate system of the high-precision map; spatial expression is thus consistent, and the two can be superimposed directly.
In this embodiment of the present invention, the non-sensorized information other than the sensorized information can also be extracted from the map, and the state estimates of each of the at least some sensing targets can then be superimposed with the non-sensorized information to obtain the perception result of the autonomous vehicle; on the basis of the detection by the various sensors, further information from the high-precision map can thereby be added, further improving the overall perception effect.
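Because both inputs are already expressed in the map's global coordinate frame, the superposition step reduces to a direct union, as sketched below (the key names are illustrative assumptions):

```python
def compose_perception_result(state_estimates, non_sensorized):
    # Both inputs share the high-precision map's global coordinate frame,
    # so composing the final perception result is a direct superposition.
    result = dict(non_sensorized)        # road network, traffic, events...
    result["targets"] = state_estimates  # fused target state estimates
    return result

result = compose_perception_result(
    state_estimates={"car-1": (10.4, 5.2)},
    non_sensorized={"road_network": ["segment-1"], "traffic_events": []})
```

No coordinate transformation is needed at this stage, which is the practical payoff of keeping every sensor and map layer in one spatial reference.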
In this embodiment of the present invention, sensing-target indication information is acquired, the sensing-target indication information being used to indicate the environmental state information that the autonomous vehicle needs to perceive; a high-precision map is acquired, and a map perception container is established to perform feature extraction on at least some of the sensing targets in the map to obtain sensorized information, thereby generating, from the raw map data, signals in the same format as ordinary sensors. In addition, considering that some information in the high-precision map data has a confidence far exceeding that of on-board sensors, that information is superimposed directly onto the perception result. Finally, the multi-vehicle joint perception information (including ordinary sensor perception information, map sensor perception information, and other non-sensorized data in the high-precision map) undergoes fusion perception under a unified map space-time reference to determine the perception result of the autonomous vehicle, wherein the positioning error of the map is within a preset error range. The present invention can effectively eliminate perception blind zones and improve perception accuracy.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of an apparatus for designing a high-precision map perception container for multi-vehicle joint perception according to an embodiment of the present invention. The apparatus may include:
an information acquisition module 31, configured to acquire sensing-target indication information, the sensing-target indication information being used to indicate the environmental state information that the autonomous vehicle needs to perceive;
a sensorized-information determination module 32, configured to acquire a high-precision map and establish a map perception container that performs feature extraction on at least some of the sensing targets in the map to obtain sensorized information; and
a perception result solving module 33, configured to extract the non-sensorized information other than the above sensorized information from the high-precision map data, superimpose it with the ordinary sensor perception information and the map sensor perception information, perform fusion perception under a unified map space-time reference, and obtain the environment perception result of the autonomous vehicle by solving a state estimation problem.
The high-precision map is a map whose positioning error is within a preset error range.
For the principle, specific implementation and beneficial effects of the apparatus for designing a high-precision map perception container for multi-vehicle joint perception, reference may be made to the foregoing description of the corresponding design method, which will not be repeated here.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a perception system based on the method for designing a high-precision map perception container for multi-vehicle joint perception according to an embodiment of the present invention.
In FIG. 4, solid-line boxes represent physical modules and dashed-line boxes represent determined information.
First, the sensor information determined by the multiple sensors S1, S2, S3 to S4 is collected.
The map sensor MS determines the map information M and the sensing-target indication information Pr, and outputs the sensorized information ZM.
The driving-environment-space constructor DC then fuses the sensor information of the at least some sensing targets with the sensorized information to obtain the fusion information of each of the at least some sensing targets, and determines the observation set of each of the at least some sensing targets according to the fusion information.
In a specific implementation, DC may also allocate the state quantities in the driving environment space according to the expression results of the multi-source sensors in the map coordinate system.
More specifically, DC associates the data according to the expression results of the multi-source sensors in the geographic projection coordinate system, determines the set of observations that need to be estimated, allocates the state quantities in the driving environment space, and completes the construction of the driving environment space.
The driving-environment-space state estimator ME then determines, based on the maximum likelihood estimation algorithm, the state estimate of each of the at least some sensing targets according to the observation set of each of the at least some sensing targets.
Specifically, ME is the driving-environment-space state estimator, which realizes the estimation of the state quantities in the driving environment space based on multi-source, heterogeneous, asynchronous redundant data.
Further, the map sensor MS extracts from the map the non-sensorized information X0 other than the sensorized information.
The state estimates of each of the at least some sensing targets are superimposed with the non-sensorized information X0 to obtain the perception result of the autonomous vehicle.
For further details of the perception system shown in FIG. 4, reference may be made to the foregoing description, which will not be repeated here.
In an embodiment of the present invention, a storage medium is further provided, on which a computer program is stored; when the computer program is run by a processor, the steps of the above method are performed. The storage medium may be a computer-readable storage medium and may include, for example, non-volatile or non-transitory memory, as well as optical discs, mechanical hard disks, solid-state drives and the like.
Specifically, in this embodiment of the present invention, the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It should also be understood that the memory in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM) or flash memory. The volatile memory may be random access memory (RAM), which is used as an external cache. By way of illustrative but non-limiting example, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM) and direct rambus RAM (DR RAM).
In an embodiment of the present invention, a terminal is further provided, comprising a memory and a processor, the memory storing a computer program runnable on the processor, the processor performing the steps of the above method when running the computer program. The terminal includes, but is not limited to, an automobile, an automobile central control device, a terminal device externally connected to or integrated in the automobile, a mobile phone, a computer, a tablet computer and other terminal devices.
Although the present invention is disclosed above, it is not limited thereto. Any person skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention; the protection scope of the present invention shall therefore be as defined by the claims.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110262648.6A CN115077537A (en) | 2021-03-10 | 2021-03-10 | High-precision map perception container design method and device, storage medium and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115077537A true CN115077537A (en) | 2022-09-20 |
Family
ID=83240699
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110262648.6A Pending CN115077537A (en) | 2021-03-10 | 2021-03-10 | High-precision map perception container design method and device, storage medium and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115077537A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106767853A (en) * | 2016-12-30 | 2017-05-31 | 中国科学院合肥物质科学研究院 | A kind of automatic driving vehicle high-precision locating method based on Multi-information acquisition |
US20180253625A1 (en) * | 2015-09-09 | 2018-09-06 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for processing high-precision map data, storage medium and device |
CN109405824A (en) * | 2018-09-05 | 2019-03-01 | 武汉契友科技股份有限公司 | A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile |
CN109556615A (en) * | 2018-10-10 | 2019-04-02 | 吉林大学 | The driving map generation method of Multi-sensor Fusion cognition based on automatic Pilot |
CN111208839A (en) * | 2020-04-24 | 2020-05-29 | 清华大学 | A fusion method and system of real-time perception information and autonomous driving map |
CN111488812A (en) * | 2020-04-01 | 2020-08-04 | 腾讯科技(深圳)有限公司 | Obstacle position recognition method and device, computer equipment and storage medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116189145A (en) * | 2023-02-15 | 2023-05-30 | 清华大学 | Extraction method, system and readable medium of linear map elements |
CN116189145B (en) * | 2023-02-15 | 2024-06-11 | 清华大学 | Extraction method, system and readable medium of linear map elements |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111561937B (en) | | Sensor fusion for precise positioning |
US11808581B2 | | Lane-level map matching |
CN113223317B | | Method, device and equipment for updating map |
JP6812404B2 | | Methods, devices, computer-readable storage media, and computer programs for fusing point cloud data |
CN108073170B | | Automated collaborative driving control for autonomous vehicles |
WO2020232648A1 | | Lane line detection method, electronic device and storage medium |
JP6595182B2 | | Systems and methods for mapping, locating, and attitude correction |
CN111976718A | | Automatic parking control method and system |
CN111391823A | | Multilayer map making method for automatic parking scenes |
CN110146910A | | Positioning method and device based on GPS and lidar data fusion |
US11961304B2 | | Systems and methods for deriving an agent trajectory based on multiple image sources |
US11961241B2 | | Systems and methods for deriving an agent trajectory based on tracking points within images |
US12085403B2 | | Vehicle localisation |
US11430182B1 | | Correcting or expanding an existing high-definition map |
US11682124B2 | | Systems and methods for transferring map data between different maps |
US12067869B2 | | Systems and methods for generating source-agnostic trajectories |
CN114174137A | | Source lateral offset for ADAS or AD features |
CN114248778B | | Positioning method and positioning device for mobile equipment |
CN115086862B | | Multi-vehicle joint perception information time-space unification method and device, storage medium, and terminal |
El Farnane Abdelhafid et al. | | Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars |
CN115096330A | | Map change detection method and device, computer-readable storage medium, and terminal |
Zhou et al. | | Road-pulse from IMU to enhance HD map matching for intelligent vehicle localization |
CN115077537A | | High-precision map perception container design method and device, storage medium and terminal |
CN113390422B | | Automobile positioning method and device and computer storage medium |
CN119309564A | | Method and device for constructing a map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||