CN116912788A - Attack detection method, device and equipment for automatic driving system and storage medium
- Publication number
- CN116912788A (application CN202310584658.0A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- detection frame
- array
- target
- information data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V2201/07 - Target detection
Abstract
The invention discloses an attack detection method, apparatus, device and storage medium for an autonomous driving system. The method performs target detection separately on the three-dimensional information data acquired by a three-dimensional sensor and the two-dimensional information data acquired by a two-dimensional sensor, obtaining a first three-dimensional detection frame and a first two-dimensional detection frame; performs fused target detection on the three-dimensional and two-dimensional information data to obtain a second three-dimensional detection frame, and applies a coordinate transformation to it to obtain a second two-dimensional detection frame; computes, for each object, the IoU of the first and second two-dimensional detection frames to obtain a first array, and the IoU of the first and second three-dimensional detection frames to obtain a second array; and analyzes the first and second arrays to determine whether a sensor has been attacked. The invention can perform attack detection based on multi-modal data and can localize the attacked sensor.
Description
Technical Field
This application relates to the technical field of sensor detection, and in particular to an attack detection method, apparatus, device and storage medium for an autonomous driving system.
Background
Autonomous driving is a comprehensive technology that draws on many cutting-edge fields, including artificial intelligence, sensing, mapping, and computing. Its functionality depends on the vehicle-mounted positioning sensor system, which mainly consists of lidar, vision cameras, millimeter-wave radar, the Global Positioning System (GPS), and similar equipment. The positioning sensor system provides the planning and decision-making module of the autonomous vehicle with rich positioning data, namely vehicle speed, attitude, position, and other information. The safety of route planning and decision control therefore rests on the safety of the positioning sensor system: if that system behaves abnormally, the sensors will deliver incorrect positioning information, leading to wrong driving control strategies and threatening other vehicles as well as the lives of drivers and pedestrians.
At present, sensor attack detection for autonomous driving is mostly tied to a specific single-sensor target detection algorithm and performed on that basis. This approach is not universal: an attack detection method built around a lidar-centric target detection algorithm cannot be applied to attack detection scenarios that use a camera-centric target detection algorithm, and detection schemes targeting a single sensor type yield attack detection results of limited accuracy.
Summary of the Invention
In view of this, this application provides an attack detection method, apparatus, device and storage medium for an autonomous driving system, to address the lack of universality and the low accuracy of existing attack detection approaches for autonomous driving systems.
To solve the above technical problem, one technical solution adopted by this application is to provide an attack detection method for an autonomous driving system, comprising: acquiring three-dimensional information data and two-dimensional information data of a target area with a three-dimensional sensor and a two-dimensional sensor, respectively; feeding the three-dimensional and two-dimensional information data into a pre-trained three-dimensional detection model and a pre-trained two-dimensional detection model, respectively, to obtain a first three-dimensional detection frame and a first two-dimensional detection frame for each target object in the target area; feeding the three-dimensional and two-dimensional information data into a pre-trained fusion detection model to obtain a second three-dimensional detection frame for each target object, and applying a coordinate-system change to the second three-dimensional detection frame to obtain a second two-dimensional detection frame; computing, for each target object, a first IoU value between the first and second two-dimensional detection frames to obtain a first array, and a second IoU value between the first and second three-dimensional detection frames to obtain a second array; and performing attack detection based on the first and second arrays and localizing the attacked sensor.
As a further improvement of this application, before feeding the three-dimensional and two-dimensional information data into the pre-trained fusion detection model, the method further comprises: performing a chi-square comparison between the two-dimensional information data of the previous moment and that of the current moment to obtain a chi-square comparison value; determining whether the chi-square comparison value exceeds a preset threshold; when it does, performing histogram matching between the current two-dimensional information data and the previous two-dimensional information data to obtain enhanced two-dimensional information data for the current moment, and replacing the current two-dimensional information data with it; and when it does not, keeping the two-dimensional information data of the current moment.
As a further improvement of this application, applying a coordinate-system change to the second three-dimensional detection frame to obtain the second two-dimensional detection frame comprises: obtaining the three-dimensional center point coordinates of the second three-dimensional detection frame; using the center point coordinates to determine the homogeneous coordinates of four points on the second three-dimensional detection frame lying in the same plane as the center point; computing four two-dimensional coordinate points from the pre-acquired camera projection matrix, camera rotation matrix and sensor-to-camera rotation matrix together with the homogeneous coordinates; and constructing the second two-dimensional detection frame from the four two-dimensional coordinate points.
As a further improvement of this application, computing the first IoU value between the first and second two-dimensional detection frames for each target object to obtain the first array comprises: identifying the target first two-dimensional detection frame and the target second two-dimensional detection frame corresponding to each target object; computing a first area of the target first two-dimensional detection frame, a second area of the target second two-dimensional detection frame, and a third area of their overlapping region; computing each target object's first IoU value from the first, second and third areas; and constructing the first array from the first IoU values of all target objects.
As a further improvement of this application, computing the second IoU value between the first and second three-dimensional detection frames for each target object to obtain the second array comprises: identifying the target first three-dimensional detection frame and the target second three-dimensional detection frame corresponding to each target object; computing a first volume of the target first three-dimensional detection frame and a second volume of the target second three-dimensional detection frame; computing the base area of their overlapping region; determining the height of the overlapping region; computing a third volume of the overlapping region from the base area and height; computing each target object's second IoU value from the first, second and third volumes; and constructing the second array from the second IoU values of all target objects.
As a further improvement of this application, performing attack detection based on the first and second arrays and localizing the attacked sensor comprises: determining whether the first array and the second array each contain an outlier; when the first array contains an outlier and the second array does not, concluding that the three-dimensional sensor has been attacked; when the first array contains no outlier and the second array does, concluding that the two-dimensional sensor has been attacked; when both arrays contain outliers, concluding that the three-dimensional sensor and/or the two-dimensional sensor has been attacked; and when neither array contains an outlier, concluding that neither sensor has been attacked.
As a further improvement of this application, the three-dimensional sensor comprises a lidar and the two-dimensional sensor comprises a camera; the three-dimensional detection model is built on the PointPillars algorithm, the two-dimensional detection model on the YOLOv3 algorithm, and the fusion detection model on the AVOD algorithm.
To solve the above technical problem, another technical solution adopted by this application is to provide an attack detection apparatus for an autonomous driving system, comprising: an acquisition module for acquiring three-dimensional and two-dimensional information data of a target area with a three-dimensional sensor and a two-dimensional sensor, respectively; a first detection module for feeding the three-dimensional and two-dimensional information data into a pre-trained three-dimensional detection model and two-dimensional detection model, respectively, to obtain a first three-dimensional detection frame and a first two-dimensional detection frame for each target object in the target area; a second detection module for feeding the three-dimensional and two-dimensional information data into a pre-trained fusion detection model to obtain a second three-dimensional detection frame for each target object and applying a coordinate-system change to it to obtain a second two-dimensional detection frame; a calculation module for computing, for each target object, the first IoU value between the first and second two-dimensional detection frames to obtain a first array, and the second IoU value between the first and second three-dimensional detection frames to obtain a second array; and an analysis module for performing attack detection based on the first and second arrays and localizing the attacked sensor.
To solve the above technical problem, yet another technical solution adopted by this application is to provide a computer device comprising a processor and a memory coupled to the processor, the memory storing program instructions which, when executed by the processor, cause the processor to perform the steps of any of the above attack detection methods for an autonomous driving system.
To solve the above technical problem, yet another technical solution adopted by this application is to provide a storage medium storing program instructions capable of implementing any of the above attack detection methods for an autonomous driving system.
The beneficial effects of this application are as follows. The attack detection method performs target detection separately on the three-dimensional and two-dimensional information data acquired by the three-dimensional and two-dimensional sensors to obtain the first three-dimensional and first two-dimensional detection frames; performs fused target detection on the same data to obtain the second three-dimensional detection frame, which is coordinate-transformed into the second two-dimensional detection frame; computes, for each object, the IoU of the first and second two-dimensional detection frames to obtain the first array and the IoU of the first and second three-dimensional detection frames to obtain the second array; and analyzes both arrays to determine whether a sensor has been attacked. By exploiting the temporal and spatial correlation between the three-dimensional and two-dimensional sensors, the method improves the accuracy of attack detection; and because it relies only on the three-dimensional and two-dimensional sensors that are standard equipment on autonomous vehicles, it is no longer restricted to attack detection built on a target detection algorithm centered on one sensor type, and is thus more universally applicable.
Description of the Drawings
Figure 1 is a schematic flowchart of an attack detection method for an autonomous driving system according to an embodiment of the present invention;
Figure 2 is a schematic diagram of the second three-dimensional detection frame according to an embodiment of the present invention;
Figure 3 is a schematic diagram of the overlapping region of two-dimensional detection frames according to an embodiment of the present invention;
Figure 4 is a schematic diagram of the functional modules of an attack detection apparatus for an autonomous driving system according to an embodiment of the present invention;
Figure 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention;
Figure 6 is a schematic structural diagram of a storage medium according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of this application.
The terms "first", "second" and "third" in this application are used for description only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Features qualified by "first", "second" or "third" may thus explicitly or implicitly include at least one such feature. In the description of this application, "plurality" means at least two, for example two or three, unless otherwise clearly and specifically limited. All directional indications in the embodiments (such as up, down, left, right, front, back, ...) are used only to explain the relative positional relationship and motion of components in a particular posture (as shown in the drawings); if that posture changes, the directional indication changes accordingly. Furthermore, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion: a process, method, system, product or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or other steps or units inherent to the process, method, product or device.
Reference to an "embodiment" herein means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of this application. The appearance of the phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
Figure 1 is a schematic flowchart of an attack detection method for an autonomous driving system according to an embodiment of the present invention. Note that, provided substantially the same result is obtained, the method of the present invention is not limited to the sequence of steps shown in Figure 1. As shown in Figure 1, the attack detection method comprises the following steps.
Step S101: acquire three-dimensional information data and two-dimensional information data of the target area with a three-dimensional sensor and a two-dimensional sensor, respectively.
In this embodiment, the three-dimensional sensor is preferably a lidar; it should be noted that the three-dimensional sensor of the embodiments of the present invention is not limited to lidar, and other sensor devices capable of acquiring three-dimensional information data also fall within the scope of protection of the present invention. The two-dimensional sensor is preferably a camera; likewise, it is not limited to a camera, and other sensor devices capable of acquiring two-dimensional information data also fall within the scope of protection of the present invention.
Specifically, in this embodiment, when a vehicle is driven by the autonomous driving system, the three-dimensional and two-dimensional sensors mounted on the vehicle simultaneously collect the three-dimensional and two-dimensional information data of the target area. Note that during attack detection the detection range is confined to one target area, namely the region where the fields of view of the two sensors overlap. For a lidar and a camera, for example, this overlap is the lidar's field of view (FOV): by projecting the image captured by the camera into the lidar's visual region, the three-dimensional information data collected by the lidar and the two-dimensional information data collected by the camera are obtained.
Step S102: feed the three-dimensional and two-dimensional information data into the pre-trained three-dimensional detection model and two-dimensional detection model, respectively, to obtain the first three-dimensional detection frame and first two-dimensional detection frame of each target object in the target area.
The three-dimensional detection model is built on the PointPillars algorithm. Specifically, the three-dimensional sensor uses the PointPillars (3D object proposal generation and detection from point cloud) algorithm to perform three-dimensional target detection and obtain the first three-dimensional detection frame; the input is the sensor's raw point cloud data, i.e. the three-dimensional information data, in the format [x, y, z, intensity]. The raw feature of the three-dimensional sensor is a set of point clouds, representable as vectors [X, Y, Z, intensity] (for a 64-beam lidar, 1800 points per beam per revolution), where X is the point's abscissa in the horizontal plane, Y its ordinate in the horizontal plane, Z its height, and intensity its reflection intensity. A two-stage approach is used with PointNet++ as the backbone network: a segmentation task is completed first to determine the label of each three-dimensional point, a box is generated from the features of every point classified as foreground, and the box is then refined. The detection result for a target object is expressed as a vector containing x_lidar = [X_v, Y_v, Z_v]^T, the center point of the target object in the three-dimensional sensor coordinate system, and [L, W, H]^T, the length, width and height of the first three-dimensional detection frame.
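For illustration, a minimal sketch of reading raw point cloud data in the [x, y, z, intensity] format described above, assuming a KITTI-style binary file; the file layout and helper name are assumptions, not taken from the patent:

```python
import numpy as np

def load_point_cloud(bin_path: str) -> np.ndarray:
    """Read a raw LiDAR scan stored as consecutive float32 values into
    an (N, 4) array with columns [x, y, z, intensity]."""
    points = np.fromfile(bin_path, dtype=np.float32)  # hypothetical file layout
    return points.reshape(-1, 4)

# Example: pts = load_point_cloud("000000.bin"); pts[:, 3] holds the intensities.
```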
The two-dimensional detection model is built on the YOLOv3 algorithm. Specifically, YOLOv3 performs two-dimensional target detection on the two-dimensional information data acquired by the two-dimensional sensor to obtain the target center point. Taking a camera as an example, the captured picture is fed to the two-dimensional detection model in RGB format; the model applies a deep convolutional neural network to extract features from the RGB image. Common structures in such a network include convolutional layers, pooling layers, activation layers, dropout layers, BN (batch normalization) layers, fully connected layers, and so on, and the features finally extracted from the picture effectively describe the target object. The input is the camera's RGB image; the output is the center point coordinates in the image coordinate system together with the detection frame width and height, in the format [X_camera, Y_camera, w, h], where X_camera is the x-direction value in the image coordinate system, Y_camera the y-direction value, w the width of the first two-dimensional detection frame, and h its height. From the first two-dimensional detection frame, the coordinates of its four corner points are obtained as follows:
X_camera,1 = X_camera,3 = X_camera - w/2;
X_camera,2 = X_camera,4 = X_camera + w/2;
Y_camera,1 = Y_camera,2 = Y_camera + h/2;
Y_camera,3 = Y_camera,4 = Y_camera - h/2;
where X_camera,i (i = 1, 2, 3, 4) denote the x-direction coordinates, in the picture coordinate system, of the upper-left, upper-right, lower-left and lower-right corners of the first two-dimensional detection frame, and Y_camera,i (i = 1, 2, 3, 4) the corresponding y-direction coordinates.
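As a hedged sketch, the corner formulas above translate directly into code (names are illustrative):

```python
def box_corners_2d(x_c: float, y_c: float, w: float, h: float):
    """Four corners of a center-format 2D detection frame, following the
    formulas above: indices 1..4 = upper-left, upper-right, lower-left,
    lower-right."""
    x1 = x3 = x_c - w / 2.0  # left edge
    x2 = x4 = x_c + w / 2.0  # right edge
    y1 = y2 = y_c + h / 2.0
    y3 = y4 = y_c - h / 2.0
    return [(x1, y1), (x2, y2), (x3, y3), (x4, y4)]
```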
Step S103: feed the three-dimensional and two-dimensional information data into the pre-trained fusion detection model to obtain the second three-dimensional detection frame of each target object in the target area, and apply a coordinate-system change to the second three-dimensional detection frame to obtain the second two-dimensional detection frame.
Specifically, the fusion detection model is built on the AVOD algorithm (joint 3D proposal generation and object detection from view aggregation). After the three-dimensional and two-dimensional information data are obtained, they are fed to the AVOD algorithm for target detection to produce the second three-dimensional detection frame. AVOD fuses the two modalities; when the three-dimensional information data are point cloud data acquired by a lidar, AVOD uses only the bird's-eye view and front view of the point cloud, which reduces the amount of computation without losing too much information. Three-dimensional candidate regions are then generated, and the features and candidate regions are fused to output the final second three-dimensional detection frame.
Further, to improve the defensive capability with respect to the two-dimensional sensor, in some embodiments the following steps are performed before step S103 (a code sketch is given after this list):
1. Perform a chi-square comparison between the two-dimensional information data of the previous moment and that of the current moment to obtain a chi-square comparison value.
2. Determine whether the chi-square comparison value exceeds a preset threshold.
3. When the chi-square comparison value exceeds the preset threshold, perform histogram matching between the current and previous two-dimensional information data to obtain enhanced two-dimensional information data for the current moment, and replace the current two-dimensional information data with it.
4. When the chi-square comparison value does not exceed the preset threshold, keep the two-dimensional information data of the current moment.
In this embodiment, the purpose of this image enhancement is to improve the fusion detection model's defense against attacks on the two-dimensional sensor: the chi-square comparison between consecutive frames yields a comparison value, and only when that value exceeds the preset threshold is the current frame replaced by its histogram-matched, enhanced version; otherwise the current data are kept.
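A minimal sketch of this pre-fusion check, assuming OpenCV for the chi-square comparison and scikit-image for histogram matching; the threshold value is illustrative and not specified by the patent:

```python
import cv2
import numpy as np
from skimage.exposure import match_histograms

CHI2_THRESHOLD = 0.5  # hypothetical preset threshold

def enhance_if_drifted(prev_img: np.ndarray, curr_img: np.ndarray) -> np.ndarray:
    """Chi-square-compare the grayscale histograms of consecutive frames;
    if the distance exceeds the threshold, match the current frame's
    histogram to the previous frame's and return the enhanced image."""
    hists = []
    for img in (prev_img, curr_img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
        cv2.normalize(hist, hist)
        hists.append(hist)
    chi2 = cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CHISQR)
    if chi2 > CHI2_THRESHOLD:
        return match_histograms(curr_img, prev_img, channel_axis=-1)
    return curr_img
```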
Further, in step S103, changing the coordinate system of the second three-dimensional detection frame to obtain the second two-dimensional detection frame specifically comprises:
1. Obtain the three-dimensional center point coordinates of the second three-dimensional detection frame.
2. Use the center point coordinates to determine the homogeneous coordinates of the four points on the second three-dimensional detection frame lying in the same plane as the center point.
3. Compute four two-dimensional coordinate points from the pre-acquired camera projection matrix, camera rotation matrix and sensor-to-camera rotation matrix together with the homogeneous coordinates.
4. Construct the second two-dimensional detection frame from the four two-dimensional coordinate points.
Specifically, in this embodiment a second three-dimensional detection frame is described by four corner points (containing only x and y coordinates) and two heights. As shown in Figure 2, the second three-dimensional detection frame is expressed as a vector [c_1, c_2, c_3, c_4, h_1, h_2], where c_i = [x_ci, y_ci]^T (i = 1, 2, 3, 4) are the x- and y-direction coordinates of its four vertices, h_1 is the height of its bottom face above the plane formed by the x and y axes of the three-dimensional coordinate system, and h_2 is the height of its top face above that plane.
Specifically, after the second three-dimensional detection frame is obtained, it is projected into the image coordinate system to obtain the second two-dimensional detection frame. Referring to Figure 2, the vector form of the second three-dimensional detection frame is first converted into the conventional three-dimensional detection frame format Y_3D_fusion = [X_v, Y_v, Z_v]^T, where:
Z_v = (h_2 - h_1) / 2;
X_v = x_c1 - x_c2;
Y_v = y_c1 - y_c2;
where x_ci and y_ci are the x- and y-axis coordinates of point c_i (i = 1, 2, 3, 4), and [X_v, Y_v, Z_v]^T are the coordinates of the center point of the second three-dimensional detection frame.
Then the four points near the center of the second three-dimensional detection frame (points A, B, C and D in Figure 2) are converted to the image coordinate system in turn, yielding four points in that system. The conversion proceeds as follows:
Y_A_homogeneous = [X_v,1, Y_v,1, Z_v,1]^T = [X_v - (x_c2 - x_c1)/2, Y_v, h_2];
Y_B_homogeneous = [X_v,2, Y_v,2, Z_v,2]^T = [X_v + (x_c2 - x_c1)/2, Y_v, h_2];
Y_C_homogeneous = [X_v,3, Y_v,3, Z_v,3]^T = [X_v - (x_c2 - x_c1)/2, Y_v, h_1];
Y_D_homogeneous = [X_v,4, Y_v,4, Z_v,4]^T = [X_v + (x_c2 - x_c1)/2, Y_v, h_1];
Y = P * R * Tr_velo_to_cam * Y_n_homogeneous, n = A, B, C, D;
where Y denotes the coordinates of the second two-dimensional detection frame, Y_n_homogeneous denotes the homogeneous vector of the target object, P is the camera projection matrix, R is the camera rotation matrix, and Tr_velo_to_cam is the 3x4 rotation matrix from the three-dimensional coordinate system to the camera coordinate system. A minimal sketch of this projection is given below.
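A sketch of this projection chain; the calibration matrices are assumed to be given (and padded to compatible shapes), and the function is illustrative rather than the patent's own code:

```python
import numpy as np

def project_to_image(pts_lidar: np.ndarray, P: np.ndarray, R: np.ndarray,
                     Tr_velo_to_cam: np.ndarray) -> np.ndarray:
    """Apply Y = P * R * Tr_velo_to_cam * X to each 3D point.
    pts_lidar: (N, 3) points in the sensor frame; P: 3x4 projection matrix;
    R and Tr_velo_to_cam: assumed padded to 4x4. Returns (N, 2) pixel coords."""
    n = pts_lidar.shape[0]
    homo = np.hstack([pts_lidar, np.ones((n, 1))])   # homogeneous (N, 4)
    cam = (P @ R @ Tr_velo_to_cam @ homo.T).T        # (N, 3) image-plane points
    return cam[:, :2] / cam[:, 2:3]                  # divide by depth
```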
Step S104: compute, for each target object, the first IoU value between the first and second two-dimensional detection frames to obtain the first array, and the second IoU value between the first and second three-dimensional detection frames to obtain the second array.
Specifically, the IoU value characterizes the inconsistency between two sets of detection frames. The first IoU value, computed from the first and second two-dimensional detection frames, characterizes their inconsistency; with multiple target objects there are multiple first IoU values, which are assembled into the first array. Likewise, the second IoU value, computed from the first and second three-dimensional detection frames, characterizes their inconsistency, and the second IoU values of all objects form the second array. Note that both sets of detection frames correspond to the same target area, whose target objects are fixed; the first and second arrays therefore have the same number of elements, and every target object has a corresponding element in each array.
Further, in step S104, computing the first IoU value between the first and second two-dimensional detection frames for each target object to obtain the first array specifically comprises:
1. Identify the target first two-dimensional detection frame and the target second two-dimensional detection frame corresponding to each target object.
Specifically, since a scene usually contains multiple target objects and thus multiple detection frames, a first two-dimensional detection frame is first selected as the target first two-dimensional detection frame; the Euclidean distance between its center point and the center point of every second two-dimensional detection frame is then computed, and the second two-dimensional detection frame with the smallest Euclidean distance is selected as the target second two-dimensional detection frame.
2. Compute the first area of the target first two-dimensional detection frame, the second area of the target second two-dimensional detection frame, and the third area of their overlapping region.
Specifically, the coordinates of the four vertices of each two-dimensional detection frame are obtained, and the first and second areas are computed from them using the following formulas:
S_A = |x_a2 - x_a1| * |y_a2 - y_a1|;
S_B = |x_b2 - x_b1| * |y_b2 - y_b1|;
where S_A is the first area, with A1(x_a1, y_a1), B1(x_a1, y_a2), C1(x_a2, y_a1), D1(x_a2, y_a2) the four vertices of the target first two-dimensional detection frame, and S_B is the second area, with A2(x_b1, y_b1), B2(x_b1, y_b2), C2(x_b2, y_b1), D2(x_b2, y_b2) the four vertices of the target second two-dimensional detection frame.
Taking Figure 3 as an example, when the two detection frames overlap, the overlapping region is itself a rectangle whose four vertices can be derived from the vertices of the two frames: its upper-left corner is A2(x_b1, y_b1), lower-left corner E(x_b1, y_a2), upper-right corner F(x_a2, y_b1), and lower-right corner D1(x_a2, y_a2). The third area of the overlapping region is then computed from these four vertex coordinates.
3. Compute each target object's first IoU value from the first, second and third areas.
Specifically, first IoU value = third area / (first area + second area - third area).
4. Construct the first array from the first IoU values of all target objects.
Specifically, the first array is expressed as I = [i_1, i_2, ..., i_n], where n is the number of first IoU values. A sketch of the 2D IoU computation is given below.
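A hedged sketch of the first-IoU computation for one matched pair of frames, with boxes in (x_min, y_min, x_max, y_max) form derived from the corner coordinates above:

```python
def iou_2d(box_a, box_b) -> float:
    """First IoU value: overlap / (area_a + area_b - overlap)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    overlap = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)     # third area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])  # first area
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])  # second area
    return overlap / (area_a + area_b - overlap) if overlap > 0 else 0.0

# Illustrative use: first_array = [iou_2d(a, b) for a, b in matched_2d_pairs]
```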
Further, in step S104, computing the second IoU value between the first and second three-dimensional detection frames for each target object to obtain the second array specifically comprises:
1. Identify the target first three-dimensional detection frame and the target second three-dimensional detection frame corresponding to each target object.
Specifically, these are likewise identified via the Euclidean distance between the center point coordinates of the first and second three-dimensional detection frames.
2. Compute the first volume of the target first three-dimensional detection frame and the second volume of the target second three-dimensional detection frame.
Specifically, the length, width and height of a three-dimensional detection frame can be determined from the coordinates of its eight vertices, and its volume is computed from the length, width and height.
3. Compute the base area of the overlapping region of the target first and second three-dimensional detection frames.
4. Determine the height of the overlapping region of the target first and second three-dimensional detection frames.
5. Compute the third volume of the overlapping region from the base area and height.
It should be understood that the overlapping region of the first and second three-dimensional detection frames is likewise a cuboid, whose volume is the base area multiplied by the height. The target objects corresponding to both frames rest on the ground, so the bottom faces of the two frames lie in the same plane; the vertex coordinates of the bottom face of the overlapping region can therefore be determined from the four bottom-face vertices of each frame, and the base area of the overlapping region computed from those vertex coordinates in the same way as the overlap area of the two-dimensional detection frames, which is not repeated here. As for the height of the overlapping region: because the bottom faces lie in the same plane, the height of the overlap is the smaller of the two frames' heights. Once the base area and height of the overlapping region are known, its third volume is computed.
6. Compute each target object's second IoU value from the first, second and third volumes.
Specifically, second IoU value = third volume / (first volume + second volume - third volume).
7. Construct the second array from the second IoU values of all target objects.
Specifically, the second array is expressed as U = [u_1, u_2, ..., u_n], where n is the number of second IoU values. A sketch of the 3D IoU computation under the ground-plane assumption is given below.
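A hedged sketch of the second-IoU computation under the ground-plane assumption argued above; the box format (footprint plus height) is an assumption chosen for illustration:

```python
def iou_3d(box_a, box_b) -> float:
    """Second IoU for boxes (x_min, y_min, x_max, y_max, height) resting on
    a common ground plane; the overlap height is the smaller of the two heights."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    base = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)        # overlap base area
    overlap = base * min(box_a[4], box_b[4])                # third volume
    vol_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1]) * box_a[4]
    vol_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) * box_b[4]
    return overlap / (vol_a + vol_b - overlap) if overlap > 0 else 0.0
```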
Step S105: perform attack detection based on the first and second arrays and localize the attacked sensor.
Specifically, after the first and second arrays are obtained, both arrays are checked for inconsistency to determine whether a sensor has been attacked and to localize whether the attacked sensor is the three-dimensional or the two-dimensional sensor.
Further, step S105 specifically comprises (a sketch of the decision logic follows this list):
1. Determine whether the first array and the second array each contain an outlier.
2. When the first array contains an outlier and the second array does not, conclude that the three-dimensional sensor has been attacked.
3. When the first array contains no outlier and the second array does, conclude that the two-dimensional sensor has been attacked.
4. When both the first and second arrays contain outliers, conclude that the three-dimensional sensor and/or the two-dimensional sensor has been attacked.
5. When neither the first nor the second array contains an outlier, conclude that the three-dimensional and two-dimensional sensors have not been attacked.
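A sketch of this decision logic; the patent does not specify the outlier test, so the z-score criterion below is an assumption:

```python
import numpy as np

def has_outlier(values, z_thresh: float = 3.0) -> bool:
    """Flag any element more than z_thresh standard deviations from the mean
    (hypothetical outlier criterion)."""
    arr = np.asarray(values, dtype=float)
    if arr.size == 0 or arr.std() == 0:
        return False
    return bool((np.abs((arr - arr.mean()) / arr.std()) > z_thresh).any())

def locate_attack(first_array, second_array) -> str:
    """Map the outlier pattern of the two arrays to the four cases of step S105."""
    o1, o2 = has_outlier(first_array), has_outlier(second_array)
    if o1 and not o2:
        return "3D sensor attacked"
    if o2 and not o1:
        return "2D sensor attacked"
    if o1 and o2:
        return "3D and/or 2D sensor attacked"
    return "no attack detected"
```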
In the attack detection method for an autonomous driving system according to the embodiment of the present invention, target detection is performed separately on the three-dimensional information data acquired by the three-dimensional sensor and the two-dimensional information data acquired by the two-dimensional sensor, yielding a first three-dimensional detection frame and a first two-dimensional detection frame; fusion target detection is performed on the three-dimensional and two-dimensional information data to obtain a second three-dimensional detection frame, which is coordinate-transformed into a second two-dimensional detection frame; a first IoU value is computed between the first and second two-dimensional detection frames of each object to form the first array, and a second IoU value is computed between the first and second three-dimensional detection frames of each object to form the second array; the first and second arrays are then analyzed to determine whether a sensor has been attacked. Because the method exploits the temporal and spatial correlation between the three-dimensional sensor and the two-dimensional sensor, it improves the accuracy of attack detection; and because these sensors belong to the basic configuration of an autonomous vehicle, attack detection can be achieved without relying on a target detection algorithm built around a single sensor type, giving the method broader applicability.
Figure 4 is a schematic diagram of the functional modules of the attack detection device for an autonomous driving system according to an embodiment of the present invention. As shown in Figure 4, the attack detection device 20 of the autonomous driving system includes an acquisition module 21, a first detection module 22, a second detection module 23, a calculation module 24, and an analysis module 25.
The acquisition module 21 is configured to acquire three-dimensional information data and two-dimensional information data of a target area using a three-dimensional sensor and a two-dimensional sensor, respectively.
The first detection module 22 is configured to input the three-dimensional information data and the two-dimensional information data into a pre-trained three-dimensional detection model and a pre-trained two-dimensional detection model, respectively, to obtain a first three-dimensional detection frame and a first two-dimensional detection frame for each target object in the target area.
The second detection module 23 is configured to input the three-dimensional information data and the two-dimensional information data into a pre-trained fusion detection model to obtain a second three-dimensional detection frame for each target object in the target area, and to apply a coordinate-system transformation to the second three-dimensional detection frame to obtain a second two-dimensional detection frame.
The calculation module 24 is configured to calculate the first IoU value between the first two-dimensional detection frame and the second two-dimensional detection frame of each target object to obtain the first array, and to calculate the second IoU value between the first three-dimensional detection frame and the second three-dimensional detection frame of each target object to obtain the second array.
The analysis module 25 is configured to perform attack detection based on the first array and the second array and to locate the attacked sensor.
Optionally, before performing the operation of inputting the three-dimensional information data and the two-dimensional information data into the pre-trained fusion detection model, the second detection module 23 is further configured to: perform a chi-square comparison between the two-dimensional information data at the previous moment and the two-dimensional information data at the current moment to obtain a chi-square comparison value; determine whether the chi-square comparison value exceeds a preset threshold; when it does, perform histogram matching of the current-moment two-dimensional information data against the previous-moment two-dimensional information data to obtain enhanced two-dimensional information data for the current moment, which replaces the current-moment two-dimensional information data; and when it does not, keep the current-moment two-dimensional information data unchanged. A sketch of this pre-check follows.
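This is a minimal sketch of the optional pre-check, assuming the two-dimensional information data are camera frames, that the chi-square comparison is computed on grayscale intensity histograms via OpenCV, and that scikit-image's match_histograms performs the histogram matching; the threshold value is an assumed placeholder, as the embodiment does not specify one.

```python
import cv2
from skimage.exposure import match_histograms

def maybe_enhance(prev_frame, cur_frame, threshold=50.0):
    """Replace cur_frame with a histogram-matched version when its intensity
    distribution drifts too far (chi-square) from the previous frame's."""
    hists = []
    for img in (prev_frame, cur_frame):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([gray], [0], None, [256], [0, 256])
        hists.append(cv2.normalize(h, h).flatten())
    chi2 = cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CHISQR)
    if chi2 > threshold:
        # Match the current frame's histogram to the previous frame's.
        return match_histograms(cur_frame, prev_frame, channel_axis=-1)
    return cur_frame
```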
Optionally, the operation in which the second detection module 23 applies a coordinate-system transformation to the second three-dimensional detection frame to obtain the second two-dimensional detection frame specifically includes: obtaining the coordinates of the three-dimensional center point of the second three-dimensional detection frame; using the three-dimensional center point coordinates to determine the homogeneous coordinates of four points on the second three-dimensional detection frame lying in the same plane as the center point; computing four two-dimensional coordinate points from the pre-acquired camera projection matrix, the camera rotation matrix, the rotation matrix from the sensor to the camera coordinate system, and the homogeneous coordinates; and constructing the second two-dimensional detection frame from the four two-dimensional coordinate points. A sketch of this projection follows.
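The sketch below assumes a KITTI-style calibration convention in which the camera projection matrix P is 3x4, the camera rotation matrix R is 3x3, and the sensor-to-camera transform Tr is 3x4; the particular choice of the four coplanar points around the center (offset by assumed half-extents in the ground plane) is illustrative, not mandated by this embodiment.

```python
import numpy as np

def project_to_image(P, R, Tr, pts_hom):
    """Project Nx4 homogeneous sensor-frame points into Nx2 pixel coordinates
    via P * R * Tr (R and Tr padded to 4x4)."""
    R4 = np.eye(4); R4[:3, :3] = R
    Tr4 = np.eye(4); Tr4[:3, :4] = Tr
    cam = P @ R4 @ Tr4 @ pts_hom.T          # (3x4)(4x4)(4x4)(4xN) -> 3xN
    return (cam[:2] / cam[2]).T              # perspective divide by depth

def second_2d_frame(P, R, Tr, center, half_w, half_l):
    """Build four coplanar points around the 3D center point, project them,
    and take their axis-aligned 2D bounding box as the second 2D frame."""
    cx, cy, cz = center
    pts = np.array([[cx - half_w, cy - half_l, cz, 1.0],
                    [cx + half_w, cy - half_l, cz, 1.0],
                    [cx + half_w, cy + half_l, cz, 1.0],
                    [cx - half_w, cy + half_l, cz, 1.0]])
    uv = project_to_image(P, R, Tr, pts)
    return uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()
```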
Optionally, the operation in which the calculation module 24 calculates the first IoU value between the first and second two-dimensional detection frames of each target object to obtain the first array specifically includes: identifying the target first two-dimensional detection frame and the target second two-dimensional detection frame corresponding to each target object; calculating the first area of the target first two-dimensional detection frame, the second area of the target second two-dimensional detection frame, and the third area of their overlapping region; calculating the first IoU value for each target object from the first area, the second area, and the third area; and constructing the first array from the first IoU values of all target objects.
Optionally, the operation in which the calculation module 24 calculates the second IoU value between the first and second three-dimensional detection frames of each target object to obtain the second array specifically includes: identifying the target first three-dimensional detection frame and the target second three-dimensional detection frame corresponding to each target object; calculating the first volume of the target first three-dimensional detection frame and the second volume of the target second three-dimensional detection frame; calculating the base area of their overlapping region; determining the height of the overlapping region; calculating the third volume of the overlapping region from the base area and the height; calculating the second IoU value for each target object from the first, second, and third volumes; and constructing the second array from the second IoU values of all target objects.
Optionally, the operation in which the analysis module 25 performs attack detection based on the first and second arrays and locates the attacked sensor specifically includes: determining separately whether the first array and the second array contain outliers; when the first array contains outliers and the second does not, confirming that the three-dimensional sensor has been attacked; when the first array contains no outliers and the second does, confirming that the two-dimensional sensor has been attacked; when both arrays contain outliers, confirming that the three-dimensional sensor and/or the two-dimensional sensor has been attacked; and when neither array contains outliers, confirming that neither the three-dimensional sensor nor the two-dimensional sensor has been attacked.
Optionally, the three-dimensional sensor includes a lidar and the two-dimensional sensor includes a camera; the three-dimensional detection model is built on the PointPillar algorithm, the two-dimensional detection model is built on the YOLOv3 algorithm, and the fusion detection model is built on the AVOD algorithm.
For other details of the technical solutions implemented by the modules of the attack detection device in the above embodiment, refer to the description of the attack detection method for an autonomous driving system in the above embodiment; they are not repeated here.
It should be noted that the embodiments in this specification are described in a progressive manner, each embodiment focusing on its differences from the other embodiments; for the parts that are identical or similar across embodiments, the embodiments may be referred to one another. Since the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiments.
Please refer to Figure 5, which is a schematic structural diagram of a computer device according to an embodiment of the present invention. As shown in Figure 5, the computer device 30 includes a processor 31 and a memory 32 coupled to the processor 31. The memory 32 stores program instructions which, when executed by the processor 31, cause the processor 31 to perform the steps of the attack detection method for an autonomous driving system described in any of the above embodiments.
The processor 31 may also be called a CPU (Central Processing Unit). The processor 31 may be an integrated circuit chip with signal processing capability. The processor 31 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Refer to Figure 6, which is a schematic structural diagram of a storage medium according to an embodiment of the present invention. The storage medium of the embodiment of the present invention stores program instructions 41 capable of implementing the above attack detection method for an autonomous driving system. The program instructions 41 may be stored in the storage medium in the form of a software product and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, as well as computer devices such as computers, servers, mobile phones, and tablets.
In the several embodiments provided in this application, it should be understood that the disclosed computer device, apparatus, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units is only a logical functional division, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. The above are only embodiments of the present application and do not limit the patent scope of this application; any equivalent structural or process transformation made using the contents of the specification and drawings of this application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of this application.
Claims (10)
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310584658.0A (CN116912788A) | 2023-05-23 | 2023-05-23 | Attack detection method, device and equipment for automatic driving system and storage medium |
| PCT/CN2023/137634 (WO2024239605A1) | 2023-05-23 | 2023-12-08 | Attack detection method and apparatus for autonomous driving system, device, and storage medium |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310584658.0A (CN116912788A) | 2023-05-23 | 2023-05-23 | Attack detection method, device and equipment for automatic driving system and storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116912788A | 2023-10-20 |
Family
ID=88357100
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310584658.0A (CN116912788A, pending) | Attack detection method, device and equipment for automatic driving system and storage medium | 2023-05-23 | 2023-05-23 |
Country Status (2)

| Country | Link |
|---|---|
| CN | CN116912788A (en) |
| WO | WO2024239605A1 (en) |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2024239605A1 | 2023-05-23 | 2024-11-28 | 深圳先进技术研究院 | Attack detection method and apparatus for autonomous driving system, device, and storage medium |
Family Cites Families (7)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111626217B * | 2020-05-28 | 2023-08-22 | 宁波博登智能科技有限公司 | Target detection and tracking method based on two-dimensional picture and three-dimensional point cloud fusion |
| EP4330716A4 * | 2021-04-26 | 2025-02-26 | | MOBILE APERTURE LIDAR |
| CN113111978B * | 2021-06-11 | 2021-10-01 | 之江实验室 | Three-dimensional target detection system and method based on point cloud and image data |
| CN114187579A * | 2021-12-14 | 2022-03-15 | 智道网联科技(北京)有限公司 | Object detection method, device and computer-readable storage medium for automatic driving |
| CN114596358A * | 2022-03-03 | 2022-06-07 | 深圳一清创新科技有限公司 | Object detection method and device and electronic equipment |
| CN115453589A * | 2022-08-19 | 2022-12-09 | 中国科学院深圳先进技术研究院 | Attack detection method based on automatic driving, terminal device and storage medium |
| CN116912788A | 2023-05-23 | 2023-10-20 | 深圳先进技术研究院 | Attack detection method, device and equipment for automatic driving system and storage medium |
Also Published As

| Publication number | Publication date |
|---|---|
| WO2024239605A1 | 2024-11-28 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |