CN116125996B - Safety monitoring method and system for unmanned vehicle - Google Patents
- Publication number
- CN116125996B (application CN202310350094.4A)
- Authority
- CN
- China
- Prior art keywords
- data
- fusion
- unmanned vehicle
- vehicle
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Electromagnetism (AREA)
- Alarm Systems (AREA)
- Traffic Control Systems (AREA)
Abstract
The present invention proposes a safety monitoring method and system for unmanned vehicles, comprising: establishing a communication relationship between multiple unmanned vehicles in a preset area and a background server; while the multiple unmanned vehicles are driving in the preset area, performing environment perception on the interior of each unmanned vehicle to obtain first perception data, and on the exterior to obtain second perception data; and having the background server determine the type of each unmanned vehicle from the first and second perception data and perform the corresponding level of safety monitoring. This enables safety monitoring of multiple unmanned vehicles in a preset area while tailoring the monitoring level to each vehicle's type, which reduces the load on the background server and improves the utilization of monitoring resources.
Description
Technical Field
The present invention relates to the technical field of automotive electronics, and in particular to a safety monitoring method and system for an unmanned vehicle.
Background
An unmanned vehicle is a type of intelligent vehicle that relies mainly on an on-board sensing system to perceive the road environment, automatically plan a driving route, and control the vehicle to reach a predetermined destination. It uses on-board sensors to perceive the vehicle's surroundings and, based on the road, vehicle-position, and obstacle information obtained, controls the vehicle's steering and speed so that it can travel safely and reliably on the road. The prior art mostly monitors a single unmanned vehicle and cannot monitor multiple unmanned vehicles within a preset area; if the same level of safety monitoring were applied to every unmanned vehicle in the preset area, it would place an enormous load on the background server and waste resources.
Summary of the Invention
The present invention aims to solve, at least to some extent, one of the technical problems described above. To this end, a first object of the present invention is to propose a safety monitoring method for unmanned vehicles that monitors multiple unmanned vehicles within a preset area and, based on each vehicle's type, performs the corresponding level of safety monitoring, thereby reducing the load on the background server and improving the utilization of monitoring resources.
A second object of the present invention is to propose a safety monitoring system for unmanned vehicles.
To achieve the above objects, an embodiment of the first aspect of the present invention proposes a safety monitoring method for unmanned vehicles, comprising:
establishing a communication relationship between multiple unmanned vehicles in a preset area and a background server;
while the multiple unmanned vehicles are driving in the preset area, performing environment perception on the interior of each unmanned vehicle to obtain first perception data, and on the exterior of each unmanned vehicle to obtain second perception data;
the background server determining the type of each unmanned vehicle from the first perception data and the second perception data, and performing the corresponding level of safety monitoring according to that type.
According to some embodiments of the present invention, performing environment perception on the interior of the unmanned vehicle to obtain the first perception data comprises:
acquiring a control panel image of the vehicle-mounted terminal inside the unmanned vehicle and an interior scene image;
checking the image quality of the control panel image and the scene image and, when the quality is found to be substandard, performing parameter adjustment to obtain a target control panel image and a target scene image;
analyzing the target control panel image to determine the vehicle's control-node operation information, vehicle state parameter information, and on-board equipment operation information;
analyzing the target scene image to determine passenger behavior characteristics;
determining the first perception data from the control-node operation information, the vehicle state parameter information, the on-board equipment operation information, and the passenger behavior characteristics.
According to some embodiments of the present invention, checking the image quality of the control panel image and, when the quality is found to be substandard, performing parameter adjustment to obtain the target control panel image comprises:
using the Sobel operator, under a gradient algorithm, to compute a first gradient value of the control panel image in the horizontal direction and a second gradient value in the vertical direction;
querying a preset first-gradient/second-gradient/sharpness data table with the first and second gradient values to determine a sharpness value;
when the sharpness value is below a preset sharpness threshold, indicating substandard image quality, computing the difference between the threshold and the sharpness value, querying a difference/focus-correction data table with that difference to determine a focus correction value, and performing parameter adjustment according to the focus correction value to obtain the target control panel image.
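The sharpness check above can be sketched as follows. The patent's two lookup tables (gradient pair → sharpness, difference → focus correction) are not disclosed, so this sketch substitutes a simple average of the two mean gradient magnitudes for the table lookup, and the threshold is likewise an illustrative assumption.

```python
import numpy as np

# Sobel kernels for the horizontal (x) and vertical (y) directions
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Valid-mode 2D correlation, dependency-free for illustration."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sharpness(img):
    """First and second gradient values: mean absolute Sobel response
    horizontally and vertically. The average of the two stands in for
    the patent's gradient->sharpness lookup table (an assumption)."""
    g1 = np.abs(convolve2d(img, SOBEL_X)).mean()  # first gradient value
    g2 = np.abs(convolve2d(img, SOBEL_Y)).mean()  # second gradient value
    return (g1 + g2) / 2.0

def needs_refocus(img, threshold=10.0):
    """Returns (quality substandard?, threshold-minus-sharpness difference);
    the difference would index the difference->focus-correction table."""
    s = sharpness(img)
    return s < threshold, max(threshold - s, 0.0)
```

A blurred or featureless frame produces small gradient responses and a positive difference, which would then drive the focus correction.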
According to some embodiments of the present invention, performing environment perception on the exterior of the unmanned vehicle to obtain the second perception data comprises:
acquiring external environment images and radar information of the unmanned vehicle;
determining, from the external environment images and radar information, the obstacles around the unmanned vehicle and the collision risk with each obstacle, and determining the second perception data from that obstacle information and those collision risks.
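The patent does not specify how the collision risk is computed from the radar information; a common proxy is time-to-collision (range divided by closing speed), so the following sketch uses that as an assumed stand-in, with illustrative thresholds.

```python
def collision_risk(range_m: float, closing_speed_mps: float,
                   ttc_warn_s: float = 5.0) -> float:
    """Rough collision-risk score in [0, 1] from radar range and closing
    speed (positive = approaching). A time-to-collision below ttc_warn_s
    maps linearly to increasing risk. The 5 s window is an illustrative
    assumption, not a value from the patent."""
    if closing_speed_mps <= 0:          # moving apart or static: no risk
        return 0.0
    ttc = range_m / closing_speed_mps   # seconds until contact
    if ttc >= ttc_warn_s:
        return 0.0
    return 1.0 - ttc / ttc_warn_s

def second_perception(obstacles):
    """Assemble the second perception data: each obstacle record plus
    its per-obstacle collision risk."""
    return [
        {**ob, "risk": collision_risk(ob["range_m"], ob["closing_speed_mps"])}
        for ob in obstacles
    ]
```

For example, a pedestrian 10 m away closing at 4 m/s has a 2.5 s time-to-collision and scores 0.5, while a receding car scores 0.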
According to some embodiments of the present invention, the background server determining the types of the multiple unmanned vehicles from the first and second perception data, and performing the corresponding level of safety monitoring according to those types, comprises:
the background server feeding the first perception data into a pre-trained data classification model and outputting several items of first classified data;
feeding the second perception data into the pre-trained data classification model and outputting several items of second classified data;
fusing first and second classified data of the same data type into one fusion group, thereby obtaining several fusion groups;
feeding each fusion group into its corresponding single recognition model and outputting a first analysis result; when at least one of the first analysis results indicates the presence of risk data, classifying the unmanned vehicle corresponding to the fusion data as a first-type vehicle and performing level-one safety monitoring.
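A minimal sketch of the grouping-and-screening step above: classified records are fused by shared data type, and each fusion group is checked by its single recognition model. The classification and recognition models are stand-ins here (plain predicates), since the patent does not specify their internals.

```python
from collections import defaultdict

def fuse_by_type(first_classified, second_classified):
    """Group first- and second-perception records that share a data type
    into one fusion group per type (the 'several sets of fusion data')."""
    groups = defaultdict(list)
    for record in first_classified + second_classified:
        groups[record["type"]].append(record["value"])
    return dict(groups)

def vehicle_is_first_type(fusion_groups, single_models):
    """Run each fusion group through its single recognition model; if any
    model flags risk data, the vehicle is first-type and gets level-one
    monitoring. single_models maps data type -> risk predicate (a
    placeholder for the trained models of the patent)."""
    return any(single_models[t](values) for t, values in fusion_groups.items())
```

The field names (`type`, `value`) and the example predicates below are hypothetical, purely to show the data flow.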
According to some embodiments of the present invention, the method further comprises:
when all of the first analysis results indicate the absence of risk data, selecting target fusion data from the fusion groups;
performing feature extraction on every fusion group other than the target fusion data to obtain feature vectors, and converting each feature vector to the type of the target fusion data to obtain converted feature vectors;
matching the target fusion data against each converted feature vector to determine the association relationship between them;
determining a target set according to the association relationships;
generating a data system from the target set;
feeding the data system into a composite recognition model and outputting a second analysis result;
when the second analysis result indicates the presence of risk data, classifying the unmanned vehicle corresponding to the fusion data as a second-type vehicle and performing level-two safety monitoring; when it indicates the absence of risk data, classifying the vehicle as a third-type vehicle and performing level-three safety monitoring.
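Taken together with the preceding embodiment, the three-way decision reduces to a small dispatch rule, sketched below (the boolean results stand in for the model outputs):

```python
def classify_vehicle(first_results, composite_result):
    """Map analysis results to a vehicle type: any single-model risk ->
    type 1 (level-one monitoring); otherwise the composite model's
    verdict picks type 2 (risk, level-two) or type 3 (no risk,
    level-three)."""
    if any(first_results):          # at least one first analysis result has risk
        return 1
    return 2 if composite_result else 3
```

Note the composite model is only consulted when every single recognition model reports no risk, which matches the cascaded screening described above.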
According to some embodiments of the present invention, the single recognition model is trained as follows:
acquiring sample fusion data;
preprocessing the sample fusion data in the data layer of the single recognition model to determine risk factors;
performing data analysis on the risk factors in the indicator layer of the single recognition model to determine several indicators, and screening them to obtain risk indicators;
combining the risk indicators in the model-parameter layer of the single recognition model to obtain multiple combinations, and selecting the combination with the highest risk-probability prediction accuracy as the target combination;
outputting, from the output layer of the single recognition model, a prediction for the sample fusion data; training is complete when the prediction matches the ground-truth result for the sample fusion data.
According to some embodiments of the present invention, determining the target set according to the association relationships comprises:
taking the target fusion data as a key node and the converted feature vectors as associated nodes of the key node;
constructing a screening system from the key node, the associated nodes, and the association relationships, the screening system including the distance from each associated node to the key node;
screening out the associated nodes whose distance is below a preset distance value as target associated nodes;
determining the target set from the target associated nodes and the key node.
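The screening system above can be sketched directly; the patent leaves both the distance metric and the preset distance open, so both are assumptions here (the distance could, for instance, be 1 minus the cosine similarity between the target fusion data and a converted feature vector).

```python
def build_target_set(key_node, associated, max_distance=0.8):
    """Screening system: keep only the associated nodes whose distance
    to the key node is below the preset distance value, then form the
    target set from the key node plus the retained nodes. Node names,
    distances, and the 0.8 threshold are illustrative assumptions."""
    retained = [name for name, dist in associated.items() if dist < max_distance]
    return {key_node, *retained}
```

Distant (weakly associated) converted feature vectors are thereby excluded before the data system is generated for the composite recognition model.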
According to some embodiments of the present invention, the composite recognition model is trained as follows:
determining big data of unmanned vehicles and constructing graph text of driving scenarios from it;
extracting the nodes in the graph text, determining the association relationships between them, and constructing a knowledge graph of the data chain corresponding to the driving scenario;
determining the risk-indicator information of each node;
extracting association feature vectors from the knowledge graph and performing feature fusion on them together with the risk-indicator information to obtain fusion vectors;
training the composite recognition model on the fusion vectors.
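The feature-fusion step can be sketched as a concatenation of a node's association feature vector with its risk-indicator values; concatenation is only one common fusion choice, assumed here because the patent does not fix the operator.

```python
import numpy as np

def fuse_features(assoc_vectors, risk_indicators):
    """Feature fusion for composite-model training: flatten the
    association feature vectors extracted from the knowledge graph and
    concatenate them with the node's risk-indicator information to form
    one fusion vector (a sketch, not the patent's exact operator)."""
    return np.concatenate([
        np.asarray(assoc_vectors, dtype=float).ravel(),
        np.asarray(risk_indicators, dtype=float).ravel(),
    ])
```

The resulting fusion vectors would then serve as training inputs for the composite recognition model.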
To achieve the above objects, an embodiment of the second aspect of the present invention proposes a safety monitoring system for unmanned vehicles, comprising:
an establishing module, configured to establish a communication relationship between multiple unmanned vehicles in a preset area and a background server;
a perception module, configured to perform, while the multiple unmanned vehicles are driving in the preset area, environment perception on the interior of each unmanned vehicle to obtain first perception data, and on the exterior to obtain second perception data;
the background server, configured to determine the type of each unmanned vehicle from the first and second perception data and perform the corresponding level of safety monitoring according to that type.
The present invention proposes a safety monitoring method and system for unmanned vehicles, with the following beneficial effects:
1. Safety monitoring of multiple unmanned vehicles within a preset area is realized while, based on each vehicle's type, the corresponding level of safety monitoring is performed, reducing the load on the background server and improving the utilization of monitoring resources.
2. Environment perception of both the interior and the exterior of each unmanned vehicle yields the first and second perception data, giving an accurate picture of the environment the vehicle is in and more comprehensive data.
3. When distinguishing vehicle types, single data are identified with a single recognition model and composite data with a composite recognition model, allowing first-, second-, and third-type vehicles to be judged accurately and comprehensively and the corresponding level of safety monitoring to be performed.
Additional features and advantages of the invention will be set forth in the description that follows and will in part be apparent from it, or may be learned by practicing the invention. The objects and other advantages of the invention may be realized and attained by the structures particularly pointed out in the written description and the appended drawings.
The technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
The accompanying drawings provide a further understanding of the present invention and form a part of the description; together with the embodiments, they serve to explain the invention and do not limit it. In the drawings:
Fig. 1 is a flowchart of a safety monitoring method for an unmanned vehicle according to an embodiment of the present invention;
Fig. 2 is a flowchart of a method for obtaining the first perception data according to an embodiment of the present invention;
Fig. 3 is a block diagram of a safety monitoring system for an unmanned vehicle according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here only illustrate and explain the present invention and do not limit it.
As shown in Fig. 1, an embodiment of the first aspect of the present invention proposes a safety monitoring method for unmanned vehicles, comprising steps S1-S3:
S1. Establish a communication relationship between multiple unmanned vehicles in a preset area and a background server;
S2. While the multiple unmanned vehicles are driving in the preset area, perform environment perception on the interior of each unmanned vehicle to obtain first perception data, and on the exterior to obtain second perception data;
S3. The background server determines the type of each unmanned vehicle from the first and second perception data and performs the corresponding level of safety monitoring according to that type.
How the above technical solution works: establishing a communication relationship between the multiple unmanned vehicles in the preset area and the background server allows the server to obtain initial monitoring information for those vehicles, acquired by the perception modules mounted on them. The preset area is the planned operating area of the unmanned vehicles.
In this embodiment, the first perception data are acquired by environment perception of the vehicle interior: for example, control-node operation information, vehicle state parameter information, on-board equipment operation information, and passenger behavior characteristics.
In this embodiment, the second perception data are acquired by environment perception of the vehicle exterior: for example, the obstacles around the unmanned vehicle and the collision risk with each obstacle, determined from external environment images and radar information.
Together, the first and second perception data give a comprehensive picture of the vehicle's interior and exterior, which in turn lets the background server accurately determine each vehicle's type.
In this embodiment, unmanned vehicles are classified as first-type, second-type, or third-type vehicles, subject to level-one, level-two, and level-three safety monitoring respectively.
In this embodiment, the monitoring requirements of level-one, level-two, and level-three safety monitoring decrease in that order; correspondingly, level-one monitoring places the greatest load on the background server, level-two the next greatest, and level-three the least. For example, under level-one monitoring of a first-type vehicle, the environment-perception data are polled at an interval of 1 s; under level-two monitoring of a second-type vehicle, at 2 s; and under level-three monitoring of a third-type vehicle, at 3 s.
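Using the example intervals above (1 s / 2 s / 3 s), the load reduction from tiered monitoring can be illustrated by comparing aggregate poll rates against monitoring every vehicle at level one:

```python
# Polling intervals per vehicle type, taken from the example above
MONITOR_INTERVAL_S = {1: 1.0, 2: 2.0, 3: 3.0}

def polls_per_second(fleet_types):
    """Aggregate server load (polls/second) for a fleet under tiered
    monitoring, versus monitoring every vehicle at level one."""
    tiered = sum(1.0 / MONITOR_INTERVAL_S[t] for t in fleet_types)
    uniform = len(fleet_types) / MONITOR_INTERVAL_S[1]
    return tiered, uniform
```

For a fleet of one first-type, two second-type, and three third-type vehicles, tiered monitoring needs about 3 polls/s instead of 6, halving the server's polling load.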
Beneficial effects of the above technical solution: safety monitoring of multiple unmanned vehicles within a preset area is realized while, based on each vehicle's type, the corresponding level of safety monitoring is performed, reducing the load on the background server and improving the utilization of monitoring resources.
As shown in Fig. 2, according to some embodiments of the present invention, performing environment perception on the interior of the unmanned vehicle to obtain the first perception data comprises steps S21-S25:
S21. Acquire a control panel image of the vehicle-mounted terminal inside the unmanned vehicle and an interior scene image;
S22. Check the image quality of the control panel image and the scene image and, when the quality is found to be substandard, perform parameter adjustment to obtain a target control panel image and a target scene image;
S23. Analyze the target control panel image to determine the vehicle's control-node operation information, vehicle state parameter information, and on-board equipment operation information;
S24. Analyze the target scene image to determine passenger behavior characteristics;
S25. Determine the first perception data from the control-node operation information, the vehicle state parameter information, the on-board equipment operation information, and the passenger behavior characteristics.
Working principle of the above technical solution: In this embodiment, the control panel image of the vehicle-mounted terminal is, for example, an acquired image of the central control screen inside the unmanned vehicle.
In this embodiment, the interior scene images are images of the front and rear rows inside the unmanned vehicle, including images of the seated passengers.
In this embodiment, the target control panel image is the control panel image whose quality meets the standard, obtained by performing parameter adjustment processing on the control panel image.
In this embodiment, the target scene image is the scene image whose quality meets the standard, obtained by performing parameter adjustment processing on the scene image.
In this embodiment, the parameter adjustment processing includes sharpness adjustment processing.
In this embodiment, the control node operating information is the operating status information of the nodes in the control software carried by the unmanned vehicle; the nodes include hardware driver nodes and human-machine interaction nodes. The vehicle state parameter information includes the vehicle's brake pedal parameters, accelerator pedal parameters, steering wheel torque parameters, and the like. The operating information of the in-vehicle equipment includes whether each device is running normally and how it responds to commands.
In this embodiment, the target scene image is parsed to determine the behavior characteristics of the passengers, which facilitates collecting those characteristics. The behavior characteristics include whether seat belts are fastened, whether facial expressions are normal, and so on.
Beneficial effects of the above technical solution: acquiring the control panel image of the vehicle-mounted terminal and the interior scene image; checking their image quality and, when the quality is below standard, performing parameter adjustment processing to obtain a target control panel image and a target scene image whose quality meets the standard. The first perception data is then determined from the control node operating information, the vehicle state parameter information, the operating information of the in-vehicle equipment, and the behavior characteristics of the passengers. This allows the interior information of the unmanned vehicle to be collected comprehensively, improving the completeness of the first perception data; collecting passenger behavior characteristics at the same time brings the user factor into the analysis and raises the level of safety monitoring.
According to some embodiments of the present invention, checking the image quality of the control panel image and, when the quality is determined to be below standard, performing parameter adjustment processing to obtain the target control panel image includes:
calculating, with the Sobel operator of a gradient algorithm, a first gradient value of the control panel image in the horizontal direction and a second gradient value in the vertical direction;
querying a preset first-gradient-value/second-gradient-value/sharpness data table with the first and second gradient values to determine a sharpness value;
when the sharpness value is determined to be less than a preset sharpness threshold, indicating that the image quality is below standard, determining the difference between the preset sharpness threshold and the sharpness value, querying a difference/focus-correction-value data table with that difference to determine a focus correction value, and performing parameter adjustment processing according to the focus correction value to obtain the target control panel image.
Working principle of the above technical solution: In this embodiment, the gradient algorithm includes the Tenengrad gradient algorithm or the Laplacian gradient algorithm. The larger the determined first and second gradient values, the larger the sharpness value of the control panel image they represent.
In this embodiment, the preset first-gradient-value/second-gradient-value/sharpness data table is obtained from repeated experiments; the sharpness is determined by a lookup against the first and second gradient values.
Beneficial effects of the above technical solution: obtaining a control panel image whose sharpness value meets the standard, i.e. the target control panel image, helps improve the accuracy of parsing the target control panel image.
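The Sobel-based sharpness check can be sketched as follows. The experimentally obtained lookup tables (gradient values to sharpness, and sharpness deficit to focus correction) are replaced here by simple monotone functions, which are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

# 3x3 Sobel kernels for the horizontal and vertical gradient values
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive 'valid' 2-D convolution, sufficient for a 3x3 Sobel kernel."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sharpness(img):
    """Tenengrad-style score from the first (horizontal) and second
    (vertical) gradient values; a sum stands in for the lookup table."""
    gx = np.abs(convolve2d(img, SOBEL_X)).mean()  # first gradient value
    gy = np.abs(convolve2d(img, SOBEL_Y)).mean()  # second gradient value
    return gx + gy

def focus_correction(sharp, threshold=10.0):
    """Return a focus-correction value when quality is below standard,
    else None; the linear deficit->correction map is hypothetical."""
    if sharp >= threshold:
        return None                 # image quality meets the standard
    deficit = threshold - sharp     # difference queried in the data table
    return round(0.1 * deficit, 3)
```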
In one embodiment, the target scene image is obtained by the same method as the target control panel image.
According to some embodiments of the present invention, performing environment perception on the exterior of the unmanned vehicle to obtain the second perception data includes:
acquiring external environment images and radar information of the unmanned vehicle;
determining, from the external environment images and the radar information, the obstacle information around the unmanned vehicle and the collision risk with each obstacle, and determining the second perception data from that obstacle information and those collision risks.
Working principle of the above technical solution: by analyzing the external environment images and radar information, it can be determined which obstacles are present nearby, together with each obstacle's position, motion state, and distance from the unmanned vehicle; the resulting obstacle information and per-obstacle collision risks serve as the second perception data.
Beneficial effects of the above technical solution: this makes it convenient to accurately determine the external environment perception information of the unmanned vehicle, i.e. the second perception data.
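A minimal sketch of turning fused camera/radar detections into per-obstacle collision risks, i.e. the second perception data. The time-to-collision thresholds (2 s and 5 s) are illustrative assumptions; the disclosure only requires some collision-risk value per obstacle.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    obstacle_id: str
    distance_m: float         # range from radar
    closing_speed_mps: float  # positive means approaching the vehicle

def collision_risk(ob: Obstacle) -> str:
    """Classify risk by time-to-collision (TTC): high below 2 s, medium below 5 s."""
    if ob.closing_speed_mps <= 0:
        return "low"  # obstacle is stationary relative to us or moving away
    ttc = ob.distance_m / ob.closing_speed_mps
    return "high" if ttc < 2.0 else "medium" if ttc < 5.0 else "low"

def second_perception_data(obstacles):
    """Obstacle information plus per-obstacle collision risk."""
    return [{"id": ob.obstacle_id, "distance_m": ob.distance_m,
             "risk": collision_risk(ob)} for ob in obstacles]
```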
According to some embodiments of the present invention, the background server determining the types of the multiple unmanned vehicles from the first and second perception data, and performing the corresponding level of safety monitoring according to those types, includes:
the background server inputting the first perception data into a pre-trained data classification model for classification and outputting several pieces of first classification data;
inputting the second perception data into the pre-trained data classification model for classification and outputting several pieces of second classification data;
fusing the first classification data and the second classification data of the same data type into one group of fusion data, thereby obtaining several groups of fusion data;
inputting the several groups of fusion data into their corresponding single recognition models and outputting first analysis results; when at least one of the first analysis results is determined to indicate the presence of risk data, determining that the unmanned vehicle corresponding to the fusion data is a first-type vehicle and performing first-level safety monitoring.
Working principle of the above technical solution: In this embodiment, the first and second perception data each include data of different types, such as images and text.
In this embodiment, the pieces of first classification data include image data, text data, and so on; likewise, the pieces of second classification data include image data, text data, and so on.
In this embodiment, the first and second classification data of the same data type are fused into one group of fusion data, yielding several groups; for example, the image data in the first classification data is fused with the image data in the second classification data.
In this embodiment, a single recognition model is a model that recognizes a single data type; an example is an image recognition model, which recognizes only image-type data.
In this embodiment, the groups of fusion data are input into their corresponding single recognition models, which output the first analysis results; for example, fusion data A is recognized by single recognition model A and fusion data B by single recognition model B. Recognizing fused data improves recognition efficiency, while using a dedicated single recognition model improves the recognition accuracy for the corresponding data.
In this embodiment, the recognition result of a group of fusion data by its corresponding single recognition model judges whether that group contains risk data. Risk data covers whether the vehicle's control information is correct, whether a passenger is unwell, whether the current control command carries an extreme collision risk, and so on.
Beneficial effects of the above technical solution: each group of fusion data corresponds to the fusion of the same type of internal and external perception data of an unmanned vehicle, which improves the subsequent processing efficiency for data of the same type. Feeding each group into its single recognition model and outputting the first analysis results improves recognition efficiency (by working on fused data) and recognition accuracy (by using type-specific models). When at least one first analysis result indicates risk data, the corresponding unmanned vehicle is determined to be a first-type vehicle and first-level safety monitoring is performed, so first-type vehicles are identified accurately and monitored at the matching level.
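The classify/fuse/recognize flow above can be sketched as follows. The trained classification and single recognition models are replaced by trivial stand-ins; only the control flow (group by type, fuse same-type pairs, flag first-type on any risky group) follows the description.

```python
def classify(perception):
    """Stand-in for the data classification model: group records by data type."""
    groups = {}
    for dtype, payload in perception:
        groups.setdefault(dtype, []).append(payload)
    return groups

def fuse(first, second):
    """Fuse same-type first/second classification data into groups of fusion data."""
    return {t: first.get(t, []) + second.get(t, [])
            for t in set(first) | set(second)}

def monitor_level(fused, single_models):
    """First-level monitoring if any type-specific model reports risk data;
    otherwise the composite check described later decides the type."""
    results = [single_models[t](group) for t, group in fused.items()]
    return ("first_type", 1) if any(results) else ("needs_composite_check", None)
```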
According to some embodiments of the present invention, the method further includes:
when all of the first analysis results are determined to indicate no risk data, selecting target fusion data from the several groups of fusion data;
performing feature extraction on every group of fusion data other than the target fusion data to obtain feature vectors, and converting each feature vector to the type of the target fusion data to obtain converted feature vectors;
matching the target fusion data against each converted feature vector to determine the association between the target fusion data and each converted feature vector;
determining a target set according to the associations;
generating a data system according to the target set;
inputting the data system into a composite recognition model and outputting a second analysis result;
when the second analysis result indicates the presence of risk data, determining that the unmanned vehicle corresponding to the fusion data is a second-type vehicle and performing second-level safety monitoring; when the second analysis result indicates no risk data, determining that the unmanned vehicle corresponding to the fusion data is a third-type vehicle and performing third-level safety monitoring.
Working principle of the above technical solution: In this embodiment, the target fusion data is data of a preset type; assume the target fusion data is of type A, serving as the standardized data.
In this embodiment, a feature vector represents the key data of a group of fusion data. Each feature vector is converted to the type of the target fusion data to obtain a converted feature vector; for example, fusion data B and C are both converted into data of type A.
In this embodiment, the converted feature vectors are consistent with the feature vector of the target fusion data, which facilitates data analysis; matching the target fusion data against each converted feature vector determines their associations. Standardizing on the type of the target fusion data unifies all types of data and establishes comprehensive, accurate associations.
In this embodiment, the target set determined from the associations includes the target fusion data together with the converted feature vectors that are highly associated with it. Converted feature vectors with low association to the target fusion data are discarded, reducing the data volume and speeding up subsequent processing.
In this embodiment, the data system is a complete data model built by integrating the data resources of the target set as a whole; it enables efficient processing of the logical relationships in the data.
In this embodiment, the second analysis result is the recognition result obtained by inputting the data system into the composite recognition model, judging whether risk data exists in the data system.
In this embodiment, the composite recognition model includes recognition rules for complex relationships, not single-type data recognition.
Beneficial effects of the above technical solution: the composite recognition model makes it possible to determine comprehensively and accurately the risk data that emerges once the various fusion data are combined, and thus to identify second-type and third-type vehicles accurately. Recognizing from a single aspect with the single recognition models and from a composite aspect with the composite recognition model allows the first, second, and third vehicle types to be judged accurately and comprehensively, and the corresponding level of safety monitoring to be performed.
According to some embodiments of the present invention, a single recognition model is trained by a method including:
obtaining sample fusion data;
preprocessing the sample fusion data in the data layer of the single recognition model to determine risk factors;
performing data analysis on the risk factors in the indicator layer of the single recognition model to determine several indicators, and screening them to obtain risk indicators;
combining the risk indicators in the model-parameter layer of the single recognition model to obtain multiple combination results, and selecting the combination with the highest risk-probability prediction accuracy as the target combination result;
outputting, from the output layer of the single recognition model, the prediction result for the sample fusion data; when the prediction result is consistent with the true result corresponding to the sample fusion data, training is complete.
Working principle of the above technical solution: In this embodiment, the risk factors include factors affecting vehicle control safety and factors affecting passenger riding safety.
In this embodiment, the risk indicators include standardized indicators formed by the indicator layer's data analysis of the risk factors.
Beneficial effects of the above technical solution: the corresponding single recognition models are trained separately on different types of sample fusion data, and the training covers the data layer, indicator layer, model-parameter layer, and output layer of each model, so accurate model parameters are obtained and each single recognition model is trained accurately.
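The four-layer training flow can be sketched as follows. The preprocessing rule, the screening weight, and the accuracy scorer are all illustrative assumptions standing in for the layers described above; only the shape of the pipeline (data layer, indicator layer, model-parameter layer) follows the text.

```python
from itertools import combinations

def data_layer(samples):
    """Data layer: preprocess sample fusion data into usable risk factors
    (here, simply drop records marked invalid)."""
    return [s for s in samples if s.get("valid", True)]

def indicator_layer(factors, min_weight=0.5):
    """Indicator layer: analyze factors into indicators and screen out
    the weak ones; the weight threshold is a hypothetical rule."""
    return [f["name"] for f in factors if f.get("weight", 0) >= min_weight]

def parameter_layer(indicators, accuracy_of):
    """Model-parameter layer: try every indicator combination and keep the
    one with the highest risk-probability prediction accuracy."""
    return max((c for r in range(1, len(indicators) + 1)
                for c in combinations(indicators, r)),
               key=accuracy_of)
```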
According to some embodiments of the present invention, determining the target set according to the associations includes:
taking the target fusion data as the key node and the converted feature vectors as associated nodes of the key node;
constructing a screening system from the key node, the associated nodes, and the associations, the screening system including the distance value from each associated node to the key node;
screening out the associated nodes whose distance value is less than a preset distance value as target associated nodes;
determining the target set from the target associated nodes and the key node.
Working principle of the above technical solution: In this embodiment, the associated nodes whose distance value is less than the preset distance value are screened out as target associated nodes, i.e. the nodes with a high degree of association.
In this embodiment, the key node serves as the central and dominant node.
The screening system is a relationship topology graph generated from the key node, the associated nodes, and the associations; it displays the associations between the key node and the associated nodes intuitively, so the distance value from each associated node to the key node can be determined.
Beneficial effects of the above technical solution: building the screening system displays the key node, associated nodes, and associations clearly, and quantifies the distance from each associated node to the key node in the screening system; this in turn makes it possible to determine the target associated nodes, and hence the target set, accurately.
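The distance-based screening above can be sketched directly. Euclidean distance between vectors is an assumed association measure; the disclosure only requires some distance value from each associated node to the key node and a preset cutoff.

```python
import math

def build_target_set(key_vec, assoc_vecs, max_dist=1.0):
    """Keep the key node (target fusion data) plus the associated nodes
    (converted feature vectors) whose distance to it is below max_dist."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    kept = {name: v for name, v in assoc_vecs.items()
            if dist(key_vec, v) < max_dist}
    return {"key": key_vec, "associated": kept}
```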
According to some embodiments of the present invention, the composite recognition model is trained by a method including:
determining big data of unmanned vehicles and constructing map texts of driving scenes from the big data;
extracting the nodes in each map text, determining the associations between the nodes, and constructing a knowledge graph of the data chain corresponding to each driving scene;
determining the risk indicator information of each node;
extracting the association feature vectors in the knowledge graph and performing feature fusion on the association feature vectors and the risk indicator information to obtain fusion vectors;
training the composite recognition model on the fusion vectors.
Working principle of the above technical solution: In this embodiment, the map texts cover the various driving scenes constructed from the big data of unmanned vehicles.
In this embodiment, the composite recognition model is a risk-propagation fusion model; by recognizing the data system as a whole, it identifies overall risk data more accurately.
In this embodiment, the composite recognition model is trained on entire driving scenes, so the resulting model judges the overall driving risk.
In this embodiment, the composite recognition model is a deep neural network model.
In this embodiment, each node corresponds to the fusion data of the same type of internal and external perception data of an unmanned vehicle, the fusion data having been converted into the converted feature vector that matches the target fusion data.
In this embodiment, each driving scene in the knowledge graph corresponds to one data chain, i.e. the chain of relationships among its nodes.
In this embodiment, the risk indicator information consists of the dynamic and static risk parameters of the subject corresponding to each node. The subject of each node is a passenger or the vehicle. A passenger's dynamic risk parameters include various changes in body movement; the static risk parameters include the passenger's relatively fixed position, such as the front or rear row. The vehicle's dynamic risk parameters include changes in its control and actuation parameters; its static risk parameters are fixed properties, such as a component being in a fixed position.
In this embodiment, the association feature vectors extracted from the knowledge graph represent the associations among the nodes of a data chain.
In this embodiment, a fusion vector is obtained by fusing the associations among the nodes with the risk indicator information of each node; it represents both the logical relationships among the nodes and the overall risk output.
Beneficial effects of the above technical solution: extracting the association feature vectors from the knowledge graph, fusing them with the risk indicator information into fusion vectors, and training the composite recognition model on those vectors yields an accurate composite recognition model. In training, constructing map texts of driving scenes from big data, extracting their nodes and the associations between them, and building the knowledge graph of each scene's data chain make it possible to determine accurately the driving scene an unmanned vehicle actually faces and to judge the logical relationships among the nodes correctly.
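Preparing the composite model's training input can be sketched as follows. The feature encoding (node degree as the association feature, concatenated with each node's dynamic/static risk parameters) is an illustrative assumption; the disclosure only specifies that association features and risk indicator information are fused into one vector per data chain.

```python
def build_data_chain(nodes, edges):
    """One driving scene's knowledge-graph fragment: nodes and their relations."""
    return {"nodes": nodes, "edges": edges}

def fusion_vector(chain, risk_info):
    """Fuse association features (here: node degree) with each node's
    dynamic/static risk parameters into one flat training vector."""
    degree = {n: 0 for n in chain["nodes"]}
    for a, b in chain["edges"]:
        degree[a] += 1
        degree[b] += 1
    vec = []
    for n in chain["nodes"]:
        dynamic, static = risk_info[n]   # dynamic and static risk parameters
        vec.extend([degree[n], dynamic, static])
    return vec
```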
According to some embodiments of the present invention, generating the data system according to the target set includes:
extracting from the target set the dimension fields for data analysis, the description fields describing the dimensions, and the summary fields for statistics;
recomputing the summary fields and correcting the statistical parameters;
building a description script from the description fields, establishing a running program in the description script, and modeling the data system according to the running program and the corrected statistical parameters;
during the modeling, analyzing the dimension fields with a cross-indexing technique and generating the data system from the results of that analysis.
Working principle and beneficial effects of the above technical solution: the data system is built by extracting, from the target set, the dimension fields for data analysis, the description fields describing the dimensions, and the summary fields for statistics; the data system therefore includes dimension fields, description fields, and summary fields. Fields defined as dimensions are cross-indexed, so any dimension can be drilled into from any other quickly to retrieve the information most needed. The description fields carry additional information related to each dimension. This makes the data system easy to determine accurately and better displays the various data and relationships of the target set.
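A minimal sketch of the cross-indexing step: records from the target set are indexed by every combination of their dimension-field values, with a summary field aggregated per combination so any pair of dimensions can be queried directly. The field names and the sum aggregation are hypothetical.

```python
from collections import defaultdict

def build_data_system(records, dims, summary_field):
    """Cross-index records by their dimension values and aggregate the
    summary field per combination (the corrected statistical parameter)."""
    index = defaultdict(float)
    for r in records:
        key = tuple(r[d] for d in dims)
        index[key] += r[summary_field]
    return {"dimensions": dims, "summary": summary_field, "index": dict(index)}
```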
According to some embodiments of the present invention, while performing the level of safety monitoring corresponding to a vehicle's type, the background server updates the type of the unmanned vehicle based on the safety monitoring information; when it determines that the type of an unmanned vehicle has changed, it performs the level of safety monitoring corresponding to the new type.
Beneficial effects of the above technical solution: the background server can update the type of an unmanned vehicle according to the safety monitoring information and, when the type changes, perform the monitoring level corresponding to the new type, which allocates monitoring resources rationally while keeping the monitoring safe and accurate.
According to some embodiments of the present invention, the method further includes:
the background server receiving control verification information from the control terminal and sending it to each unmanned vehicle, which verifies the control verification information with a preset formula;
the quantities in the formula are: the plaintext information set for the control verification information in the corresponding unmanned vehicle; the decryption method in the corresponding unmanned vehicle; the control verification information; the verification result, where one outcome of the formula means the verification passes and the other means it fails; the permission information set in the corresponding unmanned vehicle; a set counting function; and a preset threshold whose value lies in the interval from 0 to 1.
When the verification is determined to pass, determining that the unmanned vehicle that passed is the target unmanned vehicle, and controlling the target unmanned vehicle to execute the control commands included in the control verification information;
when the verification is determined to fail, sending an error message to the control terminal.
Working principle and beneficial effects of the above technical solution: the background server receives the control verification information from the control terminal and sends it to each unmanned vehicle for verification; the vehicle that passes becomes the target unmanned vehicle and is controlled to execute the control commands included in the control verification information, while a failed verification returns an error message to the control terminal. Each unmanned vehicle passes verification only for its specific control verification information and only then executes the included control commands, which improves the security of the control terminal's control over the unmanned vehicles and avoids the safety hazards caused by a control command being sent in error and executed by a non-corresponding vehicle. The preset threshold can be set according to the security requirements.
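The original verification formula is given only through its variable descriptions, so the following is a hedged reconstruction, not the disclosed formula: the vehicle decrypts the control verification information, counts (with the set counting function) how much of the decrypted content lies in both its plaintext information set and its permission information set, and passes when that fraction reaches the preset threshold in (0, 1).

```python
def verify(control_info, decrypt, plaintext_set, permission_set, delta=0.8):
    """Assumed ratio form of the verification rule: pass when enough
    decrypted items are both known plaintext and permitted. delta is the
    preset threshold in the interval (0, 1)."""
    items = decrypt(control_info)     # the vehicle's own decryption method
    if not items:
        return False                  # nothing decryptable: verification fails
    matched = sum(1 for x in items
                  if x in plaintext_set and x in permission_set)
    return matched / len(items) >= delta   # set counting via len/sum
```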
As shown in Figure 3, an embodiment of the second aspect of the present invention provides a safety monitoring system for unmanned vehicles, comprising:
an establishing module, configured to establish a communication relationship between multiple unmanned vehicles in a preset area and a back-end server;
a perception module, configured to, while the multiple unmanned vehicles are driving in the preset area, perform environment perception on the interior of each unmanned vehicle to obtain first perception data, and perform environment perception on the exterior of each unmanned vehicle to obtain second perception data;
a back-end server, configured to determine the types of the multiple unmanned vehicles according to the first perception data and the second perception data, and to perform the corresponding level of safety monitoring according to those types.
Working principle of the above technical solution: establishing a communication relationship between the multiple unmanned vehicles in the preset area and the back-end server enables the server to obtain the initial monitoring information of those vehicles, which is acquired by the perception module installed on each unmanned vehicle. The preset area is the planned operating area of the unmanned vehicles.
In this embodiment, the first perception data is data acquired by environment perception of the interior of the unmanned vehicle, for example control-node operation information, vehicle state parameters, operation information of the in-vehicle equipment, and passenger behavior characteristics.
In this embodiment, the second perception data is data acquired by environment perception of the exterior of the unmanned vehicle, for example information on the obstacles around the vehicle and the collision risk with each obstacle, determined from external environment images and radar data.
Together, the first and second perception data provide comprehensive perception of both the interior and the exterior of the unmanned vehicle, giving the back-end server a complete picture from which to accurately determine the type of each vehicle.
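The disclosure leaves the type-determination model open. A minimal illustrative sketch, assuming a hand-set score over an interior anomaly count and an exterior collision risk (the feature names, weights, and thresholds are hypothetical, chosen only to show the idea of mapping fused perception data to a vehicle type):

```python
from dataclasses import dataclass


@dataclass
class Perception:
    interior_anomalies: int  # count of abnormal interior events (first perception data)
    collision_risk: float    # 0.0-1.0, highest risk among nearby obstacles (second perception data)


def classify_vehicle(p: Perception) -> int:
    """Return the vehicle type 1, 2, or 3 (type 1 needs the closest monitoring).

    The score combines interior and exterior perception; the weight and the
    cut-offs are illustrative placeholders, not values from the patent.
    """
    score = p.interior_anomalies * 0.3 + p.collision_risk
    if score >= 1.0:
        return 1
    if score >= 0.5:
        return 2
    return 3
```

A quiet vehicle with low collision risk would land in type 3 (lightest monitoring), while one with several interior anomalies or high collision risk would be promoted to type 1.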
In this embodiment, the unmanned vehicle types comprise a first vehicle type, a second vehicle type, and a third vehicle type, which are subject to level-one, level-two, and level-three safety monitoring respectively.
In this embodiment, the monitoring requirements of level-one, level-two, and level-three safety monitoring decrease in that order; correspondingly, level-one monitoring places the greatest load on the back-end server, level-two the next greatest, and level-three the least. For example, when level-one safety monitoring is performed on a first-type vehicle, its environment-perception data is collected at a monitoring interval of 1 s; for level-two monitoring of a second-type vehicle the interval is 2 s; and for level-three monitoring of a third-type vehicle the interval is 3 s.
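The 1 s / 2 s / 3 s monitoring intervals above can be illustrated with a small scheduling sketch (the function and data layout are assumptions for illustration, not the patent's implementation): the server polls type-1 vehicles three times as often as type-3 vehicles, which is where the load difference between monitoring levels comes from.

```python
import heapq

MONITOR_INTERVAL = {1: 1.0, 2: 2.0, 3: 3.0}  # seconds between polls, per vehicle type


def plan_polls(vehicle_types: dict, horizon: float) -> list:
    """Simulate the server's polling schedule up to `horizon` seconds.

    Returns a time-ordered list of (time, vehicle_id) poll events; vehicles
    under a stricter monitoring level appear proportionally more often.
    """
    # Seed the priority queue with each vehicle's first poll time.
    heap = [(MONITOR_INTERVAL[t], vid) for vid, t in vehicle_types.items()]
    heapq.heapify(heap)
    events = []
    while heap and heap[0][0] <= horizon:
        t, vid = heapq.heappop(heap)
        events.append((t, vid))
        # Re-schedule the vehicle one interval later.
        heapq.heappush(heap, (t + MONITOR_INTERVAL[vehicle_types[vid]], vid))
    return events
```

Over a 6-second window, a type-1 vehicle is polled six times while a type-3 vehicle is polled only twice, matching the stated load ordering of the three monitoring levels.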
Beneficial effects of the above technical solution: safety monitoring of multiple unmanned vehicles in the preset area is achieved, and because each vehicle receives the monitoring level corresponding to its type, the load on the back-end server is reduced and the utilization of monitoring resources is improved.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to encompass them as well.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310350094.4A CN116125996B (en) | 2023-04-04 | 2023-04-04 | Safety monitoring method and system for unmanned vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116125996A CN116125996A (en) | 2023-05-16 |
CN116125996B true CN116125996B (en) | 2023-06-27 |
Family
ID=86299358
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310350094.4A Active CN116125996B (en) | 2023-04-04 | 2023-04-04 | Safety monitoring method and system for unmanned vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116125996B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112437111A (en) * | 2020-10-13 | 2021-03-02 | 上海京知信息科技有限公司 | Vehicle-road cooperative system based on context awareness |
CN114464216A (en) * | 2022-02-08 | 2022-05-10 | 贵州翰凯斯智能技术有限公司 | Acoustic detection method and device in unmanned driving environment |
DE102020215333A1 (en) * | 2020-12-04 | 2022-06-09 | Zf Friedrichshafen Ag | Computer-implemented method and computer program for the weakly supervised learning of 3D object classifications for environment perception, regulation and/or control of an automated driving system, classification module and classification system |
CN115326131A (en) * | 2022-07-06 | 2022-11-11 | 江苏大块头智驾科技有限公司 | A method and system for intelligent analysis of road conditions in mines for unmanned driving |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107976989A (en) * | 2017-10-25 | 2018-05-01 | 中国第汽车股份有限公司 | Comprehensive vehicle intelligent safety monitoring system and monitoring method |
CN109991971A (en) * | 2017-12-29 | 2019-07-09 | 长城汽车股份有限公司 | Automatic driving vehicle and automatic driving vehicle management system |
US11392131B2 (en) * | 2018-02-27 | 2022-07-19 | Nauto, Inc. | Method for determining driving policy |
CN108922188B (en) * | 2018-07-24 | 2020-12-29 | 河北德冠隆电子科技有限公司 | Radar tracking and positioning four-dimensional live-action traffic road condition perception early warning monitoring management system |
CN111240328B (en) * | 2020-01-16 | 2020-12-25 | 中智行科技有限公司 | Vehicle driving safety monitoring method and device and unmanned vehicle |
JP7167958B2 (en) * | 2020-03-26 | 2022-11-09 | 株式会社デンソー | Driving support device, driving support method, and driving support program |
EP4128028A1 (en) * | 2020-03-31 | 2023-02-08 | Teledyne FLIR Detection, Inc. | User-in-the-loop object detection and classification systems and methods |
CN111862389B (en) * | 2020-07-21 | 2022-10-21 | 武汉理工大学 | An intelligent navigation perception and augmented reality visualization system |
CN113968245A (en) * | 2021-04-15 | 2022-01-25 | 上海丰豹商务咨询有限公司 | In-vehicle intelligent unit and control method suitable for cooperative autonomous driving system |
CN113741485A (en) * | 2021-06-23 | 2021-12-03 | 阿波罗智联(北京)科技有限公司 | Control method and device for cooperative automatic driving of vehicle and road, electronic equipment and vehicle |
CN114299473A (en) * | 2021-12-24 | 2022-04-08 | 杭州电子科技大学 | Driver behavior identification method based on multi-source information fusion |
CN115618932A (en) * | 2022-09-23 | 2023-01-17 | 清华大学 | Traffic incident prediction method, device and electronic equipment based on networked automatic driving |
CN115278103B (en) * | 2022-09-26 | 2022-12-20 | 合肥岭雁科技有限公司 | Security monitoring image compensation processing method and system based on environment perception |
Also Published As
Publication number | Publication date |
---|---|
CN116125996A (en) | 2023-05-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PP01 | Preservation of patent right | Effective date of registration: 20250306; granted publication date: 20230627 ||