CN110091875A - Deep-learning intelligent driving environment perception system based on the Internet of Things - Google Patents
- Publication number
- CN110091875A CN110091875A CN201910396591.1A CN201910396591A CN110091875A CN 110091875 A CN110091875 A CN 110091875A CN 201910396591 A CN201910396591 A CN 201910396591A CN 110091875 A CN110091875 A CN 110091875A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096708—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
- G08G1/096725—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0062—Adapting control system settings
- B60W2050/0075—Automatic parameter input, automatic initialising or calibrating means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2556/00—Input parameters relating to data
- B60W2556/45—External transmission of data to or from the vehicle
- B60W2556/55—External transmission of data to or from the vehicle using telemetry
Abstract
The invention discloses a deep-learning intelligent driving environment perception system based on the Internet of Things, belonging to the field of intelligent perception systems. The system comprises a perception system and intelligent driving vehicles. The perception system comprises a perception layer and a cognition layer, while each intelligent driving vehicle carries a decision layer and a control layer. The perception layer collects data; the decision layer processes the information delivered by the cognition layer together with the planned route using algorithms and outputs speed- and direction-adjustment commands to the control layer; the control layer receives those commands and operates the vehicle's brakes, accelerator, and gears. The scheme improves the way unmanned driving is realized: the perception system detects all moving and stationary obstacles on the road and sends their data to every intelligent driving vehicle travelling there, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
Description
Technical field
The invention relates to the field of intelligent perception systems, and more specifically to a deep-learning intelligent driving environment perception system based on the Internet of Things.
Background art
As the future research direction of the automobile, unmanned driving has a profound influence on the automotive industry and even on transportation as a whole. The advent of unmanned vehicles will free people's hands, reduce the frequency of traffic accidents, and protect people's safety. Meanwhile, with breakthroughs and continuous progress in core technologies such as artificial intelligence and sensor-based detection, unmanned driving is bound to become more intelligent, and the industrialization of unmanned vehicles will become feasible.
With unmanned driving technology, and in particular the entry of Internet companies and non-traditional automobile enterprises into this field, the technology is heading into a "red sea" of competition before it has even been widely deployed. The question, however, is whether such intelligent, robot-like driverless cars can thoroughly improve people's ability to travel by car, the way the automobile once replaced the horse-drawn carriage. The main factor behind the poor road traffic conditions in China's large and medium-sized cities is the sheer number of vehicles, and under our national conditions human factors account for a considerable share of accidents and congestion. A large number of traffic accidents such as rear-end collisions and scrapes can be attributed to reliance on the driver's own driving: an "independent behavior" mode of discover-judge-act within the driver's visual range. Accidents then arise from factors such as late discovery, late reaction, or insufficient reaction time. An intelligent unmanned-driving mode that still follows this "individual independent behavior" pattern naturally inherits the same problems; they are not fundamentally solved, so in theory the possibility of such accidents remains. At the same time, the two main issues currently restricting the mass production of unmanned vehicles are technical difficulty and cost. The implementation of unmanned driving therefore needs to be improved so as to raise its safety and stability while lowering technical difficulty and production cost.
Summary of the invention
1. Technical problems to be solved
In view of the problems existing in the prior art, the purpose of the present invention is to provide a deep-learning intelligent driving environment perception system based on the Internet of Things. It improves the way unmanned driving is realized: the perception system detects all moving and stationary obstacles on the road and sends their data to every intelligent driving vehicle travelling there, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
2. Technical solution
To solve the above problems, the present invention adopts the following technical solution.
The deep-learning intelligent driving environment perception system based on the Internet of Things comprises a perception system and intelligent driving vehicles. The perception system comprises a perception layer and a cognition layer, while each intelligent driving vehicle carries a decision layer and a control layer. The perception layer collects data and includes a radar unit, an inertial navigation unit, a positioning unit, and a camera unit. The cognition layer analyzes the data. The decision layer processes the information delivered by the cognition layer together with the planned route using algorithms, and outputs speed- and direction-adjustment commands to the control layer. The control layer receives the commands of the decision layer and operates the vehicle's brakes, accelerator, and gears. The scheme improves the way unmanned driving is realized: the perception system detects all moving and stationary obstacles on the road and sends their data to every intelligent driving vehicle travelling there, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
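The four-layer division described above can be sketched as a simple data flow. All class names, field names, and thresholds below are illustrative assumptions chosen for exposition; the patent does not specify an API:

```python
# Minimal sketch of the perception -> cognition -> decision -> control flow.
# The Obstacle fields and the 30 m / 2 m lane thresholds are assumed values.
from dataclasses import dataclass

@dataclass
class Obstacle:                 # produced by the roadside perception layer
    x: float                    # distance ahead along the road, m
    y: float                    # lateral offset from the lane center, m
    speed: float                # m/s
    moving: bool

def cognition_layer(raw_obstacles):
    """Cognition layer: keep detections that are moving or near the roadway."""
    return [o for o in raw_obstacles if o.moving or abs(o.y) < 5.0]

def decision_layer(obstacles, ego_speed, target_speed=15.0):
    """Decision layer: turn obstacle information into a speed command."""
    ahead = [o for o in obstacles if 0.0 < o.x < 30.0 and abs(o.y) < 2.0]
    if ahead:
        return {"throttle": 0.0, "brake": 0.6}   # obstacle in lane: brake
    if ego_speed < target_speed:
        return {"throttle": 0.3, "brake": 0.0}   # clear road: speed up
    return {"throttle": 0.0, "brake": 0.0}       # hold current speed

def control_layer(cmd):
    """Control layer: map the command onto brake/accelerator actuation."""
    return ("BRAKE" if cmd["brake"] > 0 else
            "ACCELERATE" if cmd["throttle"] > 0 else "HOLD")
```

The point of the split is that only the decision and control layers live on the vehicle; the perception and cognition layers run on roadside infrastructure and feed every nearby vehicle.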
Further, the radar unit, inertial navigation unit, positioning unit, and camera unit can be mounted on the street lamps on both sides of the road. Using the existing street-lamp equipment for the installation and design of the perception system requires no additional large-scale hardware facilities, which saves a large amount of hardware, reduces cost, and occupies no extra space. The cognition layer covers the analysis of pedestrians, vehicles, traffic objects, traffic signs, and lane lines.
Further, the radar unit includes a lidar and a millimeter-wave radar. The millimeter-wave radar measures the distance, angle, and relative speed of surrounding vehicles by transmitting and receiving radio waves, and is currently in wide use as an automotive radar. It is not easily affected by bad weather such as heavy fog, rain, and snow, or by dust and dirt, and can detect vehicles stably. In this system the millimeter-wave radar, based on a multi-target detection algorithm, is used to detect the distance and speed of moving obstacles on the roadway and sidewalk within a fixed area.
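As background for the millimeter-wave measurements mentioned above, range follows from the round-trip delay of the radio wave and relative (radial) speed from its Doppler shift. This is textbook radar physics rather than a procedure taken from the patent, and the 77 GHz carrier is an assumed automotive-band value:

```python
# Range and radial speed from round-trip delay and Doppler shift.
C = 299_792_458.0            # speed of light, m/s

def range_from_delay(round_trip_s):
    """Target range from the round-trip time of the reflected wave."""
    return C * round_trip_s / 2.0

def radial_speed_from_doppler(doppler_hz, carrier_hz=77e9):
    """Relative radial speed from the Doppler shift (assumed 77 GHz band)."""
    wavelength = C / carrier_hz
    return doppler_hz * wavelength / 2.0
```

For example, a reflection arriving after the time a wave needs to travel 100 m corresponds to a target 50 m away.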
Further, the lidar is stationary and the environment it scans is fixed. The lidar first acquires environment data and stores it in a computer in array form, then preprocesses the acquired data to remove information such as trees and the ground. Segmentation and clustering of the environment data with a non-planar algorithm is performed simultaneously on the lidar's range information and reflection-intensity information, and the circumscribed-rectangle contour features of obstacles are extracted. The lidar uses a multiple-hypothesis tracking model algorithm to associate the obstacle information of two consecutive frames, and a Kalman filter algorithm to continuously predict and track dynamic obstacles.
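The predict-and-track step named above (Kalman filtering of dynamic obstacles) can be illustrated with a minimal constant-velocity Kalman filter in one dimension. The motion model, noise values, and class name are assumptions for illustration; the patent gives no concrete parameters:

```python
# One-dimensional constant-velocity Kalman filter: predict each obstacle's
# next position, then correct with the new lidar measurement.
class CVKalman1D:
    """State = (position, velocity); measurement = position only."""
    def __init__(self, z0, dt=0.1, q=0.01, r=0.25):
        self.dt = dt
        self.x = [z0, 0.0]                      # state estimate
        self.P = [[10.0, 0.0], [0.0, 10.0]]     # state covariance
        self.q, self.r = q, r                   # process / measurement noise

    def predict(self):
        dt = self.dt
        # x <- F x, with F = [[1, dt], [0, 1]]
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        (p00, p01), (p10, p11) = self.P
        # P <- F P F^T + Q
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]

    def update(self, z):
        # Innovation and gain for measurement matrix H = [1, 0]
        y = z - self.x[0]
        s = self.P[0][0] + self.r
        k0 = self.P[0][0] / s
        k1 = self.P[1][0] / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        (p00, p01), (p10, p11) = self.P
        # P <- (I - K H) P
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x
```

Fed a sequence of positions from an obstacle moving at constant speed, the filter's velocity estimate converges to the true speed, which is what allows continuous prediction between lidar frames.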
Further, wireless communication is used between the perception system and the intelligent driving vehicles. The perception system sends the recognized data to all nearby intelligent driving vehicles, so that each vehicle has a clear picture of all the obstacles around it as well as the other vehicles and their headings and speeds.
Further, the sensor data fusion of the radar unit, inertial navigation unit, positioning unit, and camera unit covers spatial fusion, temporal fusion, and the fusion algorithm itself. Establishing accurate lidar, three-dimensional world, and millimeter-wave coordinate systems is the key to the spatial fusion of multi-sensor data: spatial fusion of the lidar and the millimeter-wave radar means converting the measurements from the different sensor coordinate systems into one common coordinate system. Besides spatial fusion, the sensors must also acquire data synchronously in time to achieve temporal fusion. The two sensors have different sampling frequencies; to guarantee data reliability, the low-rate sensor serves as the reference: each time the low-frequency sensor captures a frame, the most recently buffered frame of the high-frequency sensor is selected, completing one jointly sampled frame of radar-vision fused data and thereby keeping the millimeter-wave radar data and the camera data synchronized in time. The core of sensor data fusion still lies in choosing a suitable fusion algorithm; this system's sensor data fusion uses the extended Kalman filter algorithm.
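The temporal-fusion rule described above — the low-rate sensor drives the fusion clock and each of its frames is paired with the most recently buffered high-rate frame — can be sketched as follows. Sensor rates and names are illustrative assumptions:

```python
# Pair each low-rate (e.g. camera) frame with the freshest buffered
# high-rate (e.g. radar) frame captured at or before it.
import bisect

class HighRateBuffer:
    """Keeps (timestamp, frame) pairs from the high-frequency sensor."""
    def __init__(self):
        self.stamps, self.frames = [], []

    def push(self, t, frame):
        self.stamps.append(t)       # timestamps arrive in increasing order
        self.frames.append(frame)

    def latest_before(self, t):
        """Most recent high-rate frame captured at or before time t."""
        i = bisect.bisect_right(self.stamps, t) - 1
        return None if i < 0 else self.frames[i]

def fuse(low_rate_frames, high_rate_buffer):
    """low_rate_frames: list of (timestamp, frame) from the slow sensor."""
    return [(frame, high_rate_buffer.latest_before(t))
            for t, frame in low_rate_frames]
```

Because the slow sensor sets the pace, every fused pair is at most one high-rate sampling interval apart in time, which is the synchronization guarantee the scheme relies on.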
Further, the radar unit, inertial navigation unit, positioning unit, and camera unit are installed on adjacent street lamps at a fixed spacing, making the deployment rationalized and standardized.
Further, the radar units, inertial navigation units, positioning units, and camera units on adjacent street lamps are connected in parallel, so that when a single electrical component fails it does not affect the normal operation of the other components, guaranteeing the stability of the whole system.
3. Beneficial effects
Compared with the prior art, the advantages of the present invention are:
(1) The scheme improves the way unmanned driving is realized: the perception system detects all moving and stationary obstacles on the road and sends their data to every intelligent driving vehicle travelling there, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
(2) The radar unit, inertial navigation unit, positioning unit, and camera unit can be mounted on the street lamps on both sides of the road. Using the existing street-lamp equipment for the installation and design of the perception system requires no additional large-scale hardware facilities, which saves a large amount of hardware, reduces cost, and occupies no extra space. The cognition layer covers the analysis of pedestrians, vehicles, traffic objects, traffic signs, and lane lines.
(3) The radar unit includes a lidar and a millimeter-wave radar. The millimeter-wave radar measures the distance, angle, and relative speed of surrounding vehicles by transmitting and receiving radio waves, and is currently in wide use as an automotive radar. It is not easily affected by bad weather such as heavy fog, rain, and snow, or by dust and dirt, and can detect vehicles stably. In this system the millimeter-wave radar, based on a multi-target detection algorithm, is used to detect the distance and speed of moving obstacles on the roadway and sidewalk within a fixed area.
(4) The lidar is stationary and the environment it scans is fixed. The lidar first acquires environment data and stores it in a computer in array form, then preprocesses the acquired data to remove information such as trees and the ground. Segmentation and clustering of the environment data with a non-planar algorithm is performed simultaneously on the lidar's range information and reflection-intensity information, and the circumscribed-rectangle contour features of obstacles are extracted. The lidar uses a multiple-hypothesis tracking model algorithm to associate the obstacle information of two consecutive frames, and a Kalman filter algorithm to continuously predict and track dynamic obstacles.
(5) Wireless communication is used between the perception system and the intelligent driving vehicles. The perception system sends the recognized data to all nearby intelligent driving vehicles, so that each vehicle has a clear picture of all the obstacles around it as well as the other vehicles and their headings and speeds.
(6) The sensor data fusion of the radar unit, inertial navigation unit, positioning unit, and camera unit covers spatial fusion, temporal fusion, and the fusion algorithm itself. Establishing accurate lidar, three-dimensional world, and millimeter-wave coordinate systems is the key to the spatial fusion of multi-sensor data: spatial fusion of the lidar and the millimeter-wave radar means converting the measurements from the different sensor coordinate systems into one common coordinate system. Besides spatial fusion, the sensors must also acquire data synchronously in time to achieve temporal fusion. The two sensors have different sampling frequencies; to guarantee data reliability, the low-rate sensor serves as the reference: each time the low-frequency sensor captures a frame, the most recently buffered frame of the high-frequency sensor is selected, completing one jointly sampled frame of radar-vision fused data and thereby keeping the millimeter-wave radar data and the camera data synchronized in time. The core of sensor data fusion still lies in choosing a suitable fusion algorithm; this system's sensor data fusion uses the extended Kalman filter algorithm.
(7) The radar unit, inertial navigation unit, positioning unit, and camera unit are installed on adjacent street lamps at a fixed spacing, making the deployment rationalized and standardized.
(8) The radar units, inertial navigation units, positioning units, and camera units on adjacent street lamps are connected in parallel, so that when a single electrical component fails it does not affect the normal operation of the other components, guaranteeing the stability of the whole system.
Description of the drawings
Fig. 1 is a schematic diagram of the environment perception system of the present invention;
Fig. 2 is a diagram of the unmanned driving scheme of the present invention;
Fig. 3 is a flowchart of the lidar recognition algorithm of the present invention;
Fig. 4 is a high-precision electronic construction diagram of the neighboring area according to the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
In the description of the present invention, it should be noted that orientations or positional relationships indicated by terms such as "upper", "lower", "inner", "outer", and "top/bottom" are based on the orientations or positional relationships shown in the drawings. They are used only to facilitate and simplify the description of the invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the invention. In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless otherwise expressly specified and limited, terms such as "installed", "provided with", "sleeved/connected", and "connected" are to be understood broadly. For example, "connected" may mean fixedly connected, detachably connected, or integrally connected; mechanically connected or electrically connected; directly connected or indirectly connected through an intermediate medium; or an internal communication between two elements. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific situation.
Embodiment 1:
Referring to Fig. 1, the deep-learning intelligent driving environment perception system based on the Internet of Things comprises a perception system and intelligent driving vehicles. The perception system comprises a perception layer and a cognition layer, while each intelligent driving vehicle carries a decision layer and a control layer. The perception layer collects data and includes a radar unit, an inertial navigation unit, a positioning unit, and a camera unit. An external computer serves as the cognition layer for data analysis. The decision layer on the intelligent driving vehicle processes the information delivered by the cognition layer together with the planned route using algorithms, and outputs speed- and direction-adjustment commands to the control layer. The control layer on the vehicle receives the commands of the decision layer and operates the vehicle's brakes, accelerator, and gears. The scheme improves the way unmanned driving is realized and improves both the perception system and the intelligent driving vehicles so that the two cooperate seamlessly, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
The radar unit, inertial navigation unit, positioning unit, and camera unit can be mounted on the street lamps on both sides of the road. Using the existing street-lamp equipment for the installation and design of the perception system requires no additional large-scale hardware facilities, which saves a large amount of hardware, reduces cost, and occupies no extra space. The cognition layer covers the analysis of pedestrians, vehicles, traffic objects, traffic signs, and lane lines.
The radar unit includes a lidar and a millimeter-wave radar; referring to Fig. 3, the millimeter-wave radar measures the distance, angle, and relative speed of surrounding vehicles by transmitting and receiving radio waves, and is currently in wide use as an automotive radar. It is not easily affected by bad weather such as heavy fog, rain, and snow, or by dust and dirt, and can detect vehicles stably. In this system the millimeter-wave radar, based on a multi-target detection algorithm, is used to detect the distance and speed of moving obstacles on the roadway and sidewalk within a fixed area.
The lidar is stationary and the environment it scans is fixed. The lidar first acquires environment data and stores it in a computer in array form, then preprocesses the acquired data to remove information such as trees and the ground. Segmentation and clustering of the environment data with a non-planar algorithm is performed simultaneously on the lidar's range information and reflection-intensity information, and the circumscribed-rectangle contour features of obstacles are extracted. The lidar uses a multiple-hypothesis tracking model algorithm to associate the obstacle information of two consecutive frames, and a Kalman filter algorithm to continuously predict and track dynamic obstacles.
Referring to Fig. 1, wireless communication is used between the perception system and the intelligent driving vehicles. The perception system sends the recognized data to all nearby intelligent driving vehicles, so that each vehicle has a clear picture of all the obstacles around it as well as the other vehicles and their headings and speeds.
The sensor data fusion of the radar unit, inertial navigation unit, positioning unit, and camera unit covers spatial fusion, temporal fusion, and the fusion algorithm itself. Establishing accurate lidar, three-dimensional world, and millimeter-wave coordinate systems is the key to the spatial fusion of multi-sensor data: spatial fusion of the lidar and the millimeter-wave radar means converting the measurements from the different sensor coordinate systems into one common coordinate system. Besides spatial fusion, the sensors must also acquire data synchronously in time to achieve temporal fusion. The two sensors have different sampling frequencies; to guarantee data reliability, the low-rate sensor serves as the reference: each time the low-frequency sensor captures a frame, the most recently buffered frame of the high-frequency sensor is selected, completing one jointly sampled frame of radar-vision fused data and thereby keeping the millimeter-wave radar data and the camera data synchronized in time. The core of sensor data fusion still lies in choosing a suitable fusion algorithm; this system's sensor data fusion uses the extended Kalman filter algorithm.
Referring to Figure 4, the radar unit, inertial navigation unit, positioning unit and camera unit are mounted on adjacent street lights at fixed intervals, giving a rationalized, standardized placement design.
The radar unit, inertial navigation unit, positioning unit and camera unit on adjacent street lights are wired in parallel, so the failure of any single powered component does not affect the normal operation of the others, which simplifies maintenance. The positioning unit also lets maintenance personnel locate a fault quickly and accurately, speeding up repairs and keeping the operation of the whole system stable.
Compared with conventional technology, this scheme improves the way unmanned driving is realized, upgrading both the perception system and the intelligent driving vehicles. The perception system detects all moving and stationary obstacles on the road and sends their data to every intelligent driving vehicle travelling on it, improving the safety and stability of unmanned driving while reducing technical difficulty and production cost.
The above is only a preferred embodiment of the present invention; the scope of protection of the present invention is not limited to it. Any equivalent substitution or modification made by a person skilled in the art, within the technical scope disclosed by the present invention and based on its technical solution and inventive concept, shall fall within the scope of protection of the present invention.
Claims (10)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910396591.1A CN110091875A (en) | 2019-05-14 | 2019-05-14 | Deep learning type intelligent driving context aware systems based on Internet of Things |
PCT/CN2020/077066 WO2020228393A1 (en) | 2019-05-14 | 2020-02-28 | Deep learning type intelligent driving environment perception system based on internet of things |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110091875A true CN110091875A (en) | 2019-08-06 |
Family
ID=67447890
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910396591.1A Pending CN110091875A (en) | 2019-05-14 | 2019-05-14 | Deep learning type intelligent driving context aware systems based on Internet of Things |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110091875A (en) |
WO (1) | WO2020228393A1 (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102135777A (en) * | 2010-12-14 | 2011-07-27 | 天津理工大学 | Vehicle-mounted infrared tracking system |
CN105866790A (en) * | 2016-04-07 | 2016-08-17 | 重庆大学 | Laser radar barrier identification method and system taking laser emission intensity into consideration |
CN106908783A (en) * | 2017-02-23 | 2017-06-30 | 苏州大学 | Obstacle detection method based on multi-sensor information fusion |
CN107193012A (en) * | 2017-05-05 | 2017-09-22 | 江苏大学 | Intelligent vehicle laser radar multiple-moving target tracking method based on IMM MHT algorithms |
CN107609522A (en) * | 2017-09-19 | 2018-01-19 | 东华大学 | A kind of information fusion vehicle detecting system based on laser radar and machine vision |
CN108196535A (en) * | 2017-12-12 | 2018-06-22 | 清华大学苏州汽车研究院(吴江) | Automated driving system based on enhancing study and Multi-sensor Fusion |
CN108417087A (en) * | 2018-02-27 | 2018-08-17 | 浙江吉利汽车研究院有限公司 | System and method for safe passage of vehicles |
CN108458745A (en) * | 2017-12-23 | 2018-08-28 | 天津国科嘉业医疗科技发展有限公司 | A kind of environment perception method based on intelligent detection equipment |
CN108845579A (en) * | 2018-08-14 | 2018-11-20 | 苏州畅风加行智能科技有限公司 | A kind of automated driving system and its method of port vehicle |
CN109171684A (en) * | 2018-08-30 | 2019-01-11 | 上海师范大学 | A kind of automatic health monitor system based on wearable sensors and smart home |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4887849B2 (en) * | 2006-03-16 | 2012-02-29 | 日産自動車株式会社 | Vehicle obstacle detection device, road obstacle detection method, and vehicle with road obstacle detection device |
US10209717B2 (en) * | 2015-02-06 | 2019-02-19 | Aptiv Technologies Limited | Autonomous guidance system |
CN108010360A (en) * | 2017-12-27 | 2018-05-08 | 中电海康集团有限公司 | A kind of automatic Pilot context aware systems based on bus or train route collaboration |
CN108646739A (en) * | 2018-05-14 | 2018-10-12 | 北京智行者科技有限公司 | A kind of sensor information fusion method |
CN110091875A (en) * | 2019-05-14 | 2019-08-06 | 长沙理工大学 | Deep learning type intelligent driving context aware systems based on Internet of Things |
2019
- 2019-05-14 CN CN201910396591.1A patent/CN110091875A/en active Pending

2020
- 2020-02-28 WO PCT/CN2020/077066 patent/WO2020228393A1/en active Application Filing
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020228393A1 (en) * | 2019-05-14 | 2020-11-19 | 长沙理工大学 | Deep learning type intelligent driving environment perception system based on internet of things |
CN111326002A (en) * | 2020-02-26 | 2020-06-23 | 公安部交通管理科学研究所 | A prediction method, device and system for environmental perception of an autonomous vehicle |
CN113442913A (en) * | 2020-03-27 | 2021-09-28 | 丰田自动车株式会社 | Automatic driving method and automatic driving platform of vehicle and vehicle system |
CN111833631A (en) * | 2020-06-24 | 2020-10-27 | 武汉理工大学 | Target data processing method, system and storage medium based on vehicle-road collaboration |
CN111833631B (en) * | 2020-06-24 | 2021-10-26 | 武汉理工大学 | Target data processing method, system and storage medium based on vehicle-road cooperation |
CN112435466A (en) * | 2020-10-23 | 2021-03-02 | 江苏大学 | Method and system for predicting take-over time of CACC vehicle changing into traditional vehicle under mixed traffic flow environment |
CN112435466B (en) * | 2020-10-23 | 2022-03-22 | 江苏大学 | Prediction method and system of takeover time for CACC vehicles to degenerate into traditional vehicles in mixed traffic flow environment |
CN112249035B (en) * | 2020-12-16 | 2021-03-16 | 国汽智控(北京)科技有限公司 | Automatic driving method, device and equipment based on general data flow architecture |
CN112249035A (en) * | 2020-12-16 | 2021-01-22 | 国汽智控(北京)科技有限公司 | Automatic driving method, device and device based on general data flow architecture |
CN113490178A (en) * | 2021-06-18 | 2021-10-08 | 天津大学 | Intelligent networking vehicle multistage cooperative sensing system |
CN113734197A (en) * | 2021-09-03 | 2021-12-03 | 合肥学院 | Unmanned intelligent control scheme based on data fusion |
CN113911139A (en) * | 2021-11-12 | 2022-01-11 | 湖北芯擎科技有限公司 | Vehicle control method and device and electronic equipment |
CN113911139B (en) * | 2021-11-12 | 2023-02-28 | 湖北芯擎科技有限公司 | Vehicle control method and device and electronic equipment |
CN114056351A (en) * | 2021-11-26 | 2022-02-18 | 文远苏行(江苏)科技有限公司 | Automatic driving method and device |
CN114056351B (en) * | 2021-11-26 | 2024-02-02 | 文远苏行(江苏)科技有限公司 | Automatic driving method and device |
CN114291114A (en) * | 2022-01-05 | 2022-04-08 | 天地科技股份有限公司 | Vehicle control system and method |
CN114911246A (en) * | 2022-06-29 | 2022-08-16 | 江苏高瞻数据科技有限公司 | Intelligent unmanned vehicle driving system based on park environment |
CN120105922A (en) * | 2025-05-07 | 2025-06-06 | 名商科技有限公司 | A vehicle intelligent driving decision optimization method based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
WO2020228393A1 (en) | 2020-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110091875A (en) | Deep learning type intelligent driving context aware systems based on Internet of Things | |
CN110532896B (en) | Road vehicle detection method based on fusion of road side millimeter wave radar and machine vision | |
CN113276769B (en) | Vehicle blind area anti-collision early warning system and method | |
WO2021259344A1 (en) | Vehicle detection method and device, vehicle, and storage medium | |
WO2019165737A1 (en) | Safe passing system and method for vehicle | |
CN111880174A (en) | A roadside service system for supporting automatic driving control decision and its control method | |
CN114415171A (en) | A drivable area detection method based on 4D millimeter wave radar | |
CN114385661A (en) | High-precision map updating system based on V2X technology | |
CN103879404B (en) | Anti-collision warning method and device capable of tracking moving objects | |
CN110446278A (en) | Intelligent driving automobile sensor blind area method of controlling security and system based on V2I | |
CN113568002A (en) | Rail transit active obstacle detection device based on laser and image data fusion | |
CN113885062A (en) | V2X-based data acquisition and fusion equipment, method and system | |
CN103389733A (en) | Vehicle line walking method and system based on machine vision | |
CN212322114U (en) | Environment sensing and road environment crack detection system for automatic driving vehicle | |
CN108986510A (en) | A kind of local dynamic map of intelligence towards crossing realizes system and implementation method | |
CN102431556A (en) | Integrated driver early warning device based on vehicle-road cooperation | |
CN111796299A (en) | An obstacle perception method, perception device and driverless sweeper | |
KR20180065196A (en) | Apparatus and method for providing self-driving information in intersection without signal lamp | |
CN101241642A (en) | Vehicle-mounted device for mobile traffic flow collection dedicated to floating vehicles | |
CN113837127A (en) | A map and V2V data fusion model and method, system and medium | |
CN115257784A (en) | Vehicle-road cooperative system based on 4D millimeter wave radar | |
CN104504363A (en) | Real-time identification method of sidewalk on the basis of time-space correlation | |
CN114445733A (en) | A night road information perception system and information fusion method based on machine vision, radar and WiFi | |
CN116052472A (en) | Vehicle-mounted V2X collision early warning method based on road perception information fusion | |
CN211742265U (en) | Intelligent roadside system for intelligently driving bus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190806 |