CN110264586A - Road driving data collection, analysis and upload method for an L3-level automated driving system - Google Patents
Road driving data collection, analysis and upload method for an L3-level automated driving system
- Publication number: CN110264586A
- Application number: CN201910454082.XA
- Authority: CN (China)
- Prior art keywords: data, output, vehicle, semantic, driving
- Prior art date: 2019-05-28
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
- G01S19/47—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C5/00—Registering or indicating the working of vehicles
- G07C5/008—Registering or indicating the working of vehicles communicating information to a remotely located station
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C5/00—Registering or indicating the working of vehicles
- G07C5/08—Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
- G07C5/0841—Registering performance data
- G07C5/085—Registering performance data using electronic data carriers
- G07C5/0866—Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/40—Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
- H04W4/44—Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Automation & Control Theory (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Traffic Control Systems (AREA)
- Navigation (AREA)
Abstract
The invention relates to a method for collecting, analyzing and uploading road driving data of an L3-level automated driving system, comprising the following steps: vehicle-side driving data collection, including acquisition and synchronization of the driving data as well as encoding and buffering of the driving data; online data analysis of the collected vehicle-side driving data, including definition of the automated driving system's intermediate-result output interfaces, target matching consistency detection, localization landmark semantic output, extreme vehicle operation detection and human-machine decision consistency detection; data communication, preparing the vehicle-side driving data for upload; and reception and storage of the vehicle-side driving data on the server side. The invention meets the development and verification needs of the perception, localization and planning/decision modules of an L3-level automated driving system; extreme-scenario detection for automated driving greatly reduces the bandwidth occupied by data recording and uploading; and the corresponding algorithm modules can be run online, maximizing data mining and verification on the front-end platform and greatly reducing the human resources required for post-processing data screening.
Description
Technical Field
The invention relates to automotive automated driving systems, and in particular to a method for collecting, analyzing and uploading road driving data of an L3-level automated driving system.
Background Art
Intelligence is one of the major trends in today's automotive industry, and intelligent driving technologies and systems are expected to develop rapidly worldwide between 2020 and 2030. Automated driving systems are graded into six levels, L0 to L5, from the lowest to the highest degree of intelligence; L3 is defined as allowing the system to drive the vehicle independently in place of the driver within defined driving scenarios, for example relieving the driving burden in highway scenarios. L1 and L2 advanced driver-assistance systems have already been deployed in some mass-produced models, whereas L3 automated driving systems are still at the prototype development stage and require extensive testing and verification.
Compared with L1 and L2 driver-assistance systems, L3 automated driving systems face more complex application scenarios and need a much larger volume of driving data for development and verification. Machine learning methods are used more widely in L3 and higher-level systems, so the demand for valid data from the corresponding driving scenarios multiplies, and the data transmission and computation loads grow severalfold. Extracting the valid data needed by an L3 automated driving system from road driving scenarios therefore has significant practical value for the industrialization of such systems. Uploading the operating-scene data of an L3 automated driving system in real time is not feasible even in the 5G era: it would require enormous transmission bandwidth as well as substantial follow-up manpower to classify and verify the recorded driving scenes.
Existing on-board data recording systems are mostly based on CAN-bus data recording. Most of them record locally on the vehicle, with very limited data bandwidth and storage space, so they offer little reference value for the data recording needs of automated driving systems. Some aftermarket driving recorders (for fleet and commercial vehicles) can record multiple video streams (inside and outside the vehicle) and associate them with some CAN-bus vehicle data (vehicle speed, etc.) and GPS information. However, such devices either lack or do not exploit front-end computing power, and the data they record require extensive post-processing before they can be used for parts of vision-system development and testing (an offline development mode).
The data storage and transmission schemes of existing on-board driving recorders have the following shortcomings: (i) they cannot fully record the development and test data required by L3 and higher-level automated driving systems; (ii) they record large amounts of redundant data that contribute little to system development and verification, consuming unnecessary transmission and storage resources; (iii) screening and extracting valid data offline consumes substantial manpower, and problems can only be verified offline; (iv) aftermarket devices differ from the actual in-vehicle computing platform in computing characteristics and capability, so algorithm modules cannot be iteratively tested online.
Summary of the Invention
To solve the above technical problems, the invention provides a method for collecting, analyzing and uploading road driving data of an L3-level automated driving system that achieves the following goals: (i) it meets the development and verification needs of the perception, localization and planning/decision modules of an L3-level automated driving system; (ii) extreme-scenario detection for automated driving greatly reduces the bandwidth occupied by data recording and uploading; (iii) the corresponding algorithm modules can be run online, maximizing data mining and verification on the front-end platform and greatly reducing the human resources required for post-processing data screening.
The above technical problems are mainly solved by the following technical solution. The method for collecting, analyzing and uploading road driving data of an L3-level automated driving system according to the invention comprises the following steps:
① Vehicle-side driving data collection;
② Online data analysis of the collected vehicle-side driving data;
③ Data communication, preparing the vehicle-side driving data for upload;
④ The server side receives and stores the vehicle-side driving data.
The invention is based on on-board WiFi or 4G network communication together with edge intelligent computing, and satisfies the data recording needs of the development and verification of L3-level automated driving functions. A vehicle equipped with an L3-level automated driving system carries cameras (vision), millimeter-wave radars, an integrated navigation device and an in-vehicle data processing terminal; the in-vehicle data processing terminal consists of three parts: a data acquisition module, an intelligent analysis module and a communication module. The data that the invention can automatically collect and upload include: localization landmark data, accident scene data, extreme vehicle operation scene data, scene data with abnormal perception matching, scene data with mismatched human-machine responses, and custom-requested data. The invention meets the development and verification needs of the perception, localization and planning/decision modules of an L3-level automated driving system; extreme-scenario detection greatly reduces the bandwidth occupied by data recording and uploading; and the corresponding algorithm modules can be run online, maximizing data mining and verification on the front-end platform and greatly reducing the human resources required for post-processing data screening.
Preferably, step ① comprises the following steps:
(11) Data acquisition and synchronization: software synchronization is used, with the acquired GPS clock or the on-board terminal's system clock synchronizing the individual driving data streams, to construct a driving-data structure containing time, image data, raw radar data, integrated navigation data and vehicle dynamics parameters;
(12) Data encoding and buffering: image data are encoded as H.264 or H.265 and the other data are encoded as virtual CAN messages; a data register is set up to buffer the driving-data structures.
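As an illustration of the software synchronization in step (11), the following Python sketch aligns asynchronous radar, navigation and vehicle-dynamics samples to each camera frame timestamp by nearest-timestamp matching; the field names and the 50 ms tolerance are illustrative assumptions, not values taken from the patent.

```python
from bisect import bisect_left

def nearest(samples, t, tol=0.05):
    """Return the payload whose timestamp is closest to t (within tol seconds), else None.
    `samples` is a list of (timestamp, payload) tuples sorted by timestamp."""
    if not samples:
        return None
    ts = [s[0] for s in samples]
    i = bisect_left(ts, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(samples)]
    j = min(candidates, key=lambda k: abs(ts[k] - t))
    return samples[j][1] if abs(ts[j] - t) <= tol else None

def build_frames(camera, radar, nav, dynamics):
    """Construct synchronized driving-data structures, one per camera frame."""
    frames = []
    for t, image in camera:                      # the camera stream drives the timeline
        frames.append({
            "time": t,                            # GPS or system clock timestamp
            "image": image,
            "radar": nearest(radar, t),           # raw radar target list
            "nav": nearest(nav, t),               # integrated navigation sample
            "dynamics": nearest(dynamics, t),     # speed, throttle, brake, steering
        })
    return frames
```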
Preferably, step ② comprises the following steps: (21) definition of the automated driving system's intermediate-result output interfaces; (22) target matching consistency detection; (23) localization landmark semantic output; (24) extreme vehicle operation detection; (25) human-machine decision consistency detection.
Preferably, the specific method of target matching consistency detection in step (22) is as follows:
Based on the outputs of the millimeter-wave radar and the visual perception system, driving data whose time-series matching distance exceeds the preset tolerance threshold dmin are extracted. The matching distance d is computed, over the target's life cycle of n frames, from the target coordinates output by the radar and the target coordinates output by the on-board camera (the formula is given as an image in the original and is not reproduced here); if d > dmin, the driving data of the corresponding segment are uploaded.
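A minimal sketch of one plausible reading of this matching-distance check, assuming d is the mean Euclidean distance between the radar and camera target positions over the n frames of the target's life cycle (the exact formula is not reproduced in the source); coordinates are taken to be in the bird's-eye vehicle frame, and the threshold value is a placeholder.

```python
import math

def matching_distance(radar_track, camera_track):
    """Mean Euclidean distance between radar and camera positions of the same
    target over its life cycle (one (x, y) pair per frame, vehicle coordinates)."""
    n = min(len(radar_track), len(camera_track))
    if n == 0:
        return 0.0
    return sum(math.dist(radar_track[i], camera_track[i]) for i in range(n)) / n

def needs_upload(radar_track, camera_track, d_min=1.5):
    """Flag the data segment for upload when the two tracks disagree by more than d_min.
    The 1.5 m default is an assumed value; the patent only names a threshold dmin."""
    return matching_distance(radar_track, camera_track) > d_min
```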
Preferably, the specific method of extreme vehicle operation detection in step (24) is as follows:
The yaw rate, longitudinal acceleration, longitudinal deceleration and lateral acceleration output by the inertial navigation system are time-series encoded, and the time-series-encoded data are classified into extreme vehicle operations using a numerical analysis method or a machine learning method; extreme vehicle operations are divided into hard acceleration, hard deceleration and sharp turning.
The numerical analysis method is: if the measured longitudinal acceleration exceeds the set threshold A1min more than N consecutive times, a hard acceleration is confirmed; if the measured longitudinal deceleration falls below the set threshold A2min more than N consecutive times, a hard deceleration is confirmed; if the measured lateral acceleration and yaw rate are respectively greater than the set thresholds AYmin and Tmin more than N consecutive times, a sharp turn is confirmed.
The machine learning method is: a support vector machine or a long short-term memory network is trained offline on time-series samples of extreme vehicle operations; the trained model is deployed on the vehicle analysis terminal, taking time-series-encoded inertial navigation data as input and outputting event signals for hard acceleration, hard deceleration or sharp turning.
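A minimal sketch of the numerical-analysis variant described above, assuming one sample per time step; the threshold values are placeholders, and N = 3 follows the default given in the embodiment.

```python
def exceeds_n_consecutive(values, threshold, n=3, below=False):
    """True if `values` crosses `threshold` on more than n consecutive samples.
    With below=True the test is value < threshold (used for deceleration)."""
    run = 0
    for v in values:
        hit = v < threshold if below else v > threshold
        run = run + 1 if hit else 0
        if run > n:
            return True
    return False

def classify_extreme_operation(ax, decel, ay, yaw_rate,
                               a1_min=2.5, a2_min=-3.0, ay_min=3.0, t_min=0.2, n=3):
    """Return the extreme-operation events detected in one encoded window.
    The numeric thresholds are assumed values, not taken from the patent."""
    events = []
    if exceeds_n_consecutive(ax, a1_min, n):
        events.append("hard_acceleration")
    if exceeds_n_consecutive(decel, a2_min, n, below=True):
        events.append("hard_deceleration")
    if exceeds_n_consecutive(ay, ay_min, n) and exceeds_n_consecutive(yaw_rate, t_min, n):
        events.append("sharp_turn")
    return events
```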
Preferably, the specific method of human-machine decision consistency detection in step (25) is as follows:
Based on the planning-layer output and the vehicle pose output, driving data segments in which, over the preset preview distance, the deviation between the actual trajectory and the planned trajectory exceeds the preset threshold Dmin are extracted. The normalized trajectory deviation D is computed in the vehicle coordinate system from the actual trajectory point coordinates [Xi, Yi] and the planned trajectory point coordinates [xi, yi], where m is the number of trajectory points (the formula is given as an image in the original and is not reproduced here); if D > Dmin, the driving data of the corresponding segment are uploaded.
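A minimal sketch of one plausible reading of the normalized trajectory deviation, assuming D is the mean point-wise Euclidean distance between the m actual and planned trajectory points in the vehicle frame (the exact formula is not reproduced in the source); the threshold is a placeholder.

```python
import math

def normalized_trajectory_deviation(actual, planned):
    """Mean point-wise distance between actual [Xi, Yi] and planned [xi, yi]
    trajectory points, both expressed in the vehicle coordinate system."""
    m = min(len(actual), len(planned))
    if m == 0:
        return 0.0
    return sum(math.dist(actual[i], planned[i]) for i in range(m)) / m

# Example: 10 preview points at 1 m spacing (the defaults quoted in the embodiment).
actual = [(i * 1.0, 0.2 * i) for i in range(10)]    # driver's realized path
planned = [(i * 1.0, 0.0) for i in range(10)]       # planner's path (straight ahead)
D_min = 0.5                                         # assumed threshold value
if normalized_trajectory_deviation(actual, planned) > D_min:
    print("human-machine decision mismatch: upload this segment")
```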
Preferably, the specific method of defining the automated driving system's intermediate-result output interfaces in step (21) is: the interfaces comprise perception target output, localization landmark semantic output, vehicle pose output and planning decision output;
Perception target output: comprising vision-system target output, millimeter-wave radar system target output and millimeter-wave radar system raw target point-cloud output;
Localization landmark semantic output: localization landmark semantics are output as binary layers, comprising drivable-area semantic output, lane-boundary semantic output and indication road sign semantic output;
Vehicle pose output: comprising vehicle position, vehicle speed, heading angle and the 6-axis inertial sensor output;
Planning decision output: a preview-point trajectory is given at fixed longitudinal spatial intervals.
Preferably, the specific method of localization landmark semantic output in step (23) is as follows:
The localization landmark semantic output is constructed from the perception module's localization semantic output together with the vehicle pose estimate from the localization module; it is produced by either a key-frame semantic output method or a compressed full-semantic output method.
The key-frame semantic output method is: based on the odometry integration of the localization module, one key semantic output frame is extracted every 50 meters and a key-frame semantic localization register is constructed, storing the key frames together with the vehicle longitude/latitude at the corresponding times; every 20 key frames, the data are packed and compressed and an upload request signal is issued.
The compressed full-semantic output method covers lane-level semantics and guidance-level semantics: the number of lanes, lane width, boundary type, current lane and off-center distance are extracted by post-processing from the vision system's lane and drivable-area semantic output layers, and road-surface landmarks and spatial (overhead) landmarks are extracted by post-processing from the indication road sign semantic output layer; these data, together with the vehicle longitude/latitude at the corresponding times, are packed and compressed once per kilometer travelled and an upload request signal is issued.
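A minimal sketch of the key-frame semantic output path described above, assuming travelled distance is accumulated from per-frame odometry increments; packing is triggered after 20 key frames (roughly 1 km at one key frame per 50 m), and zlib/pickle are used purely for illustration in place of whatever packing and compression the system actually applies.

```python
import zlib, pickle

class KeyframeSemanticRegister:
    """Collect one binary semantic layer every 50 m and pack every 20 key frames."""

    def __init__(self, keyframe_spacing_m=50.0, frames_per_pack=20):
        self.spacing = keyframe_spacing_m
        self.frames_per_pack = frames_per_pack
        self.distance_since_keyframe = 0.0
        self.keyframes = []          # (lat, lon, semantic_layer) tuples

    def update(self, travelled_m, lat, lon, semantic_layer):
        """Feed one processed frame; return a compressed packet once 20 key frames
        have accumulated (the caller would then issue the upload request signal)."""
        self.distance_since_keyframe += travelled_m
        if self.distance_since_keyframe >= self.spacing:
            self.distance_since_keyframe = 0.0
            self.keyframes.append((lat, lon, semantic_layer))
        if len(self.keyframes) >= self.frames_per_pack:
            packet = zlib.compress(pickle.dumps(self.keyframes))
            self.keyframes = []
            return packet            # ready to hand to the communication module
        return None
```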
Preferably, the specific method of data communication in step ③ is: according to the online data analysis results of step ②, the driving data queue obtained in step ① is compressed and the resulting compressed file is named according to predefined rules; the compressed data are transparently transmitted to the server over a 4G network or a wireless network using the TCP or UDP protocol.
The specific method of step ④, in which the server side receives and stores the vehicle-side driving data, is: on the server side, the driving data sent by the on-board terminal are received over the TCP or UDP protocol and stored in subdirectories named after the data acquisition date.
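A minimal sketch of the upload preparation in step ③ under assumed conventions: zlib stands in for the Lz4 compression mentioned in the embodiment, and the file-naming rule (trigger type plus timestamp) is a placeholder, since the patent only states that the file is named according to predefined rules.

```python
import time, zlib

def prepare_upload(frames: bytes, trigger: str) -> tuple[str, bytes]:
    """Compress one buffered driving-data queue and name the compressed file.

    `frames` is the encoded driving-data queue from the register; `trigger` is the
    analysis result that requested the upload (e.g. "extreme_operation",
    "perception_mismatch", "human_machine_mismatch", "custom_request")."""
    name = f"{trigger}_{time.strftime('%Y%m%d_%H%M%S')}.bin.z"   # placeholder naming rule
    payload = zlib.compress(frames)     # Lz4 in the embodiment; zlib here for illustration
    return name, payload                # handed to the communication module for transfer

# Example: package a register snapshot flagged by extreme-operation detection.
name, payload = prepare_upload(b"\x00" * 1024, "extreme_operation")
print(name, len(payload), "bytes after compression")
```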
The beneficial effects of the invention are: based on on-board WiFi or 4G network communication and edge intelligent computing, the invention filters out the data streams that, in most driving scenarios, contribute little to system optimization and upgrading, and can collect the compressed landmark data required for automated-driving localization, accident scene data, scene data under specific extreme vehicle operations, scene data with abnormal matching between radar and camera detections, and scene data with large human-machine response differences. The invention achieves the following effects: (i) it meets the development and verification needs of the perception, localization and planning/decision modules of an L3-level automated driving system; (ii) automatic extreme-scenario detection greatly reduces the bandwidth occupied by data recording and uploading; (iii) the corresponding algorithm modules can be run online, maximizing data mining and verification on the front-end platform and greatly reducing the human resources required for post-processing data screening.
Description of the Drawings
FIG. 1 is a schematic top view of a vehicle according to the invention.
FIG. 2 is a flow chart of an algorithm of the invention.
Detailed Description of the Embodiments
The technical solution of the invention is further described below through an embodiment, with reference to the accompanying drawings.
Embodiment: the method for collecting, analyzing and uploading road driving data of an L3-level automated driving system in this embodiment is based on on-board WiFi or 4G network communication and edge intelligent computing, and satisfies the data recording needs of the development and verification of L3-level automated driving functions. A vehicle equipped with an L3-level automated driving system is shown in FIG. 1; it carries cameras (vision), millimeter-wave radars, an integrated navigation device and an in-vehicle data processing terminal. The in-vehicle data processing terminal consists of three parts: a data acquisition module, an intelligent analysis module and a communication module. The data acquisition module mainly contains the various data interfaces, an acquisition chip (microcontroller) and a video encoding module, and is responsible for acquiring, synchronizing and encoding the individual driving data streams; the localization module mainly contains the GPS and inertial navigation modules and is responsible for localization landmark extraction and for associating event records with the map; the intelligent analysis module mainly contains the L3 automated driving system processing terminal and the on-board intelligent analysis terminal (integrating a multi-core ARM processor and a neural-network acceleration unit), and is responsible for processing the scene data stream in real time and selecting the scene data to be recorded according to preset rules; the communication module contains the 4G and WiFi modules and is mainly responsible for compressing and uploading the encoded driving data.
The method for collecting, analyzing and uploading road driving data of the L3-level automated driving system, as shown in FIG. 2, comprises the following steps:
① Vehicle-side driving data collection:
The vehicle-side driving data consist of scene data (raw data from the radar and vision systems), position and attitude information (integrated navigation device data) and vehicle dynamics data (vehicle speed, throttle, brake and steering inputs); vehicle-side driving data collection comprises the following steps:
(11) Data acquisition and synchronization: software synchronization is used, with the acquired GPS clock or the on-board terminal's system clock synchronizing the individual driving data streams, to construct a driving-data structure containing time, image data, raw radar data, integrated navigation data and vehicle dynamics parameters; the driving-data structure contains the following attributes:
Time: the time point to which the driving data correspond;
Image data: 8 channels of raw image input (Img1–Img8);
Raw radar data: the target list output by the radar system (default maximum of 32 targets: Obj1–Obj32);
Integrated navigation data: including longitude/latitude and the vehicle's 6-degree-of-freedom motion information (three-axis acceleration and three-axis angular velocity);
Vehicle dynamics parameters: including vehicle speed, steering-wheel angle, throttle travel, brake travel, etc.;
(12) Data encoding and buffering: image data are encoded as H.264 or H.265 and the other data are encoded as virtual CAN messages (64-bit); a data register of configurable length (default 300, i.e. 12 seconds of 25 fps data or 10 seconds of 30 fps data) is set up to buffer the driving-data structures.
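A minimal sketch of the buffered driving-data structure and the fixed-length data register described above; the field names are assumptions, and collections.deque with maxlen=300 stands in for the configurable-length register (300 frames = 12 s at 25 fps or 10 s at 30 fps).

```python
from collections import deque
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DrivingFrame:
    """One synchronized driving-data structure (one entry of the data register)."""
    time: float                                             # GPS or system clock timestamp
    images: List[bytes] = field(default_factory=list)       # up to 8 encoded camera frames
    radar_objects: List[dict] = field(default_factory=list) # up to 32 radar targets
    nav: Optional[dict] = None                              # lat/lon + 3-axis accel + 3-axis rate
    dynamics: Optional[dict] = None                         # speed, steering angle, throttle, brake

# Default register length of 300 frames; the oldest frame is discarded automatically
# once the register is full, so the buffer always holds the most recent segment.
driving_register: deque[DrivingFrame] = deque(maxlen=300)

def buffer_frame(frame: DrivingFrame) -> None:
    driving_register.append(frame)
```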
② Online data analysis of the collected vehicle-side driving data:
The perception, fusion, localization and planning/decision algorithm modules running on the L3 automated driving system processing terminal output their respective custom intermediate-level results; after post-processing by the intelligent on-board analysis terminal, data upload instructions are sent to the communication module according to the upload types entered by the user. The details comprise the following steps:
(21) Definition of the automated driving system's intermediate-result output interfaces, specifically comprising perception target output, localization landmark semantic output, vehicle pose output and planning decision output;
Perception target output: including vision-system target output (forward-view, blind-spot and rear-view; a single camera outputs at most 16 targets by default, with attributes such as target class, longitudinal distance, lateral distance and relative speed), millimeter-wave radar system target output (front/rear 77 GHz radars and blind-spot 24 GHz radars; a single radar outputs at most 16 targets by default, with attributes such as radial distance, angle and relative speed) and millimeter-wave radar system raw target point-cloud output (150 point targets per radar by default, with attributes such as reflectivity, radial distance, angle and relative speed);
Localization landmark semantic output: localization landmark semantics are output as binary layers, comprising drivable-area semantic output, lane-boundary semantic output and indication road sign semantic output;
Vehicle pose output: including vehicle position (longitude/latitude, or planar position in a custom initialized world coordinate system), vehicle speed, heading angle and the 6-axis inertial sensor output (three-axis accelerations and the yaw, pitch and roll angular rates);
Planning decision output: a preview-point trajectory (10 preview points by default) is given at fixed longitudinal spatial intervals (1 meter by default);
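A rough sketch of how the intermediate-result output interfaces above could be laid out as data types; all field names, and the capacity limits echoed from the stated defaults, are illustrative assumptions rather than the patent's actual message definitions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PerceptionTarget:
    source: str                 # e.g. "camera_front", "radar_front_77ghz"
    cls: str                    # target class (vision targets only)
    longitudinal_m: float
    lateral_m: float
    relative_speed_mps: float

@dataclass
class LandmarkSemantics:
    drivable_area: bytes        # binary layer, e.g. run-length encoded
    lane_boundaries: bytes
    indication_signs: bytes

@dataclass
class VehiclePose:
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
    imu: Tuple[float, float, float, float, float, float]   # 3-axis accel + 3-axis rate

@dataclass
class PlanningOutput:
    # 10 preview points at 1 m longitudinal spacing (the defaults stated above)
    preview_points: List[Tuple[float, float]] = field(default_factory=list)
```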
(22) Target matching consistency detection, specifically:
Based on the outputs of the millimeter-wave radar and the visual perception system, driving data whose time-series matching distance exceeds the preset tolerance threshold dmin are extracted; the matching distance d is computed, over the target's life cycle of n frames, from the target coordinates output by the radar and the target coordinates output by the on-board camera (the formula is given as an image in the original and is not reproduced here); if d > dmin, the driving data of the corresponding segment are uploaded;
(23) Localization landmark semantic output, specifically:
The localization landmark semantic output is constructed from the perception module's localization semantic output together with the vehicle pose estimate from the localization module; it is produced by either a key-frame semantic output method or a compressed full-semantic output method;
The key-frame semantic output method is: based on the odometry integration of the localization module, one key semantic output frame (i.e. the post-processed binary semantic output layer of the vision system) is extracted every 50 meters, and a key-frame semantic localization register is constructed, storing the key frames together with the vehicle longitude/latitude at the corresponding times; every 20 key frames (i.e. every 1 km travelled), the data are packed and compressed and an upload request signal is issued;
The compressed full-semantic output method covers lane-level semantics and guidance-level semantics: the number of lanes, lane width, boundary type, current lane and off-center distance are extracted by post-processing from the vision system's lane and drivable-area semantic output layers, and road-surface landmarks and spatial (overhead) landmarks are extracted by post-processing from the indication road sign semantic output layer; these data, together with the vehicle longitude/latitude at the corresponding times, are packed and compressed once per kilometer travelled and an upload request signal is issued;
(24) Extreme vehicle operation detection, specifically:
The yaw rate, longitudinal acceleration, longitudinal deceleration and lateral acceleration output by the inertial navigation system are time-series encoded, and the time-series-encoded data are classified into extreme vehicle operations using a numerical analysis method or a machine learning method (SVM or LSTM); extreme vehicle operations are divided into hard acceleration, hard deceleration and sharp turning;
The numerical analysis method is: if the measured longitudinal acceleration exceeds the set threshold A1min more than N consecutive times (default N = 3), a hard acceleration is confirmed; if the measured longitudinal deceleration falls below the set threshold A2min more than N consecutive times (default N = 3), a hard deceleration is confirmed; if the measured lateral acceleration and yaw rate are respectively greater than the set thresholds AYmin and Tmin more than N consecutive times (default N = 3), a sharp turn is confirmed;
The machine learning method is: a support vector machine (SVM) or a long short-term memory network (LSTM) is trained offline on time-series samples of extreme vehicle operations; the trained models are deployed on the vehicle analysis terminal (a binary classifier for hard acceleration/deceleration and a binary classifier for sharp turning), taking time-series-encoded inertial navigation data as input and outputting event signals for hard acceleration, hard deceleration or sharp turning;
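A minimal sketch of the SVM variant using scikit-learn, trained on synthetic windows of time-series-encoded inertial data purely for illustration; the window length, feature layout, labels and the "hard acceleration" class are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
WINDOW = 25                      # assumed number of samples per encoded window

def make_window(extreme: bool) -> np.ndarray:
    """Synthetic [ax, ay, yaw_rate] window, flattened into one feature vector."""
    base = rng.normal(0.0, 0.3, size=(WINDOW, 3))
    if extreme:
        base[:, 0] += 3.0        # sustained longitudinal-acceleration offset
    return base.ravel()

# Offline training on labelled extreme / normal driving windows.
X = np.array([make_window(extreme=i % 2 == 0) for i in range(200)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])   # 1 = hard acceleration
clf = SVC(kernel="rbf").fit(X, y)

# On the vehicle analysis terminal: classify a newly encoded window.
event = clf.predict(make_window(extreme=True).reshape(1, -1))[0]
print("hard_acceleration" if event == 1 else "normal")
```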
(25) Human-machine decision consistency detection, specifically:
Based on the planning-layer output and the vehicle pose output, driving data segments in which, over the preset preview distance, the deviation between the actual trajectory and the planned trajectory exceeds the preset threshold Dmin are extracted; the normalized trajectory deviation D is computed in the vehicle coordinate system from the actual trajectory point coordinates [Xi, Yi] and the planned trajectory point coordinates [xi, yi], where m is the number of trajectory points (default 10; the formula is given as an image in the original and is not reproduced here); if D > Dmin, the driving data of the corresponding segment are uploaded;
(26) Custom data request: according to a request signal input at the human-machine interaction port, a data upload instruction is sent according to preset rules, i.e. the driving data held in the register of step (12) at the time of the request are uploaded;
③ Data communication, preparing the vehicle-side driving data for upload, specifically:
According to the online data analysis results of step ②, the driving data queue in the register of step (12) is compressed (Lz4 compression may be used) and the resulting compressed file is named according to predefined rules; the in-vehicle data processing terminal is set up as a server, and the compressed data are transparently transmitted to the storage server over a 4G network or a wireless network using the TCP or UDP protocol;
④ The server side receives and stores the vehicle-side driving data, specifically:
On the storage server side (the cloud), a client is set up that receives, over the TCP or UDP protocol, the driving data sent by the in-vehicle data processing terminal; the driving data are stored under an NTFS file system in subdirectories named after the data acquisition date.
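A minimal sketch of the storage side, assuming the cloud client connects to the in-vehicle terminal (which the embodiment sets up as the server), reads one upload with a simple assumed "name|size" header framing, and writes it into a subdirectory named after the date; the host, port, framing and directory layout are placeholders.

```python
import os, socket, datetime

def fetch_and_store(vehicle_host: str, port: int = 9000, root: str = "driving_data") -> str:
    """Cloud-side client: pull one compressed driving-data file from the in-vehicle
    terminal and store it under <root>/<YYYY-MM-DD>/<name>."""
    with socket.create_connection((vehicle_host, port)) as conn, conn.makefile("rb") as stream:
        name, size = stream.readline().decode().strip().split("|")   # assumed framing
        payload = stream.read(int(size))
    # Subdirectory named by date (the reception date stands in for the acquisition date here).
    day_dir = os.path.join(root, datetime.date.today().isoformat())
    os.makedirs(day_dir, exist_ok=True)
    path = os.path.join(day_dir, os.path.basename(name))
    with open(path, "wb") as f:
        f.write(payload)
    return path
```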
Based on on-board WiFi or 4G network communication and edge intelligent computing, the invention can automatically collect and upload the following driving data in road driving scenarios with a human driver:
1. Localization landmark data: as an option, structured semantics (including road-surface landmarks, spatial landmarks, etc.) are extracted for the localization landmarks in the defined application scenarios of the L3 system functions (including parking scenarios, highway scenarios, etc.), compressed according to a predefined data structure and uploaded to the server side (cloud).
2. Accident scene data: as an option, the vehicle's collision state is identified from the collision sensor signal, and the corresponding scene data are saved according to predefined accident recording rules.
3. Extreme vehicle operation scene data: as an option, extreme vehicle dynamics states such as sharp turns and hard acceleration/deceleration are identified from the measurements of a 3-axis or 6-axis inertial navigation system (gyroscope); the corresponding scene data are recorded and uploaded according to predefined event association rules.
4. Abnormal perception-matching scene data: as an option, based on the scene perception matching results of the millimeter-wave radar and the vision system, the corresponding scene data are recorded and uploaded according to a preset target matching tolerance (distance in the bird's-eye vehicle coordinate system or target overlap in the image coordinate system).
5. Mismatched human-machine response scene data: as an option, a "virtual driver", i.e. a local trajectory planning algorithm module, is run on the in-vehicle computing platform and matched against the real vehicle kinematic state (i.e. the real driver's vehicle operation); driving scene data in which the human-machine trajectory similarity does not meet the requirement are recorded according to preset rules.
6. Custom-requested data: as an option, a driver/tester input interface is provided on the interactive terminal, and the current driving scene data can be requested and recorded with one click in a preset manner (by default, a full record of the driving data over a preset duration).
Based on on-board WiFi or 4G network communication and edge intelligent computing, the invention filters out the data streams that, in most driving scenarios, contribute little to system optimization and upgrading, and can collect the compressed landmark data required for automated-driving localization, accident scene data, scene data under specific extreme vehicle operations, scene data with abnormal matching between radar and camera detections, and scene data with large human-machine response differences. The invention achieves the following effects: (i) it meets the development and verification needs of the perception, localization and planning/decision modules of an L3-level automated driving system; (ii) automatic extreme-scenario detection greatly reduces the bandwidth occupied by data recording and uploading; (iii) the corresponding algorithm modules can be run online, maximizing data mining and verification on the front-end platform and greatly reducing the human resources required for post-processing data screening.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910454082.XA CN110264586A (en) | 2019-05-28 | 2019-05-28 | Road driving data collection, analysis and upload method for an L3-level automated driving system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910454082.XA CN110264586A (en) | 2019-05-28 | 2019-05-28 | Road driving data collection, analysis and upload method for an L3-level automated driving system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110264586A true CN110264586A (en) | 2019-09-20 |
Family
ID=67915760
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910454082.XA Pending CN110264586A (en) | 2019-05-28 | 2019-05-28 | Road driving data collection, analysis and upload method for an L3-level automated driving system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110264586A (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110703289A (en) * | 2019-10-29 | 2020-01-17 | 杭州鸿泉物联网技术股份有限公司 | Track data reporting method and moving track restoring method |
CN110785718A (en) * | 2019-09-29 | 2020-02-11 | 驭势科技(北京)有限公司 | Vehicle-mounted automatic driving test system and test method |
CN110852192A (en) * | 2019-10-23 | 2020-02-28 | 上海能塔智能科技有限公司 | Method and device for determining noise data, storage medium, terminal and vehicle |
CN111619482A (en) * | 2020-06-08 | 2020-09-04 | 武汉光庭信息技术股份有限公司 | Vehicle driving data acquisition and processing system and method |
CN111710158A (en) * | 2020-05-28 | 2020-09-25 | 深圳市元征科技股份有限公司 | Vehicle data processing method and related equipment |
CN111845728A (en) * | 2020-06-22 | 2020-10-30 | 福瑞泰克智能系统有限公司 | Driving assistance data acquisition method and system |
CN112286925A (en) * | 2020-12-09 | 2021-01-29 | 新石器慧义知行智驰(北京)科技有限公司 | Method for cleaning data collected by unmanned vehicle |
CN112346969A (en) * | 2020-10-28 | 2021-02-09 | 武汉极目智能技术有限公司 | AEB development verification system and method based on data acquisition platform |
CN113479218A (en) * | 2021-08-09 | 2021-10-08 | 哈尔滨工业大学 | Roadbed automatic driving auxiliary detection system and control method thereof |
CN113743356A (en) * | 2021-09-15 | 2021-12-03 | 东软睿驰汽车技术(沈阳)有限公司 | Data acquisition method and device and electronic equipment |
CN113903102A (en) * | 2021-10-29 | 2022-01-07 | 广汽埃安新能源汽车有限公司 | Adjustment information acquisition method, adjustment method, device, electronic device and medium |
CN114047003A (en) * | 2021-12-22 | 2022-02-15 | 吉林大学 | A data-triggered recording control method based on dynamic time warping algorithm |
CN114154510A (en) * | 2021-11-30 | 2022-03-08 | 江苏智能网联汽车创新中心有限公司 | Control method and device for automatic driving vehicle, electronic equipment and storage medium |
CN114354220A (en) * | 2022-01-07 | 2022-04-15 | 苏州挚途科技有限公司 | Driving data processing method and device and electronic equipment |
CN114548248A (en) * | 2022-02-14 | 2022-05-27 | 中汽研(天津)汽车工程研究院有限公司 | Classification triggering uploading method and system for driving data of automatic driving automobile |
CN114608592A (en) * | 2022-02-10 | 2022-06-10 | 上海追势科技有限公司 | Crowdsourcing method, system, equipment and storage medium for map |
CN114861220A (en) * | 2022-04-24 | 2022-08-05 | 宁波均胜智能汽车技术研究院有限公司 | Automatic driving data processing method and system conforming to data privacy security |
CN115107791A (en) * | 2022-04-06 | 2022-09-27 | 东软睿驰汽车技术(沈阳)有限公司 | Method for triggering data acquisition, method and device for data acquisition |
CN115203216A (en) * | 2022-05-23 | 2022-10-18 | 中国测绘科学研究院 | Geographic information data classification grading and protecting method and system for automatic driving map online updating scene |
CN115225422A (en) * | 2022-06-30 | 2022-10-21 | 际络科技(上海)有限公司 | Vehicle CAN bus data acquisition method and device |
CN116238545A (en) * | 2023-05-12 | 2023-06-09 | 禾多科技(北京)有限公司 | Automatic driving track deviation detection method and detection system |
CN116485626A (en) * | 2023-04-10 | 2023-07-25 | 北京辉羲智能科技有限公司 | Automatic driving SoC chip for sensor data dump |
CN116664964A (en) * | 2023-07-31 | 2023-08-29 | 福思(杭州)智能科技有限公司 | Data screening method, device, vehicle-mounted equipment and storage medium |
CN118013234A (en) * | 2024-04-08 | 2024-05-10 | 浙江吴霞科技有限公司 | Multi-source heterogeneous big data-based key vehicle driver portrait intelligent generation system |
CN118656312A (en) * | 2024-08-19 | 2024-09-17 | 元拓科技(大连)有限公司 | A data processing method and system for chip simulation software |
CN120086663A (en) * | 2025-04-30 | 2025-06-03 | 中广核(北京)新能源科技有限公司 | A new energy station terminal collection method |
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104463244A (en) * | 2014-12-04 | 2015-03-25 | 上海交通大学 | Aberrant driving behavior monitoring and recognizing method and system based on smart mobile terminal |
US20170371036A1 (en) * | 2015-02-06 | 2017-12-28 | Delphi Technologies, Inc. | Autonomous vehicle with unobtrusive sensors |
CN105270411A (en) * | 2015-08-25 | 2016-01-27 | 南京联创科技集团股份有限公司 | Analysis method and device of driving behavior |
CN105717939A (en) * | 2016-01-20 | 2016-06-29 | 李万鸿 | Informatization and networking implementation method of road pavement supporting automobile unmanned automatic driving |
CN106114515A (en) * | 2016-06-29 | 2016-11-16 | 北京奇虎科技有限公司 | Car steering behavior based reminding method and system |
CN107784587A (en) * | 2016-08-25 | 2018-03-09 | 大连楼兰科技股份有限公司 | A Driving Behavior Evaluation System |
CN106441319A (en) * | 2016-09-23 | 2017-02-22 | 中国科学院合肥物质科学研究院 | A system and method for generating a lane-level navigation map of an unmanned vehicle |
CN107610464A (en) * | 2017-08-11 | 2018-01-19 | 河海大学 | A kind of trajectory predictions method based on Gaussian Mixture time series models |
CN107564280A (en) * | 2017-08-22 | 2018-01-09 | 王浩宇 | Driving behavior data acquisition and analysis system and method based on environment sensing |
CN107895501A (en) * | 2017-09-29 | 2018-04-10 | 大圣科技股份有限公司 | Unmanned car steering decision-making technique based on the training of magnanimity driving video data |
CN108860165A (en) * | 2018-05-11 | 2018-11-23 | 深圳市图灵奇点智能科技有限公司 | Vehicle assistant drive method and system |
CN108646748A (en) * | 2018-06-05 | 2018-10-12 | 北京联合大学 | A kind of place unmanned vehicle trace tracking method and system |
CN109117718A (en) * | 2018-07-02 | 2019-01-01 | 东南大学 | A kind of semantic map structuring of three-dimensional towards road scene and storage method |
CN109459750A (en) * | 2018-10-19 | 2019-03-12 | 吉林大学 | A kind of more wireless vehicle trackings in front that millimetre-wave radar is merged with deep learning vision |
CN109471096A (en) * | 2018-10-31 | 2019-03-15 | 奇瑞汽车股份有限公司 | Multi-Sensor Target matching process, device and automobile |
CN109634282A (en) * | 2018-12-25 | 2019-04-16 | 奇瑞汽车股份有限公司 | Automatic driving vehicle, method and apparatus |
Cited By (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021056556A1 (en) * | 2019-09-29 | 2021-04-01 | 驭势科技(北京)有限公司 | Vehicle-mounted autonomous driving test system and test method |
CN110785718A (en) * | 2019-09-29 | 2020-02-11 | 驭势科技(北京)有限公司 | Vehicle-mounted automatic driving test system and test method |
CN110785718B (en) * | 2019-09-29 | 2021-11-02 | 驭势科技(北京)有限公司 | Vehicle-mounted automatic driving test system and test method |
CN110852192A (en) * | 2019-10-23 | 2020-02-28 | 上海能塔智能科技有限公司 | Method and device for determining noise data, storage medium, terminal and vehicle |
CN110852192B (en) * | 2019-10-23 | 2023-03-17 | 上海能塔智能科技有限公司 | Method and device for determining noise data, storage medium, terminal and vehicle |
CN110703289A (en) * | 2019-10-29 | 2020-01-17 | 杭州鸿泉物联网技术股份有限公司 | Track data reporting method and moving track restoring method |
CN110703289B (en) * | 2019-10-29 | 2021-07-06 | 杭州鸿泉物联网技术股份有限公司 | Track data reporting method and moving track restoring method |
CN111710158A (en) * | 2020-05-28 | 2020-09-25 | 深圳市元征科技股份有限公司 | Vehicle data processing method and related equipment |
CN111619482A (en) * | 2020-06-08 | 2020-09-04 | 武汉光庭信息技术股份有限公司 | Vehicle driving data acquisition and processing system and method |
CN111845728B (en) * | 2020-06-22 | 2021-09-21 | 福瑞泰克智能系统有限公司 | Driving assistance data acquisition method and system |
CN111845728A (en) * | 2020-06-22 | 2020-10-30 | 福瑞泰克智能系统有限公司 | Driving assistance data acquisition method and system |
CN112346969A (en) * | 2020-10-28 | 2021-02-09 | 武汉极目智能技术有限公司 | AEB development verification system and method based on data acquisition platform |
CN112346969B (en) * | 2020-10-28 | 2023-02-28 | 武汉极目智能技术有限公司 | AEB development verification system and method based on data acquisition platform |
CN112286925A (en) * | 2020-12-09 | 2021-01-29 | 新石器慧义知行智驰(北京)科技有限公司 | Method for cleaning data collected by unmanned vehicle |
CN113479218A (en) * | 2021-08-09 | 2021-10-08 | 哈尔滨工业大学 | Roadbed automatic driving auxiliary detection system and control method thereof |
CN113743356A (en) * | 2021-09-15 | 2021-12-03 | 东软睿驰汽车技术(沈阳)有限公司 | Data acquisition method and device and electronic equipment |
CN113743356B (en) * | 2021-09-15 | 2025-01-28 | 东软睿驰汽车技术(沈阳)有限公司 | Data collection method, device and electronic equipment |
CN113903102A (en) * | 2021-10-29 | 2022-01-07 | 广汽埃安新能源汽车有限公司 | Adjustment information acquisition method, adjustment method, device, electronic device and medium |
CN113903102B (en) * | 2021-10-29 | 2023-11-17 | 广汽埃安新能源汽车有限公司 | Adjustment information acquisition method, adjustment device, electronic equipment and medium |
CN114154510A (en) * | 2021-11-30 | 2022-03-08 | 江苏智能网联汽车创新中心有限公司 | Control method and device for automatic driving vehicle, electronic equipment and storage medium |
CN114047003A (en) * | 2021-12-22 | 2022-02-15 | 吉林大学 | A data-triggered recording control method based on dynamic time warping algorithm |
CN114354220A (en) * | 2022-01-07 | 2022-04-15 | 苏州挚途科技有限公司 | Driving data processing method and device and electronic equipment |
CN114608592A (en) * | 2022-02-10 | 2022-06-10 | 上海追势科技有限公司 | Crowdsourcing method, system, equipment and storage medium for map |
CN114548248A (en) * | 2022-02-14 | 2022-05-27 | 中汽研(天津)汽车工程研究院有限公司 | Classification triggering uploading method and system for driving data of automatic driving automobile |
CN115107791A (en) * | 2022-04-06 | 2022-09-27 | 东软睿驰汽车技术(沈阳)有限公司 | Method for triggering data acquisition, method and device for data acquisition |
CN114861220A (en) * | 2022-04-24 | 2022-08-05 | 宁波均胜智能汽车技术研究院有限公司 | Automatic driving data processing method and system conforming to data privacy security |
CN115203216B (en) * | 2022-05-23 | 2023-02-07 | 中国测绘科学研究院 | A method and system for classification, grading and protection of geographic information data for automatic driving map online update scenarios |
CN115203216A (en) * | 2022-05-23 | 2022-10-18 | 中国测绘科学研究院 | Geographic information data classification grading and protecting method and system for automatic driving map online updating scene |
CN115225422B (en) * | 2022-06-30 | 2023-10-03 | 际络科技(上海)有限公司 | Vehicle CAN bus data acquisition method and device |
CN115225422A (en) * | 2022-06-30 | 2022-10-21 | 际络科技(上海)有限公司 | Vehicle CAN bus data acquisition method and device |
CN116485626B (en) * | 2023-04-10 | 2024-03-12 | 北京辉羲智能科技有限公司 | Automatic driving SoC chip for sensor data dump |
CN116485626A (en) * | 2023-04-10 | 2023-07-25 | 北京辉羲智能科技有限公司 | Automatic driving SoC chip for sensor data dump |
CN116238545A (en) * | 2023-05-12 | 2023-06-09 | 禾多科技(北京)有限公司 | Automatic driving track deviation detection method and detection system |
CN116238545B (en) * | 2023-05-12 | 2023-10-27 | 禾多科技(北京)有限公司 | Automatic driving track deviation detection method and detection system |
CN116664964A (en) * | 2023-07-31 | 2023-08-29 | 福思(杭州)智能科技有限公司 | Data screening method, device, vehicle-mounted equipment and storage medium |
CN116664964B (en) * | 2023-07-31 | 2023-10-20 | 福思(杭州)智能科技有限公司 | Data screening method, device, vehicle-mounted equipment and storage medium |
CN118013234A (en) * | 2024-04-08 | 2024-05-10 | 浙江吴霞科技有限公司 | Intelligent generation system for key vehicle driver profiles based on multi-source heterogeneous big data |
CN118656312A (en) * | 2024-08-19 | 2024-09-17 | 元拓科技(大连)有限公司 | A data processing method and system for chip simulation software |
CN118656312B (en) * | 2024-08-19 | 2024-10-25 | 元拓科技(大连)有限公司 | Data processing method and system of chip simulation software |
CN120086663A (en) * | 2025-04-30 | 2025-06-03 | 中广核(北京)新能源科技有限公司 | A terminal data collection method for new energy stations |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110264586A (en) | Driving path data acquisition, analysis and uploading method for an L3-level automated driving system | |
CN112700470B (en) | Target detection and track extraction method based on traffic video stream | |
CN109948523B (en) | An object recognition method and its application based on the fusion of video and millimeter-wave radar data | |
CN113168708B (en) | Lane line tracking method and device | |
WO2023045935A1 (en) | Automated iteration method for target detection model, device and storage medium | |
EP3104284B1 (en) | Automatic labeling and learning of driver yield intention | |
CN207624060U (en) | A scene field data acquisition system for automated driving systems | |
CN113643431B (en) | A system and method for iterative optimization of visual algorithms | |
US20230048680A1 (en) | Method and apparatus for passing through barrier gate crossbar by vehicle | |
CN108639059A (en) | Driver manipulation behavior quantification method and device based on the principle of least action | |
CN117612127B (en) | Scene generation method and device, storage medium and electronic equipment | |
CN109874099B (en) | Flow control system for networked vehicle-mounted equipment | |
CN114693540A (en) | Image processing method and device and intelligent automobile | |
CN116888648A (en) | Critical scene extraction system in lightweight vehicles | |
CN118538055A (en) | Intelligent obstacle avoidance system for intelligent connected vehicles | |
WO2022115987A1 (en) | Method and system for automatic driving data collection and closed-loop management | |
CN118387109A (en) | Road small target detection and decision method based on multi-mode data fusion | |
EP4148600A1 (en) | Attentional sampling for long range detection in autonomous vehicles | |
CN115205311A (en) | Image processing method, image processing apparatus, vehicle, medium, and chip | |
CN115203457A (en) | Image retrieval method, image retrieval device, vehicle, storage medium and chip | |
CN111353636A (en) | A method and system for predicting ship driving behavior based on multimodal data | |
CN111077893B (en) | Navigation method based on multiple vanishing points, electronic equipment and storage medium | |
CN110956072B (en) | Driving skill training method based on big data analysis | |
CN115042814A (en) | Traffic light status recognition method, device, vehicle and storage medium | |
CN116959267B (en) | A machine vision-based crosswind warning method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
CB02 | Change of applicant information | Address after: 310051, 1st and 6th floors, No. 451 Internet of Things Street, Binjiang District, Hangzhou City, Zhejiang Province; Applicant after: Zhejiang Zero run Technology Co.,Ltd.; Address before: 310051, 1st and 6th floors, No. 451 Internet of Things Street, Binjiang District, Hangzhou City, Zhejiang Province; Applicant before: ZHEJIANG LEAPMOTOR TECHNOLOGY Co.,Ltd. |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-09-20 |