CN107886043A - Vision-based forward-view vehicle and pedestrian anti-collision early-warning system and method - Google Patents
- Publication number: CN107886043A (application CN201710595388.8A)
- Authority: CN (China)
- Prior art keywords: module, vehicle, area, pedestrian, detection module
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Description
Technical Field
The present invention relates to the technical field of image processing, and in particular to a vision-based forward-view vehicle and pedestrian anti-collision early-warning system and method.
Background Art
In a complex traffic environment, it is particularly important for a vehicle to recognize the cars and pedestrians in its forward-view region. Recognizing forward vehicles and pedestrians can effectively compensate for the limits of a driver's sensory abilities and thereby reduce traffic accidents. The recognition of forward vehicles and pedestrians while driving therefore has important theoretical and practical significance.
With advances in science and technology, vehicle and pedestrian recognition has also progressed. In practice, however, recognition performance degrades as scenes and object appearances change, because existing recognition algorithms classify solely on hand-crafted image features, which limits their applicability.
Summary of the Invention
To address the shortcomings of the prior art, the present invention provides a vision-based forward-view vehicle and pedestrian anti-collision early-warning system and method. A deep learning algorithm performs feature training and extraction on a large number of samples to obtain the highest-level representation of vehicle and pedestrian images, understanding those images from a different perspective so that the computer gains a deeper understanding of them while completing vehicle and pedestrian recognition.
To achieve the above object, the present invention adopts the following technical solution:
The vision-based forward-view vehicle and pedestrian anti-collision early-warning system comprises:
Forward-view acquisition module: collects video of the region in front of the vehicle and transmits it to the frame acquisition module;
Frame acquisition module: continuously extracts the collected video frame by frame for the Haar detection module;
Haar detection module: uses a Haar classifier to identify, in each acquired frame, the regions containing a leading vehicle or a pedestrian;
Horizontal-line matching module: performs horizontal straight-line detection on the regions returned by the Haar detection module, compares the regions in which horizontal lines are found with the Haar regions, and discards any Haar-detected region in which no horizontal line is detected;
DBN detection module: runs deep belief network (DBN) detection on the regions that survive horizontal-line matching to obtain the final leading-vehicle or pedestrian regions;
Collision warning module: from the final regions, predicts whether the distance between the vehicle and the detected leading vehicle or pedestrian will fall below the safe distance and, if so, issues a collision warning.
Further, the horizontal-line matching module comprises a contour detection module, a horizontal-line detection module, and a comparison module, wherein:
the contour detection module performs contour detection on the regions returned by the Haar detection module;
the horizontal-line detection module performs straight-line detection on the contours, keeps only the horizontal lines, and outputs the regions that contain a horizontal line;
the comparison module compares these regions with the Haar regions and discards any Haar-detected vehicle or pedestrian region in which the horizontal-line detection module found no horizontal line.
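The horizontal-line check can be sketched as follows. This is a minimal illustration rather than the patented implementation: the run-length threshold `min_frac` is an assumed parameter (the patent only requires the line to be shorter than the region width), and any binary edge map stands in for the contour-detection output.

```python
import numpy as np

def has_horizontal_line(edges: np.ndarray, min_frac: float = 0.5) -> bool:
    """Return True if any row of a binary edge map contains a contiguous
    horizontal run of edge pixels at least min_frac of the region width."""
    h, w = edges.shape
    min_run = max(1, int(min_frac * w))
    for row in edges:
        run = 0
        for px in row:
            run = run + 1 if px else 0
            if run >= min_run:
                return True
    return False

def filter_haar_regions(regions, edge_map):
    """Keep only Haar regions (x, y, w, h) whose edge patch shows a horizontal line."""
    kept = []
    for (x, y, w, h) in regions:
        patch = edge_map[y:y + h, x:x + w]
        if has_horizontal_line(patch):
            kept.append((x, y, w, h))
    return kept
```

In a full system the edge map would come from the contour detection step; here any binary array works.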
Further, the DBN detection module comprises an RBM training module and a Softmax regression module, wherein:
RBM training module: performs unsupervised restricted Boltzmann machine (RBM) training, taking as input the leading-vehicle or pedestrian regions output by the horizontal-line matching module;
Softmax regression module: classifies the unsupervised RBM training result, judging whether each detected region actually contains a leading vehicle or a pedestrian and, if so, marking it.
Still further, the DBN detection module comprises at least three stacked RBM training layers, the output of each RBM layer serving as the input of the next.
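The greedy RBM stacking described above can be sketched in numpy. This is an illustrative sketch only: CD-1 training, the layer sizes, the learning rate, and the epoch count are assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one step of contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def train_batch(self, v0):
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h0_sample @ self.W.T + self.b)   # one-step reconstruction
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)

def pretrain_stack(data, layer_sizes, epochs=5):
    """Greedy layer-wise pretraining: each RBM's hidden probabilities feed the next."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.train_batch(x)
        x = rbm.hidden_probs(x)
        rbms.append(rbm)
    return rbms, x
```

The top-layer activations `x` would then feed the Softmax regression stage.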
Further, the DBN detection module also comprises a back-propagation (BP) feedback module for feedback fine-tuning of the whole DBN.
Further, the collision warning module comprises a coordinate conversion module, a Kalman filter module, a safe-distance calculation module, and a comparison module, wherein:
Coordinate conversion module: locks onto the leading vehicle or pedestrian according to the final DBN detection result and converts coordinates to determine its position in the vehicle-body coordinate system;
Kalman filter module: uses a Kalman filter to predict the distance between the vehicle and the leading vehicle or pedestrian at the next time step;
Safe-distance calculation module: computes the safe distance between the vehicle and the leading vehicle or pedestrian;
Comparison module: compares the predicted next-step distance with the safe distance and issues a collision warning if the safe distance is greater than the predicted distance.
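The predict-and-compare logic can be sketched with a one-dimensional constant-velocity Kalman filter. The state model, noise covariances, and the reaction-plus-braking safe-distance formula are assumptions for illustration; the patent does not specify them.

```python
import numpy as np

class DistanceKalman:
    """1-D constant-velocity Kalman filter over [distance, relative speed]."""
    def __init__(self, d0, v0=0.0, dt=0.1):
        self.x = np.array([d0, v0], dtype=float)
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        self.H = np.array([[1.0, 0.0]])             # only distance is measured
        self.Q = 0.01 * np.eye(2)                   # process noise (assumed)
        self.R = np.array([[0.1]])                  # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]                            # predicted next-step distance

    def update(self, measured_distance):
        y = measured_distance - (self.H @ self.x)[0]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K.flatten() * y
        self.P = (np.eye(2) - K @ self.H) @ self.P

def safe_distance(speed_mps, reaction_time=1.0, decel=6.0, margin=2.0):
    """Reaction distance + braking distance + margin (standard model, assumed)."""
    return speed_mps * reaction_time + speed_mps**2 / (2 * decel) + margin

def should_warn(predicted_distance, ego_speed_mps):
    """Warn when the safe distance exceeds the predicted next-step distance."""
    return safe_distance(ego_speed_mps) > predicted_distance
```

Each new distance measurement (from the coordinate conversion step) calls `update`, and `predict` supplies the next-step distance fed to the comparison.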
The forward-view vehicle and pedestrian anti-collision early-warning method using the above system comprises the following steps:
S1: Install the forward-view acquisition module below the emblem at the front of the vehicle; after system start-up, it collects video and transmits it to the frame acquisition module.
S2: The frame acquisition module continuously extracts the frames of the video; each acquired frame is stored as an image.
S3: The Haar detection module uses a Haar classifier to detect and recognize leading vehicles and pedestrians in each frame, yielding candidate regions.
S4: The horizontal-line matching module performs horizontal straight-line detection on the Haar regions, compares the regions in which horizontal lines are found with the Haar regions, and discards any Haar-detected region in which no horizontal line is detected.
S5: The DBN detection module runs DBN detection on the surviving regions to further verify whether each contains a leading vehicle or pedestrian, marking those that do, which yields the final regions.
S6: The collision warning module locks onto the leading vehicle and pedestrian from the final regions, predicts whether their distance to the vehicle will fall below the safe distance and, if so, issues a collision warning.
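Steps S3 to S6 can be wired together as a simple per-frame loop. Every function and object name below is a hypothetical placeholder standing in for the corresponding module; none of them come from the patent itself.

```python
def process_frame(frame, detector, dbn, kalman, ego_speed):
    """One pass of the S3-S6 pipeline for a single video frame (sketch)."""
    candidates = detector.haar_detect(frame)              # S3: Haar candidate regions
    candidates = detector.horizontal_line_filter(
        frame, candidates)                                # S4: keep regions with a horizontal line
    finals = [r for r in candidates if dbn.is_target(r)]  # S5: DBN verification
    warnings = []
    for region in finals:                                 # S6: predict distance, compare
        kalman.update(detector.distance_to(region))
        if detector.safe_distance(ego_speed) > kalman.predict():
            warnings.append(region)
    return warnings
```

Calling this once per frame acquired in S2 produces the list of regions that trigger a collision warning.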
Step S4 specifically comprises:
4.1) The contour detection module performs contour detection on the regions detected by the Haar detection module.
4.2) The horizontal-line detection module performs straight-line detection on the contours and keeps only the horizontal lines, yielding the regions that contain a horizontal line.
4.3) The comparison module compares these regions with the Haar regions and discards any Haar-detected vehicle or pedestrian region in which no horizontal line was found, yielding the output of the horizontal-line matching module.
Step S5 specifically comprises:
5.1) The RBM training module performs unsupervised RBM training on the leading-vehicle and pedestrian regions output by the horizontal-line matching module, completing feature extraction.
5.2) The Softmax regression module classifies the extracted features, identifying the actual vehicle and pedestrian regions among the candidates output by the horizontal-line matching module.
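The classification in 5.2) amounts to a standard multinomial logistic (Softmax) layer over the top RBM features. A minimal sketch follows; the class labels and weights are illustrative assumptions, and in practice the weights would be learned jointly during BP fine-tuning.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def classify(features, W, b, labels=("background", "vehicle", "pedestrian")):
    """Return (label, confidence) for each feature row."""
    p = softmax(features @ W + b)
    idx = p.argmax(axis=1)
    return [(labels[i], float(p[r, i])) for r, i in enumerate(idx)]
```

Regions classified as "vehicle" or "pedestrian" would then be marked as final detections.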
Further, the method includes 5.3): the BP feedback module adjusts the whole DBN to improve the learning effect and bring the network parameters to an optimum.
Further, at least three RBM training layers are used; the output of each RBM layer serves as the input of the next, and the output of the last RBM layer serves as the input of the Softmax regression module.
In step 5.1), the regions output by the horizontal-line matching module are converted to images, scaled to a set size, adaptively binarized, and then scanned in raster order into arrays, which serve as the input to the RBM training layers.
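The preprocessing in step 5.1), resize, adaptive binarization, raster scan, can be sketched as follows. The 32x32 target size, nearest-neighbour scaling, and the mean-based local threshold are assumed parameters; the patent only says "a set size" and "adaptive binarization".

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize (stand-in for any scaling method)."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols]

def adaptive_binarize(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Threshold each block x block tile against its own mean."""
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            tile = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = (tile > tile.mean()).astype(np.uint8)
    return out

def region_to_rbm_input(img: np.ndarray, size: int = 32) -> np.ndarray:
    """Resize, binarize, and flatten in raster order for the first RBM layer."""
    return adaptive_binarize(resize_nearest(img, size, size)).ravel().astype(float)
```

The flattened binary vector is what the first RBM layer would consume.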
The beneficial effect of the present invention is as follows. Vehicles and pedestrians are non-rigid objects whose appearance varies widely from instance to instance, yet certain features remain invariant across these variations. By adaptively binarizing vehicle and pedestrian images, the invention preserves those characteristic features well enough to distinguish them from other objects, and uses the processed results to train the DBN for feature extraction and thus better recognition.
Brief Description of the Drawings
Fig. 1 is a schematic flow chart of an embodiment of the present invention.
Detailed Description
The present invention is further described below with reference to the accompanying drawing. The following embodiment is based on the technical solution and gives a detailed implementation and a specific operating process, but the scope of the present invention is not limited to this embodiment.
As shown in Fig. 1, the embodiment operates the vision-based forward-view vehicle and pedestrian anti-collision early-warning system described above, with the same modules, sub-modules, and method steps S1-S6.
The present invention is further illustrated and described below.
After the vehicle starts, the vision-based forward-view vehicle and pedestrian anti-collision early-warning system is activated through the in-vehicle equipment, and the hardware and software of each part of the system begin to work.
The system control panel then offers a choice of whether to enter learning mode. If learning mode is selected, the system enters the DBN learning module and keeps updating parameters through training until they are optimal. Once the parameters are tuned, the system can begin recognizing leading vehicles and pedestrians, specifically as follows.
1. Acquisition by the forward-view acquisition module (camera)
A camera is installed below the emblem at the front of the vehicle. After system start-up it captures video and transmits it to the frame acquisition module.
2. The frame acquisition module continuously extracts the frames of the video file. Each acquired frame exists as an image, so the remaining work is image processing.
3. A Haar classifier is used to detect the preceding vehicle and pedestrians in the video. This stage is the focus of the preliminary work; its goal is to detect preceding vehicles and pedestrians in the frame information. The method is to train a cascade classifier and then use it to recognize the vehicles and pedestrians in each frame. The training procedure of the cascade classifier is described below.
1) Generate the positive-sample description file;
2) Generate the classifier.
The Haar classifier cascades the strong classifiers trained by the AdaBoost algorithm and uses rectangular features together with the integral image. Each "Haar feature checker" is generated by thresholding the sums or differences of pixel values over rectangular regions of the image. After cascade training, the generated XML file can be used for vehicle and pedestrian detection, but the results are imperfect: along with true vehicles and pedestrians there are also falsely detected regions. To reduce these false detections, a horizontal-line matching module is introduced.
Cascade classifier training requires a large number of samples; the number of negative samples is kept at three times the number of positive samples.
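As context for the rectangular features and integral image mentioned above, the following is a minimal sketch (not the patent's implementation) of how an integral image reduces any rectangle sum to four lookups:

```python
def integral_image(img):
    """ii[y][x] is the sum of all pixels above and to the left of
    (y, x), inclusive; built with one pass of cumulative sums."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle using four lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total
```

A Haar feature is then just the difference of two or three such rectangle sums, which is why cascade evaluation is fast enough for per-frame detection.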
4. The horizontal-line matching module processes the detection results of the Haar classifier. Moving vehicles and walking pedestrians usually give rise to at least one horizontal straight line whose length is smaller than the width of the vehicle or pedestrian detection region, whereas the falsely detected regions in the Haar classifier's output usually lack such lines. Horizontal-line detection can therefore be incorporated into the detection of preceding vehicles and pedestrians to raise the detection accuracy.
The specific implementation steps are as follows:
1) Contour detection;
2) Line detection;
3) Keep only the horizontal lines;
4) Compare the candidate detection regions with the horizontal-line regions;
5) Discard the regions that fail the comparison.
Vehicles and pedestrians form horizontal straight lines to some degree, so matching these lines against the detection regions eliminates a portion of the false detection results.
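Steps 1) to 5) can be sketched as a filter over candidate boxes. The segment representation and the slope tolerance below are illustrative assumptions, not details from the patent:

```python
def is_horizontal(seg, max_slope=0.1):
    """A segment ((x1, y1), (x2, y2)) counts as horizontal when its
    vertical change is small relative to its horizontal extent."""
    (x1, y1), (x2, y2) = seg
    return abs(x2 - x1) > 0 and abs(y2 - y1) / abs(x2 - x1) <= max_slope

def filter_detections(boxes, segments):
    """Keep a detection box (x, y, w, h) only if it contains at least
    one horizontal segment shorter than the box width (step 5 discards
    the boxes that fail this comparison)."""
    kept = []
    for (x, y, w, h) in boxes:
        for seg in segments:
            if not is_horizontal(seg):
                continue
            (x1, y1), (x2, y2) = seg
            inside = (x <= min(x1, x2) and max(x1, x2) <= x + w
                      and y <= min(y1, y2) and max(y1, y2) <= y + h)
            if inside and abs(x2 - x1) < w:
                kept.append((x, y, w, h))
                break
    return kept
```

In practice the segments would come from the contour and line detection of steps 1) and 2); here they are plain coordinate pairs so the filtering logic stands alone.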
5. DBN network processing
The present invention uses a deep learning method to decide whether a region contains a preceding vehicle or pedestrian. Softmax regression is applied at the top of the model, and back-propagation is then used to tune the parameters; that is, the model is a deep learning model composed of several unsupervised layers and one supervised layer.
The DBN network consists of the following parts: RBMs first extract the features, the Softmax regression algorithm then classifies the extracted features, and finally BP feedback adjusts the whole DBN network so that it achieves good recognition performance.
In the DBN, each layer undergoes unsupervised RBM training. The output of the first layer serves as the input for training the second-layer RBM, and so on up to the last RBM. During this process each RBM layer updates its parameters with the contrastive divergence algorithm, finally completing the extraction of the feature information. The present invention uses three RBM layers for feature extraction.
At the top of the DBN, the softmax algorithm classifies the data obtained from the unsupervised training; this is the supervised learning step.
BP performs feedback fine-tuning on the whole deep learning network to improve the learning effect and bring the parameters of the DBN network to an optimal state.
The core of this vehicle and pedestrian recognition and detection is deep learning. The present invention uses a hidden section composed of three RBM layers to extract features from the raw data, so that the data can be classified more efficiently by the classification algorithm, namely the final Softmax regression layer.
The specific implementation is as follows: the regions detected by the horizontal-line matching module are converted to image form, scaled to a specified size, adaptively binarized, and then scanned in raster order into arrays. Each array is passed through the DBN network, whose output decides whether the region contains a preceding vehicle or pedestrian; if it does, the region is marked.
Introducing the DBN network gives better results when judging the preceding vehicles and pedestrians detected in the video.
For real-time performance, the previously trained DBN parameters are loaded directly into the program to initialize the DBN network, so that the network can be constructed quickly.
5.1 Data collection
Before deep learning training, the required samples must be collected: positive samples (vehicles or pedestrians) and negative samples (non-vehicles or non-pedestrians). Positive samples contain only vehicles or pedestrians; negative samples contain anything that may appear on the road but no vehicles or pedestrians.
5.2 Sample processing
Because of the fixed input size of the deep network, all collected sample images are scaled to a uniform 32*32 size and converted to grayscale, turning each image into a 32*32 matrix of grayscale pixel values. Since training the DBN requires feeding in a large amount of data at once, this embodiment converts each 32*32 matrix in raster order into a 1*1024 one-dimensional array and stacks all samples into one large two-dimensional matrix.
Non-vehicle and non-pedestrian images are processed in the same way, and the processed data are stored in a file for the next use.
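Assuming NumPy arrays, the grayscale-and-flatten processing of section 5.2 might look like the following sketch; the grayscale weights are an assumption, since the patent does not specify them:

```python
import numpy as np

def to_gray(rgb):
    """Luminance-style grayscale conversion. The exact weights used in
    the patent are not specified; these are the common Rec.601 ones."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def to_training_matrix(rgb_images):
    """Flatten each 32x32 grayscale image in raster (row-major) order
    into a 1x1024 row and stack all samples into one 2-D matrix, one
    row per sample, as section 5.2 describes."""
    return np.vstack([to_gray(img).reshape(1, 32 * 32) for img in rgb_images])
```

The resulting matrix is what would be written to the sample file and fed to the DBN in one batch.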
5.3 Training-set and test-set generation module
To ensure effective DBN training, the present invention interleaves all vehicle with non-vehicle samples and all pedestrian with non-pedestrian samples; this not only trains the DBN network effectively but also simplifies label assignment.
5.4 RBM-layer training module
In a restricted Boltzmann machine, one layer is the visible layer and the other the hidden layer; there are no connections within a layer, while the visible and hidden layers are fully connected. The states of the units within each layer are mutually independent, and Gibbs sampling is used to overcome the instability of the RBM distribution that this independence causes.
Let c denote the visible-layer data, converted from the image information; h denotes the hidden-layer data, the features extracted from the image; W denotes the weights between the layers; and a and b denote the biases of c and h, respectively.
Introducing the contrastive divergence algorithm into the RBM model greatly improves RBM performance, in both training quality and training speed. The specific steps are:
a. Use the existing visible-layer training samples to compute the activation probabilities of the hidden units and obtain the hidden-layer data h;
b. Use h to reconstruct the visible layer c', then use c' to construct the hidden layer h';
c. Update the weights.
Contrastive divergence is used to train each RBM layer. The remaining settings are as follows. Weight initialization: each weight is drawn at random from the interval (-1/N, 1/N), where N is the number of input-layer units. Learning rate: 0.1/N. Weight decay: reduced by 0.0002 from the source value. The RBMs are trained layer by layer: the input data first train the first RBM; once it is trained, its output serves as the input for training the second RBM, and so on.
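Steps a to c, written out under the usual binary-unit RBM formulation (the patent lists the steps but not the update formulas, so the formulas below are the standard CD-1 ones), could be sketched in NumPy as one update:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(c, W, a, b, lr):
    """One contrastive-divergence (CD-1) step for a binary RBM.
    c: visible batch (n, nv); W: weights (nv, nh); a, b: biases."""
    # a. hidden probabilities and samples from the visible training data
    ph = sigmoid(c @ W + b)
    h = (rng.random(ph.shape) < ph).astype(float)
    # b. reconstruct the visible layer c', then the hidden layer h'
    pc = sigmoid(h @ W.T + a)
    ph2 = sigmoid(pc @ W + b)
    # c. update weights and biases from positive/negative statistics
    W += lr * (c.T @ ph - pc.T @ ph2) / len(c)
    a += lr * (c - pc).mean(axis=0)
    b += lr * (ph - ph2).mean(axis=0)
    return W, a, b
```

In the layer-by-layer scheme described above, the hidden probabilities of a trained RBM become the visible data for training the next one.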
5.5 Softmax regression module
The softmax regression module addresses the multi-class classification problem. x_i is a data sample, k is the number of classes, and A is the parameter matrix; each sample receives a probability for each of the k classes. Processing the parameters yields the desired classification result. The parameters A are trained to minimize the cost function; they are then updated iteratively, and a weight-decay term is introduced so that the cost function becomes convex, guaranteeing a unique solution and therefore the global optimum. The Softmax regression algorithm is implemented to solve the classification problem, classifying the data by applying Softmax regression to the features extracted layer by layer by the RBMs.
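The class-probability computation at the heart of the Softmax layer can be illustrated as follows; the parameter matrix A and its shape are illustrative assumptions:

```python
import numpy as np

def softmax_probs(A, x):
    """Probability of each of the k classes for feature vector x,
    given a parameter matrix A of shape (k, len(x)). Scores are
    shifted by their maximum for numerical stability."""
    scores = A @ x
    e = np.exp(scores - scores.max())
    return e / e.sum()
```

The predicted class is simply the index of the largest probability; during training, A would be adjusted iteratively (with a weight-decay term) to minimize the cost over the labeled RBM features.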
5.6 BP feedback fine-tuning module
The weight parameters are updated with the back-propagation method. When BP feedback adjustment is performed, the DBN network has already computed the activation values of all RBM nodes, including the output of the final softmax layer. For each layer of the network, the output cannot fully reach the state the target requires, so the computation proceeds backwards layer by layer from the last layer of the DBN, and the parameters of each layer are updated from that layer's residual and its input data. Let a_l be the output of layer l and δ_l its residual; the residual δ_(l-1) of layer l-1 can be computed from δ_l, and a_(l-1), the output of layer l-1, is the input of layer l. The update of the parameter A is computed from a_(l-1) and δ_l.
The features extracted by the RBMs are transformed back towards the source data layer by layer; the back-propagation algorithm fine-tunes the whole neural network through feedback to reduce this loss of information while improving the accuracy of feature extraction.
6. Coordinate transformation
After solving the constraint equations for the camera's external parameters, the relative relationship among the camera, image, and vehicle-body coordinate systems is completely determined, so radar scan points can be projected onto the image pixel coordinate system through the camera model. The transformation between an object point P(x_v, y_v, z_v) in the vehicle-body coordinate system and its image point p(u, v) in the image pixel coordinate system is established. The pixel-level data fusion model is:
(u, v, l)^T = K(R_c(x_v, y_v, z_v)^T + T_c)    (1)
where the camera's internal parameter matrix is
K = [f_x 0 u_0; 0 f_y v_0; 0 0 1]
In the above model, l is the coefficient of (u, v) written in homogeneous coordinates, R_c is the rotation matrix and T_c the translation vector of the camera's external parameters, f_x and f_y are the equivalent focal lengths in the x and y directions, and (u_0, v_0) are the coordinates of the image pixel center. This formula completes the spatial data fusion, whose first purpose is to unify the camera coordinate system, the image pixel coordinate system, and the vehicle-body coordinate system. Unifying and establishing these coordinate systems helps the environment-perception sensors measure the specific distance and bearing of the environment and obstacles, and calibrating the camera's external parameters enables the inverse perspective projection transformation, providing important parameters for visual vehicle navigation.
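Formula (1) can be exercised numerically. The intrinsic and extrinsic values below are made up for illustration; the pinhole form of K follows the f_x, f_y, u_0, v_0 definitions in the text:

```python
import numpy as np

def project(K, Rc, Tc, P_vehicle):
    """Project a vehicle-body point via (u, v, l)^T = K(Rc P + Tc);
    dividing by the homogeneous coefficient l gives pixel (u, v)."""
    uvl = K @ (Rc @ P_vehicle + Tc)
    return uvl[0] / uvl[2], uvl[1] / uvl[2]

# Illustrative parameters: identity rotation, camera offset 1.2 m in y
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
Rc = np.eye(3)
Tc = np.array([0.0, 1.2, 0.0])
```

A point on the optical axis 10 m ahead, for example, lands at the horizontal image center u = u_0 = 320, as expected for this K.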
7. Kalman filtering
The present invention uses Kalman filtering to predict the change of vehicle speed over the next time step, which feeds into the subsequent safe-distance calculation.
First, a third-order Kalman filter predicts the valid target information obtained in the current cycle. The state x_n = [d_(n,e), v_(n,e), a_(n,e)]^T is selected, where d_(n,e), v_(n,e), and a_(n,e) are the processed Y-direction relative distance, velocity, and acceleration between the "forward vehicle or pedestrian" and the host vehicle in the n-th detection cycle. The predicted target state for the next cycle is given by formula (2).
In formula (2), t is the cycle period, taken as 0.05 s; d_(n+1)|n, v_(n+1)|n, and a_(n+1)|n are the predicted Y-direction relative distance, velocity, and acceleration between the valid target and the host vehicle in the (n+1)-th detection cycle. The prediction from formula (2) is checked for consistency against the candidate target information of that cycle using the criterion of formula (3).
In formula (3), d_(n+1), v_(n+1), and a_(n+1) are the Y-direction relative distance, velocity, and acceleration between the candidate "forward vehicle or pedestrian" and the host vehicle in the (n+1)-th detection cycle, and d_0, v_0, and a_0 are the maximum allowed errors in the Y-direction relative distance, velocity, and acceleration. d_0, v_0, and a_0 are determined mainly by the measurement error and the prediction error of formula (2); in the present invention they are taken as:
[d_0 v_0 a_0]^T = [3 2 0.25]^T    (4)
If the information of the candidate "forward vehicle or pedestrian" obtained in the (n+1)-th cycle satisfies formula (3), the candidate target is considered consistent with the valid target of the n-th cycle, and the target information is updated accordingly.
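Formula (2) is not reproduced in this text, but a constant-acceleration propagation of the state [d, v, a], together with the gating test of formula (3), can be sketched as follows; the kinematic form of the prediction is our assumption:

```python
T = 0.05                   # detection cycle period, s (from the text)
GATE = (3.0, 2.0, 0.25)    # d0, v0, a0 from formula (4)

def predict(state, t=T):
    """Constant-acceleration propagation of [d, v, a] one cycle ahead
    (assumed form of formula (2))."""
    d, v, a = state
    return (d + v * t + 0.5 * a * t * t, v + a * t, a)

def consistent(pred, meas, gate=GATE):
    """Formula (3): the candidate matches the valid target when every
    component error is within its allowed bound."""
    return all(abs(p - m) <= g for p, m, g in zip(pred, meas, gate))
```

A candidate that passes `consistent` would be accepted as the same target and used to update the filter; one that fails would be rejected as a spurious detection.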
8. Since the present invention can compute the speed, acceleration, and other data of the "forward vehicle or pedestrian", the safe-distance algorithm is designed to match drivers' habits. For example, if the "forward vehicle or pedestrian" is moving much faster than the host vehicle, the safe distance should be reduced appropriately; likewise, if the "forward vehicle or pedestrian" is travelling with a relatively large positive acceleration, the safe distance should also be reduced. In summary, the "vehicle-to-vehicle or vehicle-to-pedestrian time headway" is inversely proportional to the relative speed of the two parties and inversely proportional to the acceleration of the "forward vehicle or pedestrian". The safe headway t' is computed as:
t' = T - a'*v_re - b'*ar_1    (5)
where T is a fixed value of 1.5 s and v_re is the "vehicle-vehicle or vehicle-pedestrian" relative speed, v_re = vr_1 - v_1; vr_1 is the predicted speed of the "forward vehicle or pedestrian" 1 s ahead, v_1 is the predicted speed of the host vehicle 1 s ahead, ar_1 is the predicted acceleration of the "forward vehicle or pedestrian" 1 s ahead, and a' and b' are positive constants.
Since the headway cannot be negative and must not be too large, a saturation function is applied to make the safe headway more reasonable, giving the saturated safe headway t' of formula (6).
t'_min is the minimum and t'_max the maximum of the allowed safe headway. The final safe-distance formula is:
safdis = v_1*t' + d'    (7)
The safe headway is implemented as follows:
Step 1: compute the "vehicle-vehicle or vehicle-pedestrian" headway; if it exceeds 2.2 s, set it to 2.2 s; if it is below 0.2 s, set it to 0.2 s.
t' = 1.5 - 0.2*v_re - 1.2*ar_1;
if t' > 2.2
    t' = 2.2;
end
if t' < 0.2
    t' = 0.2;
end
Step 2: compute the safe distance.
safdis = v_1*t' + 2.5;
Formulas (6) and (7) dynamically adjust the safe distance between the "two vehicles or vehicle and pedestrian" according to real-time data on the motion of the host vehicle and the "forward vehicle or pedestrian", giving the safe-distance calculation stronger predictive power and better resistance to disturbances, and ultimately improving the stability and dynamic performance of the safe headway when the external environment changes.
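Steps 1 and 2 above translate directly; the constants follow the listing (a' = 0.2, b' = 1.2, T = 1.5 s, d' = 2.5 m, limits 0.2 to 2.2 s):

```python
def safe_distance(v_1, v_re, ar_1):
    """Safe headway t' per formula (5), clamped per formula (6),
    then safe distance per formula (7).
    v_1: host speed 1 s ahead; v_re: relative speed; ar_1: forward
    target's acceleration 1 s ahead (all as defined in the text)."""
    t = 1.5 - 0.2 * v_re - 1.2 * ar_1
    t = min(max(t, 0.2), 2.2)    # saturate to [t'_min, t'_max]
    return v_1 * t + 2.5
```

For example, at a host speed of 10 m/s with zero relative speed and acceleration, the headway stays at 1.5 s and the safe distance is 17.5 m; a large closing speed drives the headway down to the 0.2 s floor.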
As non-rigid objects, vehicles and pedestrians vary in appearance from one instance to another, but some features remain invariant across these variations. By adaptively binarizing the vehicle and pedestrian images, the present invention preserves the characteristic features of vehicles and pedestrians well, sufficiently to distinguish them from other objects, and uses the processed results to train the DBN network for feature extraction and better recognition.
The present invention applies deep learning through a DBN network built by stacking multiple RBMs, extracting features layer by layer, using a Softmax regression layer supervised with labels at the final stage, and finally adjusting the whole network through BP feedback. After the forward vehicles and pedestrians have been recognized efficiently and accurately, their relative positions are determined by coordinate transformation, the vehicle-vehicle and vehicle-pedestrian distances at the next time step are predicted by Kalman filtering, and the safe distance is regulated by the safe "vehicle-to-vehicle, vehicle-to-pedestrian" headway, improving driving safety.
Those skilled in the art can make various corresponding changes and modifications according to the above technical solutions and concepts, and all such changes and modifications shall fall within the protection scope of the claims of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710595388.8A CN107886043B (en) | 2017-07-20 | 2017-07-20 | Vision-aware anti-collision early warning system and method for forward-looking vehicles and pedestrians of automobile |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710595388.8A CN107886043B (en) | 2017-07-20 | 2017-07-20 | Vision-aware anti-collision early warning system and method for forward-looking vehicles and pedestrians of automobile |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107886043A true CN107886043A (en) | 2018-04-06 |
CN107886043B CN107886043B (en) | 2022-04-01 |
Family
ID=61780473
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710595388.8A Expired - Fee Related CN107886043B (en) | 2017-07-20 | 2017-07-20 | Vision-aware anti-collision early warning system and method for forward-looking vehicles and pedestrians of automobile |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107886043B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108549880A (en) * | 2018-04-28 | 2018-09-18 | 深圳市商汤科技有限公司 | Collision control method and device, electronic equipment and storage medium |
CN109405824A (en) * | 2018-09-05 | 2019-03-01 | 武汉契友科技股份有限公司 | A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile |
CN110275168A (en) * | 2019-07-09 | 2019-09-24 | 厦门金龙联合汽车工业有限公司 | A kind of multi-targets recognition and anti-collision early warning method and system |
CN110928286A (en) * | 2018-09-19 | 2020-03-27 | 百度在线网络技术(北京)有限公司 | Method, apparatus, medium, and system for controlling automatic driving of vehicle |
CN111353453A (en) * | 2020-03-06 | 2020-06-30 | 北京百度网讯科技有限公司 | Obstacle detection method and apparatus for vehicle |
CN112356815A (en) * | 2020-12-01 | 2021-02-12 | 吉林大学 | Pedestrian active collision avoidance system and method based on monocular camera |
CN112498342A (en) * | 2020-11-26 | 2021-03-16 | 潍柴动力股份有限公司 | Pedestrian collision prediction method and system |
CN112815907A (en) * | 2021-01-22 | 2021-05-18 | 北京峰智科技有限公司 | Vehicle wading monitoring method, device and system, computer equipment and storage medium |
WO2022148143A1 (en) * | 2021-01-08 | 2022-07-14 | 华为技术有限公司 | Target detection method and device |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050066275A1 (en) * | 2003-09-23 | 2005-03-24 | Gannon Aaron James | Methods and apparatus for displaying multiple data categories |
JP2010134866A (en) * | 2008-12-08 | 2010-06-17 | Toyota Motor Corp | Facial part detection apparatus |
CN102096803A (en) * | 2010-11-29 | 2011-06-15 | 吉林大学 | Safe state recognition system for people on basis of machine vision |
CN102765365A (en) * | 2011-05-06 | 2012-11-07 | 香港生产力促进局 | Pedestrian detection method based on machine vision and pedestrian anti-collision early warning system |
US20140002657A1 (en) * | 2012-06-29 | 2014-01-02 | Lg Innotek Co., Ltd. | Forward collision warning system and forward collision warning method |
CN103778432A (en) * | 2014-01-08 | 2014-05-07 | 南京邮电大学 | Human being and vehicle classification method based on deep belief net |
CN104504395A (en) * | 2014-12-16 | 2015-04-08 | 广州中国科学院先进技术研究所 | Method and system for achieving classification of pedestrians and vehicles based on neural network |
CN104657752A (en) * | 2015-03-17 | 2015-05-27 | 银江股份有限公司 | Deep learning-based safety belt wearing identification method |
CN106295459A (en) * | 2015-05-11 | 2017-01-04 | 青岛若贝电子有限公司 | Based on machine vision and the vehicle detection of cascade classifier and method for early warning |
CN106679672A (en) * | 2017-01-15 | 2017-05-17 | 吉林大学 | AGV (Automatic Guided Vehicle) location algorithm based on DBN (Dynamic Bayesian Network) and Kalman filtering algorithm |
-
2017
- 2017-07-20 CN CN201710595388.8A patent/CN107886043B/en not_active Expired - Fee Related
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050066275A1 (en) * | 2003-09-23 | 2005-03-24 | Gannon Aaron James | Methods and apparatus for displaying multiple data categories |
JP2010134866A (en) * | 2008-12-08 | 2010-06-17 | Toyota Motor Corp | Facial part detection apparatus |
CN102096803A (en) * | 2010-11-29 | 2011-06-15 | 吉林大学 | Safe state recognition system for people on basis of machine vision |
CN102765365A (en) * | 2011-05-06 | 2012-11-07 | 香港生产力促进局 | Pedestrian detection method based on machine vision and pedestrian anti-collision early warning system |
US20140002657A1 (en) * | 2012-06-29 | 2014-01-02 | Lg Innotek Co., Ltd. | Forward collision warning system and forward collision warning method |
CN103778432A (en) * | 2014-01-08 | 2014-05-07 | 南京邮电大学 | Human being and vehicle classification method based on deep belief net |
CN104504395A (en) * | 2014-12-16 | 2015-04-08 | 广州中国科学院先进技术研究所 | Method and system for achieving classification of pedestrians and vehicles based on neural network |
CN104657752A (en) * | 2015-03-17 | 2015-05-27 | 银江股份有限公司 | Deep learning-based safety belt wearing identification method |
CN106295459A (en) * | 2015-05-11 | 2017-01-04 | 青岛若贝电子有限公司 | Based on machine vision and the vehicle detection of cascade classifier and method for early warning |
CN106679672A (en) * | 2017-01-15 | 2017-05-17 | 吉林大学 | AGV (Automatic Guided Vehicle) location algorithm based on DBN (Dynamic Bayesian Network) and Kalman filtering algorithm |
Non-Patent Citations (2)
Title |
---|
PENGCHENG DING 等: "Anti-Collision Warning Algorithm based on Visual Perception in Front of Vehicle", 《INTERNATIONAL CONFERENCE ON MECHANICAL, ELECTRONIC, CONTROL AND AUTOMATION ENGINEERING》 * |
付洋 等: "一种基于视频的道路行人检测方法", 《电视技术》 * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019206272A1 (en) * | 2018-04-28 | 2019-10-31 | 深圳市商汤科技有限公司 | Collision control method and apparatus, and electronic device and storage medium |
CN108549880A (en) * | 2018-04-28 | 2018-09-18 | 深圳市商汤科技有限公司 | Collision control method and device, electronic equipment and storage medium |
US11308809B2 (en) | 2018-04-28 | 2022-04-19 | Shenzhen Sensetime Technology Co., Ltd. | Collision control method and apparatus, and storage medium |
CN109405824A (en) * | 2018-09-05 | 2019-03-01 | 武汉契友科技股份有限公司 | A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile |
CN110928286A (en) * | 2018-09-19 | 2020-03-27 | 百度在线网络技术(北京)有限公司 | Method, apparatus, medium, and system for controlling automatic driving of vehicle |
CN110928286B (en) * | 2018-09-19 | 2023-12-26 | 阿波罗智能技术(北京)有限公司 | Method, apparatus, medium and system for controlling automatic driving of vehicle |
CN110275168A (en) * | 2019-07-09 | 2019-09-24 | 厦门金龙联合汽车工业有限公司 | A kind of multi-targets recognition and anti-collision early warning method and system |
CN110275168B (en) * | 2019-07-09 | 2021-05-04 | 厦门金龙联合汽车工业有限公司 | Multi-target identification and anti-collision early warning method and system |
CN111353453B (en) * | 2020-03-06 | 2023-08-25 | 北京百度网讯科技有限公司 | Obstacle detection method and device for vehicle |
CN111353453A (en) * | 2020-03-06 | 2020-06-30 | 北京百度网讯科技有限公司 | Obstacle detection method and apparatus for vehicle |
CN112498342A (en) * | 2020-11-26 | 2021-03-16 | 潍柴动力股份有限公司 | Pedestrian collision prediction method and system |
CN112356815A (en) * | 2020-12-01 | 2021-02-12 | 吉林大学 | Pedestrian active collision avoidance system and method based on monocular camera |
WO2022148143A1 (en) * | 2021-01-08 | 2022-07-14 | 华为技术有限公司 | Target detection method and device |
CN112815907A (en) * | 2021-01-22 | 2021-05-18 | 北京峰智科技有限公司 | Vehicle wading monitoring method, device and system, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107886043B (en) | 2022-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107886043A (en) | The vehicle front-viewing vehicle and pedestrian anti-collision early warning system and method for visually-perceptible | |
CN108638999B (en) | Anti-collision early warning system and method based on 360-degree look-around input | |
US11527078B2 (en) | Using captured video data to identify pose of a vehicle | |
Wu et al. | Applying a functional neurofuzzy network to real-time lane detection and front-vehicle distance measurement | |
CN106647776B (en) | Method and device for judging lane changing trend of vehicle and computer storage medium | |
CN111133448A (en) | Controlling autonomous vehicles using safe arrival times | |
WO2020264010A1 (en) | Low variance region detection for improved detection | |
WO2009101660A1 (en) | Vehicle periphery monitoring device, vehicle, and vehicle periphery monitoring program | |
CN114418895A (en) | Driving assistance method and device, vehicle-mounted device and storage medium | |
US12131551B2 (en) | Systems and methods for mitigating mis-detections of tracked objects in the surrounding environment of a vehicle | |
CN107229906A (en) | Automobile overtaking early-warning method based on a variance model algorithm | |
CN117111055A (en) | Vehicle state sensing method based on radar-vision fusion | |
US20230230257A1 (en) | Systems and methods for improved three-dimensional data association using information from two-dimensional images | |
Satzoda et al. | Vision-based front and rear surround understanding using embedded processors | |
CN112989956A (en) | Traffic light identification method and system based on region of interest and storage medium | |
CN117416349A (en) | Automatic driving risk pre-judging system and method based on improved YOLOV7-Tiny and SS-LSTM in V2X environment | |
JP6171608B2 (en) | Object detection device | |
CN113688662B (en) | Motor vehicle passing warning method, device, electronic device and computer equipment | |
CN115100251A (en) | Thermal imager and laser radar-based vehicle front pedestrian detection method and terminal | |
Álvarez et al. | Perception advances in outdoor vehicle detection for automatic cruise control | |
US20240346752A1 (en) | Machine learning device and vehicle | |
JP2006127358A (en) | Vehicle road sign detection system | |
US12125215B1 (en) | Stereo vision system and method for small-object detection and tracking in real time | |
Liu | Development of a vision-based object detection and recognition system for intelligent vehicle | |
JP7323716B2 (en) | Image processing device and image processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20220401 |