
CN106874900A - Driver fatigue detection method and detection device based on a steering wheel image - Google Patents


Info

Publication number
CN106874900A
CN106874900A (application CN201710282836.9A)
Authority
CN
China
Prior art keywords
steering wheel
image
frame
angle
driver
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710282836.9A
Other languages
Chinese (zh)
Inventor
李海标
蒋鹏民
黄名柏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN201710282836.9A priority Critical patent/CN106874900A/en
Publication of CN106874900A publication Critical patent/CN106874900A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/19 Recognition using electronic means
    • G06V30/192 Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194 References adjustable by an adaptive method, e.g. learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)

Abstract

The present invention is a driver fatigue detection method and detection device based on steering wheel images. The main steps of the method are: obtain a template image; find the coordinates of the steering wheel centre; select feature points on the steering wheel; determine the coordinate positions of the feature points in the current frame and the next frame; and compute and store the steering wheel rotation angle, from which the zero-speed percentage and the angle standard deviation of the steering wheel over a given time period are calculated. A support vector machine classification algorithm, trained on a sample set, yields a decision function for judging whether the driver is fatigued. During monitoring, steering wheel images are collected in real time; the current zero-speed percentage and angle standard deviation are computed from the steering wheel angles and substituted into the decision function. A result of 1 means the driver is currently fatigued, and an alarm is raised immediately. In the device, a camera is mounted on the cab ceiling and connected to a central processor, which is also connected to an alarm module. The invention is simple to operate, gives accurate measurements, can measure steering wheel rotations of more than one full turn, and therefore has a wider range of application.

Description

A driver fatigue detection method and detection device based on steering wheel images

Technical Field

The invention relates to the field of vehicle driver fatigue detection, and in particular to a driver fatigue detection method and detection device based on steering wheel images.

Background Art

As the number of motor vehicles increases, traffic accidents continue to increase as well. Statistics show that fatigued driving causes about 9,000 deaths per year in China, bringing enormous losses of life and property. Statistics also show that traffic accidents caused by driver fatigue account for about 20% of all traffic accidents and for more than 40% of major traffic accidents. If an early warning could be given at the onset of driver fatigue, traffic accidents would be reduced, which is of great significance to traffic safety.

Much existing driver fatigue detection is based on face-analysis technology. Because the features of the driver's normal and fatigued states must be recognised and classified, the hardware cost is high, the algorithms are complex, real-time performance is poor, and the approach is difficult to implement. Moreover, some drivers' fatigue is not particularly reflected in the face, so the result is strongly affected by individual characteristics.

Research has found that steering wheel angle data can reflect the driver's operating state in real time. When the driver is fatigued, his perception of the environment, his judgement, and his actual control ability all decline, and the steering wheel angle he directly controls fluctuates abnormally.

By detecting real-time changes in the steering wheel angle, the driver's fatigue state can be judged. The key is measuring the steering wheel angle. At present, the steering wheel angle is measured by installing a rotation angle sensor. Representative examples are the invention patent applications No. 201310068176.6, "Fatigue Detection Method and Detection Device Based on Steering Wheel Angle Data", and No. 201510113007.9, "Driver Fatigue State Detection Method Based on Steering Wheel Angle Information". Although these schemes effectively extract fatigue-related features from the steering wheel angle and effectively detect the driver's fatigue state, they have obvious problems:

1. The steering wheel angle sensor must be fixed directly on the steering wheel or on the steering column. The sensor blocks part of the instrument panel, obstructing the driver's view of the instruments; furthermore, a sensor on the steering wheel or its column may interfere with the wheel's rotation and affect normal driving.

2. Existing steering wheel angle sensors have no angle memory, so when the steering wheel turns through more than one full revolution the sensor cannot obtain correct angle data.

3. Steering wheel angle sensors are difficult to install. They usually have to be fixed to the steering wheel or column before the car leaves the factory, are inconvenient to remove, and have poor portability, so they cannot meet the need to detect fatigue in drivers of existing vehicles.

To address this, application No. 201210343216.9, "An Image-Based Vehicle Steering Wheel Angle Detection Method and Detection Device", proposed a scheme that needs no steering wheel angle sensor: two kinds of marker stickers and marker lines are pasted on the back of the steering wheel, and the steering wheel angle is calculated by comparing the image of the wheel's initial state with the image after rotation. However, the marking is too cumbersome: six marker lines divide the steering wheel into six zones, and in each zone triangular and arc-shaped stickers must be pasted according to the rules, which is difficult to implement. The markers are on the back of the steering wheel, where the light is poor, degrading image clarity; and the driver's hands cover some of the markers during operation, which also degrades detection. It is therefore also difficult to put into practice.

Traditional driver fatigue discrimination based on steering wheel angle information uses Fisher's linear discriminant algorithm. This method treats the vehicle as driving in a straight line and uses two indicators, the angle standard deviation σ and the zero-speed percentage PNS, to discriminate the driver's fatigue state with Fisher's linear discriminant. In actual driving, however, the vehicle also follows "S"-shaped paths, overtakes, changes lanes, and so on, so taking an angle standard deviation σ greater than 1.2° as the criterion for fatigue is clearly unreasonable. Hence the accuracy of the traditional driver fatigue discrimination method based on steering wheel angle information is not high.

To date there has been no effective, simple, and practical image-based method for detecting a driver's fatigue state.

Summary of the Invention

The purpose of the present invention is to overcome the defects of the prior art by proposing a driver fatigue detection method based on steering wheel images: a camera collects steering wheel images in real time, a template matching algorithm matches the template image against the real-time image, image processing yields the steering wheel rotation angle, the current zero-speed percentage and angle standard deviation are computed, and a support vector machine algorithm determines the driver's current state. The method is simple to operate, highly practical, and detects the driver's state accurately.

Another object of the present invention is a driver fatigue detection device based on steering wheel images, comprising a central processor, a camera, and an alarm module. The camera is mounted at the top of the cab with the steering wheel inside its field of view; the camera's video signal output line is connected to the device's central processor, which is also connected to the alarm module.

The main steps of the driver fatigue detection method based on steering wheel images designed by the present invention are as follows:

Step I. Obtain the template image

The camera is mounted at the top of the cab, and its video signal output line is connected to the central processor of the detection device. The camera captures an image of the stationary steering wheel, which is stored as the initial image.

The central processor denoises the initial image and stores it as the template image.

Points on the steering wheel circle correspond one-to-one with points on the ellipse in the image captured by the camera. A point m(xm, ym) on the image ellipse corresponds to the point M(xM, yM) on the steering wheel circle through xM = xm, with the minor-axis coordinate rescaled by R/r, where the radius R is the semi-major axis of the ellipse (the length of R is unchanged by the projection) and r is the semi-minor axis.
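A sketch of this correspondence (the R/r scale factor for the minor-axis coordinate is an assumed completion of the patent's formula, chosen to be consistent with R being the unchanged semi-major axis):

```python
import math

def ellipse_to_circle(xm, ym, x0, y0, R, r):
    """Map a point on the image ellipse (centre (x0, y0), semi-major axis R,
    semi-minor axis r) back onto the steering-wheel circle of radius R.
    The major-axis coordinate is unchanged (x_M = x_m, as the patent states);
    the minor-axis coordinate is stretched by R / r to undo the perspective
    foreshortening -- this scale factor is an assumption."""
    xM = xm
    yM = y0 + (ym - y0) * R / r
    return xM, yM
```

With the centre at the origin, a point taken anywhere on the ellipse maps back onto the circle of radius R, which is the property the angle computation in Step IV relies on.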

Step II. Obtain the coordinates o(x0, y0) of the steering wheel centre

The coordinates of the steering wheel centre can be obtained from the steering wheel template image of any type of vehicle produced in Step I.

II-1. Outer contour of the steering wheel

Apply the findContours() function of the OpenCV open-source vision library to the template image obtained in Step I; with appropriate parameter settings this yields the coordinates of the points on the outer contour of the steering wheel image.

II-2. Coordinates of the steering wheel centre

From five coordinate points (xm1, ym1), (xm2, ym2), ..., (xm5, ym5) on the outer contour of the steering wheel image determined in Step II-1, determine the equation of the ellipse and obtain the coordinates o(x0, y0) of the ellipse centre.
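A minimal pure-Python sketch of recovering the ellipse centre from five contour points. The patent does not give the solver; fitting the general conic through the five points and taking its centre is one standard way (OpenCV's fitEllipse is a robust alternative when more contour points are available):

```python
def ellipse_center(points):
    """Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + 1 = 0 through five
    points (assumes the conic does not pass through the origin) and return
    its centre, obtained from the gradient equations
        2a*x0 + b*y0 + d = 0,   b*x0 + 2c*y0 + e = 0."""
    # Build the 5x5 linear system A @ [a, b, c, d, e]^T = -1
    A = [[x * x, x * y, y * y, x, y] for x, y in points]
    rhs = [-1.0] * 5
    # Gaussian elimination with partial pivoting
    for col in range(5):
        piv = max(range(col, 5), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 5):
            f = A[r][col] / A[col][col]
            for c in range(col, 5):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coef = [0.0] * 5
    for r in range(4, -1, -1):
        s = rhs[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 5))
        coef[r] = s / A[r][r]
    a, b, c, d, e = coef
    det = 4 * a * c - b * b
    return (b * e - 2 * c * d) / det, (b * d - 2 * a * e) / det
```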

Step III. Feature point tracking

This method uses the optical flow method: each video frame is matched against the template image to detect the rotation angle of the current steering wheel image. Representative, easily determined feature points are selected on the steering wheel, i.e. points that can still be located unambiguously in the steering wheel image after the wheel has turned. The coordinate position of a feature point in the current frame and its position in the next frame are obtained iteratively; from these two coordinates, the angle through which the feature point turns from the current frame to the next frame, which is the angle through which the steering wheel turns, is calculated.

Step IV. Calculate the angle θ through which the steering wheel turns between two adjacent frames

The calculation of the angle θ below uses the position coordinates of feature point A on the steering wheel circle.

Step II determined the steering wheel centre o(x0, y0), and Step III determined the coordinates A0(xM01, yM01) of feature point A in the current frame. After the steering wheel turns through an angle θ, the coordinates A1(xM11, yM11) of the feature point in the next frame, corresponding to A0(xM01, yM01), are determined.

The angle between the lines joining the centre o(x0, y0) to A0(xM01, yM01) and to A1(xM11, yM11) is θ1. By the law of cosines, θ1 = arccos[(|oA0|² + |oA1|² − |A0A1|²) / (2·|oA0|·|oA1|)], which is the rotation angle of feature point A.

The side lengths |oA0|, |oA1| and |A0A1| are calculated with the two-point distance formula, e.g. |oA0| = √((xM01 − x0)² + (yM01 − y0)²).
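The two formulas above can be sketched together as:

```python
import math

def rotation_angle(o, p0, p1):
    """Angle (degrees) at centre o between rays o->p0 and o->p1, via the law
    of cosines: cos(theta) = (|op0|^2 + |op1|^2 - |p0p1|^2) / (2*|op0|*|op1|).
    Distances come from the two-point distance formula."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    a, b, c = d(o, p0), d(o, p1), d(p0, p1)
    cos_t = (a * a + b * b - c * c) / (2 * a * b)
    # Clamp against floating-point drift before taking arccos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
```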

Step V. Rotation angle of the feature point in the steering wheel image

Let the steering wheel centre be o(x0, y0), and let the coordinates of the feature point in frame k and frame k+1 be Dk(xMk, yMk) and Dk+1(xM(k+1), yM(k+1)) respectively; the rotation angle of the feature point between two adjacent frames is less than 90°.

When yMk ≥ y0 and yM(k+1) ≥ y0: if xM(k+1) ≥ xMk, frame k+1 has turned clockwise relative to frame k; if xM(k+1) ≤ xMk, frame k+1 has turned counter-clockwise relative to frame k.

When yMk ≤ y0 and yM(k+1) ≤ y0: if xM(k+1) ≥ xMk, frame k+1 has turned counter-clockwise relative to frame k; if xM(k+1) ≤ xMk, frame k+1 has turned clockwise relative to frame k.

When yMk ≥ y0 and yM(k+1) ≤ y0: if xMk ≥ x0 and xM(k+1) ≥ x0, frame k+1 has turned clockwise relative to frame k; if xMk ≤ x0 and xM(k+1) ≤ x0, frame k+1 has turned counter-clockwise relative to frame k.

When yMk ≤ y0 and yM(k+1) ≥ y0: if xMk ≥ x0 and xM(k+1) ≥ x0, frame k+1 has turned counter-clockwise relative to frame k; if xMk ≤ x0 and xM(k+1) ≤ x0, frame k+1 has turned clockwise relative to frame k.

This method defines θ as negative when the steering wheel in the next frame has turned clockwise relative to the current frame, and as positive when it has turned counter-clockwise. The rotation angle of each successive frame of the steering wheel image is accumulated. When the accumulated value over some period exceeds 360°, the steering wheel has turned counter-clockwise through more than one full revolution relative to the template image; when it falls below −360°, the steering wheel has turned clockwise through more than one full revolution during that period.
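A sketch of this sign convention and accumulation, using atan2 rather than the case analysis above (equivalent when the per-frame rotation stays below 90°; it assumes a y-up coordinate frame, so the sign must be flipped for y-down image coordinates):

```python
import math

def signed_step(o, p_cur, p_next):
    """Signed rotation (degrees) of a feature point about centre o from the
    current frame to the next: counter-clockwise positive, clockwise negative
    (the patent's convention), assuming a y-up coordinate frame."""
    a0 = math.atan2(p_cur[1] - o[1], p_cur[0] - o[0])
    a1 = math.atan2(p_next[1] - o[1], p_next[0] - o[0])
    step = math.degrees(a1 - a0)
    # Wrap to (-180, 180]; valid because the per-frame rotation is < 90 deg
    while step <= -180.0:
        step += 360.0
    while step > 180.0:
        step -= 360.0
    return step

def accumulate(o, track):
    """Sum per-frame steps over a tracked point; a total beyond +/-360
    indicates more than one full revolution."""
    return sum(signed_step(o, track[k], track[k + 1])
               for k in range(len(track) - 1))
```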

From the stored feature point rotation angle of each frame, the zero-speed percentage and the angle standard deviation of the steering wheel over the rotation period are calculated.

Step VI. Extract the technical indicators: zero-speed percentage and angle standard deviation

When the frequency of steering wheel angle changes decreases within a time t, that is, when the driver corrects the steering wheel less often, the time during which the steering wheel does not move increases. One of the technical indicators of this method, the zero-speed percentage PNS, characterises the extent to which the steering wheel is stationary during the selected time period; t is 50 to 80 seconds.

The zero-speed percentage PNS is defined as:

PNSi = (ni / Ni) × 100%

The sampling frequency of the present invention is 8 to 12 frames per second. PNSi denotes the zero-speed percentage of frame i, Ni the total number of angular velocity samples in the t seconds preceding frame i, and ni the number of samples in those t seconds whose angular velocity lies within ±0.1°/s.

The other technical indicator of this method, the angle standard deviation, is defined as:

σ = √( (1/m2) · Σ j=1..m2 (θj − μ)² )

where θj is the steering wheel rotation angle of frame j, and μ is the mean per-frame steering wheel rotation angle over the m2 frames.
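Both indicators can be sketched as follows (the 10 frame/s rate and 60 s window are assumed values inside the patent's stated 8-12 frame/s and 50-80 s ranges):

```python
import math

def zero_speed_percentage(angles, fps=10, window_s=60, eps=0.1):
    """PNS over the last window_s seconds: percentage of per-frame angular
    velocities (degrees/s) within +/- eps, computed from per-frame steering
    angles sampled at fps frames per second."""
    n = min(len(angles) - 1, fps * window_s)
    speeds = [(angles[-k] - angles[-k - 1]) * fps for k in range(1, n + 1)]
    near_zero = sum(1 for v in speeds if abs(v) <= eps)
    return 100.0 * near_zero / len(speeds)

def angle_std(angles):
    """sigma = sqrt(mean((theta_j - mu)^2)) over the given frames."""
    mu = sum(angles) / len(angles)
    return math.sqrt(sum((t - mu) ** 2 for t in angles) / len(angles))
```

A stationary wheel gives PNS = 100% and σ = 0; frequent small corrections drive PNS down.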

Step VII. Support vector machine classification algorithm

Combining the zero-speed percentage PNS and the angle standard deviation σ obtained in Step VI, a support vector machine classification algorithm judges whether the driver is fatigued.

The support vector machine classification algorithm first trains on the sample set and finds, in the sample space, a hyperplane that separates samples of different classes. The hyperplane corresponds to the model ωᵀx + b = 0, where the model parameter ω is the normal vector, b is the displacement term, T denotes transpose, and x is the variable composed of the zero-speed percentage and the angle standard deviation. The decision function obtained from the hyperplane model is

f(x) = sgn[(ω* × x) + b*]

where sgn[(ω* × x) + b*] is the sign function: when ω* × x + b > 0, f(x) = 1; when ω* × x + b = 0, f(x) = 0; when ω* × x + b < 0, f(x) = −1.

From W training samples of the vehicle driving on the different kinds of road sections described above, with W being 80 to 150, the support vector machine classification algorithm learns the decision function f(x) that distinguishes an alert driver from a fatigued one.

During actual driving, the camera collects steering wheel images in real time; processing yields the steering wheel angle, from which the current zero-speed percentage and angle standard deviation are computed and substituted into the learned decision function f(x). If f(x) = 1, the driver is currently fatigued and the system raises an alarm immediately.
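A sketch of the monitoring decision in kernel form, following the alarm rule f(x) = 1 above. The support set below is hypothetical; real multipliers, labels and the bandwidth σ1 come from training on the W samples:

```python
import math

def gaussian_kernel(u, v, sigma1=1.0):
    """K(u, v) = exp(-||u - v||^2 / (2 * sigma1^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-sq / (2.0 * sigma1 * sigma1))

def decide(x, support, sigma1=1.0, b=0.0):
    """Kernel form of the decision function
        f(x) = sgn( sum_t alpha_t * y_t * K(x_t, x) + b ).
    `support` is a list of (alpha_t, y_t, x_t) triples.  Returns 1
    (fatigued), -1 (alert), or 0 on the boundary."""
    s = sum(a * y * gaussian_kernel(xt, x, sigma1) for a, y, xt in support) + b
    return (s > 0) - (s < 0)

# Hypothetical support set: a high-PNS / high-sigma sample labelled fatigued
# (+1) and a low-PNS / low-sigma sample labelled alert (-1).
support = [(1.0, 1, (80.0, 3.0)), (1.0, -1, (20.0, 0.5))]
```

An observation near the fatigued support vector yields f(x) = 1 and would trigger the alarm.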

The specific execution of Step I comprises the following steps:

I-1. Camera installation

The camera is mounted at the top of the cab so that the entire steering wheel lies within its field of view; the camera is adjusted so that the image is sharp and fills the frame. The collected image data are transmitted to the central processor.

I-2. Acquisition of the initial steering wheel image

Before the vehicle starts, with the steering wheel in its initial state, the camera captures an image of the stationary steering wheel, which is stored as the initial image.

I-3. Template image

The central processor denoises the initial image obtained in Step I-2 with a 3×3-window median filter, which removes impulse noise and salt-and-pepper noise while preserving the edge detail of the image.
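A pure-Python sketch of the 3×3 median filter; in practice this is a single OpenCV call, cv2.medianBlur(image, 3):

```python
def median3x3(img):
    """3x3 median filter over a 2-D list of pixel values (what
    cv2.medianBlur(img, 3) computes); border pixels are left unchanged
    here for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 neighbourhood values
    return out
```

A single salt-noise pixel is replaced by the neighbourhood median, while step edges (which occupy a majority of the window) survive.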

Before the camera captures the initial steering wheel image in Step I-2, a sheet of white paper is placed under the steering wheel so that the wheel's projection falls on the paper, yielding an initial image with a white background.

The specific process of feature point tracking in Step III is as follows:

III-1. Processing of the current image

In this method the camera's capture rate is 12 to 16 frames per second, and the central processor's sampling rate is lower, 9 to 12 frames per second.

The central processor applies the same denoising as in Step I-3 to the steering wheel images collected by the camera in real time.

III-2. Feature points of the current frame

Shi-Tomasi feature point detection from OpenCV is applied to the current image processed in Step III-1: the goodFeaturesToTrack() function of the OpenCV open-source vision library detects the feature point of the current frame, whose converted position is A0(xM01, yM01).

III-3. Feature points of the next frame

The cvCalcOpticalFlowPyrLK() function of the OpenCV open-source vision library is called with the feature point position A0(xM01, yM01) of the current frame obtained in Step III-2 as input; it outputs the converted position A1(xM11, yM11) of that feature point in the next frame.

In Step III another representative, easily determined feature point B is also selected on the steering wheel; the angle between the lines joining A and B to the centre is greater than 30 degrees.

Step III-2 simultaneously determines the current-frame position B0(xM02, yM02) of feature point B.

Step III-3 determines, by the same method, the position B1(xM12, yM12) in the next frame of the feature point at B0(xM02, yM02).

The angle between the lines joining the centre o(x0, y0) to B0(xM02, yM02) and to B1(xM12, yM12) is θ2. By the law of cosines, θ2 = arccos[(|oB0|² + |oB1|² − |B0B1|²) / (2·|oB0|·|oB1|)], which is the rotation angle of feature point B; the side lengths are calculated with the two-point distance formula.

The final steering wheel rotation angle θ is the mean of the rotation angles of the two feature points A and B: θ = (θ1 + θ2) / 2.

Tracking the rotation angles of two feature points makes the detection more accurate. In addition, when one feature point's rotation cannot be captured because the driver's hands occlude it, i.e. tracking fails, the other feature point still serves the tracking purpose; in that case the steering wheel rotation angle θ is simply the rotation angle of that feature point.

The specific steps of the support vector machine classification algorithm of Step VII are as follows:

VII-1. Collection of training samples

At least 5 drivers with five or more years of driving experience and at least 5 vehicles in good condition are selected; each driver randomly picks one vehicle. Samples are collected discretely under different driving conditions (straight-line driving, "S"-shaped routes, and overtaking), with the driver in both alert and fatigued states: (x1, y1), ..., (xW, yW). Here x1 = (x11, x12)ᵀ, where x11 and x12 are the first sample's first indicator, the zero-speed percentage PNS, and its second indicator, the angle standard deviation σ; yk1 ∈ Y = {1, −1}, with Y = 1 and Y = −1 denoting the alert and fatigued states respectively; and W is 80 to 150. The driver's fatigue state is assessed by scoring facial video of the driver: the video is divided into segments of 12 to 20 s, at least 5 trained experimenters independently score each segment according to commonly used driver-state features, and the mean of their scores is taken as the judgement of the driver's state.

VII-2. Solving for the parameters

The hyperplane separating the training samples maximises the sum of the distances from the support vectors of the two classes to the hyperplane, the support vectors being the sample points satisfying ωᵀx + b = ±1. Concretely, the parameters ω and b of the hyperplane model ωᵀx + b = 0 are found so that the sum of the distances from the support vectors of the different classes to the hyperplane is maximal, which transforms into:

min(ω, b) ½‖ω‖²

s.t. yt(ωᵀxt + b) ≥ 1, t = 1, 2, ..., n1

where xt = (PNSt, σt) and yt ∈ {1, −1}.

此为凸二次规划问题,采用高效的拉格朗日乘子法得到其对偶,上述模型属于非线性可分,引入一个实现线性映射的高斯核函数,经过变换得到上式的对偶,即:This is a convex quadratic programming problem, and its dual is obtained by using the efficient Lagrange multiplier method. The above model is non-linearly separable. A Gaussian kernel function that realizes linear mapping is introduced, and the dual of the above formula is obtained after transformation, namely:

其中:κ(xt1,xt2)=exp(−‖xt1−xt2‖²/(2σ1²)),σ1为高斯核的带宽,where κ(xt1, xt2) = exp(−‖xt1 − xt2‖²/(2σ1²)), and σ1 is the bandwidth of the Gaussian kernel,

xt1、xt2、yt1和yt2代表样本参数;x t1 , x t2 , y t1 and y t2 represent sample parameters;

C为正则化常数,t1、t2=1,2,...,n1; C is a regularization constant, t1, t2=1, 2,..., n1;

求解出最优拉格朗日乘子α*=(α1*,α2*,...,αn1*)T,计算最优法向量ω*=Σt=1..n1 αt*ytxt;选择最优拉格朗日乘子α*中的一个小于正则化常数C的正分量αt1*,以此计算最优位移项b*=yt1−Σt2=1..n1 αt2*yt2κ(xt2,xt1)。Solve for the optimal Lagrange multipliers α* = (α1*, α2*, ..., αn1*)T and compute the optimal normal vector ω* = Σt=1..n1 αt*·yt·xt; select one positive component αt1* of α* that is smaller than the regularization constant C, and use it to compute the optimal displacement term b* = yt1 − Σt2=1..n1 αt2*·yt2·κ(xt2, xt1).

Ⅶ-3、构造超平面(ω*×x)+b*=0,由此求出的决策函数VII-3. Construct a hyperplane (ω * ×x) + b * = 0, and the decision function obtained from it

f(x)=sgn[(ω*×x)+b*]。f(x)=sgn[(ω * ×x)+b * ].

根据上述本发明的基于方向盘图像的司机疲劳检测方法,本发明设计了一种基于方向盘图像的司机疲劳检测装置,包括中心处理器、摄像头和报警模块,摄像头安装于驾驶室内的顶部,处于方向盘上方,整个方向盘处于摄像头的取景框内,摄像头的视频信号输出线与本装置的中心处理器连接;所述中心处理器配有存储模块和供电模块,还连接报警模块,所述报警模块为报警灯或蜂鸣器;According to the above driver fatigue detection method based on the steering wheel image, the present invention provides a driver fatigue detection device based on the steering wheel image, comprising a central processor, a camera and an alarm module. The camera is installed at the top of the cab, above the steering wheel, so that the whole steering wheel is within the camera's viewfinder frame, and the video signal output line of the camera is connected to the central processor of the device. The central processor is equipped with a storage module and a power supply module, and is also connected to the alarm module, which is an alarm light or a buzzer;

所述中心处理器经数据传输模块连接图像处理模块,图像处理模块连接摄像头,摄像头采集方向盘图像传送入中心处理器的图像处理模块,图像处理模块对图像视频数据进行处理,经数据传输模块送入存储模块存储,并提取数据送入中心处理器,中心处理器将当前方向盘图像与模板图像比较,得到当前方向盘转动角度,求得当前的零速百分比和角度标准差,代入根据支持向量机分类算法学习得到的决策函数,判断当前司机是否处于疲劳状态;检测到司机处于疲劳状态,中心处理器发送指令到报警模块,其报警灯闪烁或蜂鸣器鸣响。The central processor is connected to the image processing module through the data transmission module, and the image processing module is connected to the camera. The steering wheel images collected by the camera are sent to the image processing module of the central processor, which processes the image and video data; the data are sent through the data transmission module to the storage module for storage, and the extracted data are sent to the central processor. The central processor compares the current steering wheel image with the template image to obtain the current steering wheel rotation angle, calculates the current zero-speed percentage and angle standard deviation, and substitutes them into the decision function learned by the support vector machine classification algorithm to judge whether the current driver is in a fatigue state; when driver fatigue is detected, the central processor sends an instruction to the alarm module, whose alarm light flashes or buzzer sounds.

所述摄像头安装于驾驶室内顶部的N点,N点处于穿过方向盘圆心、垂直于车体纵轴的铅垂面后方,且与该铅垂面的距离小于或等于20cm,N点与穿过方向盘圆心、平行于车体纵轴的铅垂面的距离小于或等于15cm。The camera is installed at point N at the top of the cab. Point N is behind the vertical plane that passes through the centre of the steering wheel and is perpendicular to the longitudinal axis of the vehicle body, at a distance of no more than 20 cm from that plane; the distance between point N and the vertical plane that passes through the centre of the steering wheel and is parallel to the longitudinal axis of the vehicle body is no more than 15 cm.

与现有技术相比,本发明一种基于方向盘图像的司机疲劳检测方法和检测装置的优点为:1、操作简单,根据方向盘的实时图像对方向盘转角进行实时检测,测量结果准确,且当方向盘转动的角度大于一周时,也可以得到方向盘转动的角度;2、摄像头安装于顶部,不阻挡司机看仪表的视线,也不影响方向盘的旋转,司机的正常驾驶不受妨碍;3、中心处理器采用的OpenCV开源视觉库和支持向量机分类算法均为成熟软件,实用性和鲁棒性较好;4、采用支持向量机分类算法适用范围更广,不再局限于直线行驶的情况下的司机疲劳状态判别;5、摄像头安装拆卸方便,移植性强,实用性好,可在现有各种车辆上安装,满足司机疲劳检测的需要。Compared with the prior art, the advantages of a driver fatigue detection method based on the steering wheel image and the detection device of the present invention are: 1. Simple operation, real-time detection of the steering wheel angle according to the real-time image of the steering wheel, accurate measurement results, and when the steering wheel When the angle of rotation is greater than one circle, the angle of rotation of the steering wheel can also be obtained; 2. The camera is installed on the top, which does not block the driver's sight of the instrument, nor does it affect the rotation of the steering wheel, and the driver's normal driving is not hindered; 3. The central processor The OpenCV open source vision library and support vector machine classification algorithm used are mature software, with good practicability and robustness; 4. The application range of the support vector machine classification algorithm is wider, and it is no longer limited to drivers driving in a straight line. Fatigue state discrimination; 5. The camera is easy to install and disassemble, has strong portability and good practicability, and can be installed on various existing vehicles to meet the needs of driver fatigue detection.

附图说明Description of drawings

图1为本基于方向盘图像的司机疲劳检测装置实施例的摄像头与方向盘安装关系示意图;Fig. 1 is the schematic diagram of the installation relationship between the camera and the steering wheel of the embodiment of the driver fatigue detection device based on the steering wheel image;

图2为本基于方向盘图像的司机疲劳检测装置实施例结构框图;Fig. 2 is the structural block diagram of the embodiment of the driver's fatigue detection device based on the steering wheel image;

图3为本基于方向盘图像的司机疲劳检测方法实施例流程图;Fig. 3 is the embodiment flow chart of the driver's fatigue detection method based on the steering wheel image;

图4为本基于方向盘图像的司机疲劳检测方法实施例步骤Ⅲ所取特征点A和B的示意图;Fig. 4 is a schematic diagram of feature points A and B taken in step III of the embodiment of the driver fatigue detection method based on the steering wheel image;

图5为本基于方向盘图像的司机疲劳检测方法实施例中实际的方向盘特征点位置和摄像头采集的方向盘图像中相应的特征点位置的示意图;5 is a schematic diagram of the actual steering wheel feature point positions and the corresponding feature point positions in the steering wheel image collected by the camera in the embodiment of the driver fatigue detection method based on the steering wheel image;

图6为本基于方向盘图像的司机疲劳检测方法实施例2个特征点的转动角度示意图。FIG. 6 is a schematic diagram of rotation angles of two feature points in the embodiment of the method for detecting driver fatigue based on the steering wheel image.

图中:In the picture:

1-方向盘 2-摄像头。1-Steering wheel 2-Camera.

具体实施方式detailed description

下面结合附图对本发明的具体实施方式作进一步详细说明。The specific implementation manners of the present invention will be described in further detail below in conjunction with the accompanying drawings.

基于方向盘图像的司机疲劳检测装置实施例Embodiment of Driver Fatigue Detection Device Based on Steering Wheel Image

本基于方向盘图像的司机疲劳检测装置实施例,包括中心处理器、摄像头和报警模块,如图1所示,摄像头安装于驾驶室内的顶部,处于方向盘上方,整个方向盘处于摄像头的取景框内,摄像头的视频信号输出线与本装置的中心处理器连接;传送所采集的图像数据。This embodiment of the driver fatigue detection device based on the steering wheel image comprises a central processor, a camera and an alarm module. As shown in Figure 1, the camera is installed at the top of the cab, above the steering wheel, so that the whole steering wheel is within the camera's viewfinder frame; the video signal output line of the camera is connected to the central processor of the device and transmits the collected image data.

如图1所示,摄像头安装于驾驶室内顶部的N点,本例N点处于穿过方向盘圆心、垂直于车体纵轴的铅垂面后方,且与该铅垂面的距离为15cm,N点与穿过方向盘圆心、平行于车体纵轴的铅垂面的距离为5cm。As shown in Figure 1, the camera is installed at point N at the top of the cab. In this example, point N is behind the vertical plane passing through the centre of the steering wheel and perpendicular to the longitudinal axis of the vehicle body, at a distance of 15 cm from that plane; the distance between point N and the vertical plane passing through the centre of the steering wheel and parallel to the longitudinal axis of the vehicle body is 5 cm.

如图2所示,本例中心处理器配有存储模块和供电模块,还连接报警模块,在本实施例中报警模块采用蜂鸣器。本例中心处理器经数据传输模块连接图像处理模块。图像处理模块连接摄像头,摄像头采集的方向盘图像传送到图像处理模块。图像处理模块对图像视频数据进行处理,经数据传输模块送入存储模块存储,并提取数据送入中心处理器,中心处理器将当前方向盘图像与模板图像比较,得到当前方向盘转动角度θ,求得当前的零速百分比PNS和角度标准差σ,代入根据支持向量机分类算法(SVM)学习得到的决策函数f(x)=ωTx+b计算,当f(x)=1说明司机处于疲劳状态,中心处理器发送指令到报警模块,其蜂鸣器鸣响。As shown in Figure 2, the central processor in this example is equipped with a storage module and a power supply module, and is also connected to an alarm module, which in this embodiment is a buzzer. The central processor is connected to the image processing module via the data transmission module; the image processing module is connected to the camera, and the steering wheel images collected by the camera are sent to the image processing module. The image processing module processes the image and video data, which are sent through the data transmission module to the storage module for storage, and the extracted data are sent to the central processor. The central processor compares the current steering wheel image with the template image to obtain the current steering wheel rotation angle θ, calculates the current zero-speed percentage PNS and angle standard deviation σ, and substitutes them into the decision function f(x) = ωTx + b learned by the support vector machine (SVM) classification algorithm; when f(x) = 1 the driver is in a fatigue state, and the central processor sends an instruction to the alarm module, whose buzzer sounds.

基于方向盘图像的司机疲劳检测方法实施例Embodiment of Driver Fatigue Detection Method Based on Steering Wheel Image

本基于方向盘图像的司机疲劳检测方法实施例,其流程如图3所示,具体步骤如下:The embodiment of the driver fatigue detection method based on the steering wheel image, its flow process is as shown in Figure 3, and the specific steps are as follows:

步骤Ⅰ、取得模板图像Step Ⅰ. Get the template image

Ⅰ-1、摄像头安装Ⅰ-1. Camera installation

在本实施例中选用车辆后,摄像头安装于驾驶室内的顶部,使整个方向盘处于摄像头的取景框内,调节摄像头,使图像清晰,与取景框大小配合;摄像头的视频信号输出线与检测装置的中心处理器连接;传送所采集的图像数据。After the vehicle is selected in this embodiment, the camera is installed at the top of the cab so that the whole steering wheel is within the camera's viewfinder frame, and the camera is adjusted so that the image is clear and fits the viewfinder; the video signal output line of the camera is connected to the central processor of the detection device and transmits the collected image data.

Ⅰ-2、方向盘初始图像的采集Ⅰ-2. Acquisition of the initial image of the steering wheel

在车辆启动前,在方向盘下面放一张白纸,方向盘的投影位于白纸上,方向盘处于静止的初始状态时,摄像头采集得到有白色背景的初始状态下的方向盘图像;传送到中心处理器,中心处理器将此作为初始图像存储;Before the vehicle starts, put a piece of white paper under the steering wheel, the projection of the steering wheel is located on the white paper, and when the steering wheel is in a static initial state, the camera collects the steering wheel image in the initial state with a white background; transmits it to the central processor, The central processor stores this as the initial image;

Ⅰ-3、模板图像Ⅰ-3. Template image

中心处理器对摄像头采集到的初始图像进行去噪处理,本方法采用核函数窗3×3的中值滤波进行去噪,在去除脉冲噪声、椒盐噪声的同时保持图像的边缘细节信息。The central processor performs denoising processing on the initial image collected by the camera. This method uses a kernel function window 3×3 median filter for denoising, and maintains the edge details of the image while removing impulse noise and salt and pepper noise.

中心处理器对步骤Ⅰ初始图像完成去噪处理后,存储为模板图像。模板图像效果如图4所示。After the central processor completes denoising processing on the initial image in step I, it is stored as a template image. The template image effect is shown in Figure 4.

方向盘图像是椭圆,在方向盘圆上的点与在图像椭圆上的点一一对应如图5所示,图像椭圆上点m(xm,ym)与方向盘圆上的对应点M(xM,yM)之间的对应关系是xM=xm,yM=y0±√(R²−(xm−x0)²)(正负号与ym−y0的符号相同),其中半径R为椭圆长半轴,且R的长度不变。The steering wheel image is an ellipse, and the points on the steering wheel circle correspond one-to-one to the points on the image ellipse, as shown in Figure 5. The correspondence between a point m(xm, ym) on the image ellipse and the corresponding point M(xM, yM) on the steering wheel circle is xM = xm, yM = y0 ± √(R² − (xm − x0)²) (the sign agreeing with that of ym − y0), where the radius R is the semi-major axis of the ellipse and the length of R is constant.
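The ellipse-to-circle correspondence described above can be sketched in a few lines. This is a minimal sketch, assuming (as the claims state) that the projection leaves x unchanged (xM = xm) and that M lies on the wheel circle of radius R centred at o(x0, y0); the helper name is hypothetical:

```python
import math

def image_to_wheel(xm, ym, x0, y0, R):
    """Map a point m(xm, ym) on the image ellipse to the corresponding
    point M(xM, yM) on the steering-wheel circle of radius R centred at
    o(x0, y0). With xM = xm, the circle equation fixes yM up to a sign;
    the sign of (ym - y0) tells us which half of the circle M lies on."""
    dx = xm - x0
    # |dx| <= R for any point on the ellipse (R is the semi-major axis)
    dy = math.sqrt(max(R * R - dx * dx, 0.0))
    if ym < y0:
        dy = -dy
    return xm, y0 + dy
```

A round trip through a simulated projection (y compressed toward the centre, x kept) recovers the original circle point.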

步骤Ⅱ、方向盘圆心位置坐标o(x0,y0)Step Ⅱ, the coordinates of the center of the steering wheel o(x 0 ,y 0 )

根据步骤Ⅰ所得的车辆的方向盘模板图像得到方向盘圆心位置坐标,具体如下:According to the steering wheel template image of the vehicle obtained in step I, the position coordinates of the center of the steering wheel are obtained, as follows:

Ⅱ-1、方向盘外轮廓Ⅱ-1. Steering wheel outline

在集成开发环境VS2013和OpenCV3.0.0下对步骤Ⅰ所得的模板图像采用OpenCV开源视觉库中的findContours()函数,通过设置相应的参数得到方向盘图像的外轮廓上的各点坐标,该函数为:Under the integrated development environment VS2013 and OpenCV3.0.0, use the findContours() function in the OpenCV open source vision library for the template image obtained in step I, and obtain the coordinates of each point on the outer contour of the steering wheel image by setting the corresponding parameters. The function is:

void findContours(InputArray image,void findContours(InputArray image,

OutputArrayOfArrays contours,OutputArray hierarchy, OutputArrayOfArrays contours, OutputArray hierarchy,

int mode,int method,Point offset=Point()) int mode, int method, Point offset = Point())

其中各参数依次如下:The parameters are as follows:

InputArray image代表InputArray的输入图像,InputArray image represents the input image of InputArray,

OutputArrayOfArrays contours代表轮廓存储的点向量(xm,ym),OutputArrayOfArrays contours represents the point vector (x m , y m ) stored in the contour,

OutputArray hierarchy代表可选的输出向量,包含图像的拓扑信息,OutputArray hierarchy represents an optional output vector, which contains the topological information of the image,

int mode代表轮廓的检索方式,int mode represents the retrieval method of the contour,

int method代表轮廓的近似办法,int method represents the approximation method of the contour,

Point offset=Point()代表每个轮廓点的可选偏移量,使用默认值Point()。Point offset = Point() represents an optional offset for each contour point, use the default value Point().

在本实施例中所用的求方向盘外轮廓各点坐标的函数为:The function used in the present embodiment to find the coordinates of each point of the outer contour of the steering wheel is:

findContours(image,contours,hierarchy,RETR_EXTERNAL,findContours(image, contours, hierarchy, RETR_EXTERNAL,

CHAIN_APPROX_NONE,Point())。CHAIN_APPROX_NONE, Point()).

Ⅱ-2、最小二乘法求方向盘圆心位置坐标Ⅱ-2. Calculate the coordinates of the center of the steering wheel by the method of least squares

本例采集到的方向盘图像是椭圆如图5所示,椭圆二次曲线的通式为:The steering wheel image collected in this example is an ellipse as shown in Figure 5, and the general formula of the ellipse quadratic curve is:

ax2+bxy+cy2+dx+ey+1=0ax 2 +bxy+cy 2 +dx+ey+1=0

由步骤Ⅱ-1确定的方向盘图像的外轮廓上的5个坐标点(xm1,ym1,xm2,ym2,...,xm5,ym5),确定椭圆方程,本例求出椭圆中心位置圆心坐标为o(545,421)。From the five coordinate points (x m1 , y m1 , x m2 , y m2 ,..., x m5 , y m5 ) on the outer contour of the steering wheel image determined in step Ⅱ-1, the ellipse equation is determined, and in this example, The coordinates of the center of the ellipse are o(545,421).

步骤Ⅲ、特征点跟踪Step Ⅲ. Feature point tracking

本方法采用光流法,针对每一个视频序列,与模板图像匹配,检测当前方向盘图像转动的角度;如图4所示,本例取方向盘内轮廓中对称的两棱边与中心部位外侧的交点作为A、B两个特征点。方向盘无论怎样转动,在其图像上A、B点都易于确定。This method uses the optical flow method to match each video sequence with the template image to detect the rotation angle of the current steering wheel image; as shown in Figure 4, this example takes the intersection point between the two symmetrical edges in the inner contour of the steering wheel and the outside of the center As two feature points of A and B. No matter how the steering wheel turns, points A and B are easy to determine on its image.

具体的跟踪过程如下:The specific tracking process is as follows:

Ⅲ-1、当前图像的处理Ⅲ-1. Processing of the current image

在本实施例中摄像头的摄像频率为15帧/秒,中心处理器的采样频率为10帧/秒。In this embodiment, the imaging frequency of the camera is 15 frames/second, and the sampling frequency of the central processor is 10 frames/second.

中心处理器对摄像头实时采集到的方向盘图像进行与步骤Ⅰ-3相同的去噪预处理;The central processor performs the same denoising preprocessing as step Ⅰ-3 on the steering wheel image collected by the camera in real time;

Ⅲ-2、当前帧的特征点Ⅲ-2. Feature points of the current frame

对步骤Ⅲ-1处理后的当前图像采用现有的OpenCV中的Shi-Tomasi进行特征点检测,如图6所示,在本实施例中利用OpenCV开源视觉库中的goodFeaturesToTrack()函数检测得到当前帧的一组特征点经换算后得到的位置A0(xM01,yM01)和B0(xM02,yM02);该函数各参数如下:Feature points are detected on the current image processed in step Ⅲ-1 using the existing Shi-Tomasi method in OpenCV, as shown in Figure 6. In this embodiment, the goodFeaturesToTrack() function of the OpenCV open source vision library is used to detect a group of feature points in the current frame, whose converted positions are A0(xM01, yM01) and B0(xM02, yM02); the parameters of this function are as follows:

void goodFeaturesToTrack(InputArray image,OutputArray corners,void goodFeaturesToTrack(InputArray image, OutputArray corners,

int maxCorners,double qualityLevel,double minDistance,int maxCorners, double qualityLevel, double minDistance,

InputArray mask,int blockSize,bool useHarrisDetector,double k)InputArray mask, int blockSize, bool useHarrisDetector, double k)

其中各参数依次为The parameters in which are

InputArray image代表源图像,InputArray image represents the source image,

OutputArray corners代表检测到特征点的输出矢量(xm,ym),OutputArray corners represent the output vector (x m , y m ) of the detected feature points,

int maxCorners代表本方法所取特征点的最大数量,int maxCorners represents the maximum number of feature points taken by this method,

double qualityLevel代表特征点检测可接受的最小特征值,double qualityLevel represents the minimum feature value acceptable for feature point detection,

double minDistance代表特征点之间的最小距离,double minDistance represents the minimum distance between feature points,

InputArray mask为设置的感兴趣区域,InputArray mask is the set area of interest,

int blockSize代表计算导数自相关矩阵时指定的邻域范围默认值3,int blockSize represents the default value of the neighborhood range specified when calculating the derivative autocorrelation matrix is 3,

bool useHarrisDetector代表不使用Harris特征点检测默认值false,bool useHarrisDetector means not to use Harris feature point detection, the default value is false,

double k代表设置hessian自相关矩阵行列式的相对权重的权重系数默认值0.04。The double k represents the default value of the weight coefficient for setting the relative weight of the determinant of the hessian autocorrelation matrix to 0.04.

本例具体参数设置为:In this example, the specific parameters are set to:

goodFeaturesToTrack(g_grayImage,corners,20,0.01,40,imageROI,3,false,0.04)。goodFeaturesToTrack(g_grayImage, corners, 20, 0.01, 40, imageROI, 3, false, 0.04).

Ⅲ-3、下一帧的特征点Ⅲ-3. The feature points of the next frame

把步骤Ⅲ-2所得的当前帧图像中的一组特征点位置A0(xM01,yM01)和B0(xM02,yM02)输入到OpenCV开源视觉库的CalcOpticalFlowPyrLK()函数,输出上述的一组特征点A0(xM01,yM01)和B0(xM02,yM02)在下一帧图像中经换算后得到的位置A1(xM11,yM11)和B1(xM12,yM12),Input a group of feature point positions A 0 (x M01 , y M01 ) and B 0 (x M02 , y M02 ) in the current frame image obtained in step III-2 into the CalcOpticalFlowPyrLK() function of the OpenCV open source vision library, and output the above The positions A 1 ( x M11 , y M11 ) and B 1 ( x M12 , y M12 ),

函数各参数的含义为:The meaning of each parameter of the function is:

void cvCalcOpticalFlowPyrLK(const CvArr*prev,void cvCalcOpticalFlowPyrLK(const CvArr*prev,

const CvArr*curr,const CvPoint2D32f*prev_features,const CvArr*curr, const CvPoint2D32f*prev_features,

const CvPoint2D32f*curr_features,const CvPoint2D32f*curr_features,

char*status,float*track_error,CvSize win_size,char*status, float*track_error, CvSize win_size,

int count,CvTermCriter criteria,int flags)int count, CvTermCriter criteria, int flags)

其中各参数依次为:The parameters are as follows:

const CvArr*prev代表前一帧图像,const CvArr*prev represents the previous frame image,

const CvArr*curr代表当前帧图像,const CvArr*curr represents the current frame image,

const CvPoint2D32f*prev_features代表特征点的前一帧坐标,const CvPoint2D32f*prev_features represents the previous frame coordinates of feature points,

const CvPoint2D32f*curr_features代表特征点的当前帧坐标,const CvPoint2D32f*curr_features represents the current frame coordinates of the feature points,

char*status代表输出状态矢量,char*status represents the output status vector,

float*track_error代表输出误差矢量,float*track_error represents the output error vector,

CvSize win_size代表每个金字塔层搜索窗大小,CvSize win_size represents the search window size of each pyramid layer,

int count代表本法所取的金字塔层的最大数,int count represents the maximum number of pyramid levels taken by this method,

CvTermCriter criteria代表指定搜索算法迭代类型,CvTermCriter criteria represents the specified search algorithm iteration type,

int flags代表该参数默认值0。int flags represents the default value of this parameter is 0.

在本实施例中具体参数设置为:In this embodiment, the specific parameters are set to:

calcOpticalFlowPyrLK(pre_image,curr_image,points2D[0],points2D[1],status,err,winSize,3,termcrit,0)。calcOpticalFlowPyrLK(pre_image, curr_image, points2D[0], points2D[1], status, err, winSize, 3, termcrit, 0).

步骤Ⅳ、计算方向盘相邻两帧转动的角度θStep Ⅳ. Calculate the rotation angle θ of the steering wheel in two adjacent frames

步骤Ⅱ-2确定了方向盘圆心o(x0,y0)、步骤Ⅲ确定了当前帧特征点位置A0(xM01,yM01)和B0(xM02,yM02);如图6所示,当方向盘转动一个角度θ,由步骤Ⅲ-3确定下一帧的特征点位置A1(xM11,yM11)和B1(xM12,yM12)。Step Ⅱ-2 determined the steering wheel centre o(x0, y0), and step Ⅲ determined the current-frame feature point positions A0(xM01, yM01) and B0(xM02, yM02); as shown in Figure 6, when the steering wheel turns through an angle θ, the feature point positions A1(xM11, yM11) and B1(xM12, yM12) of the next frame are determined by step Ⅲ-3.

圆心o(x0,y0)和A0(xM01,yM01)、A1(xM11,yM11)连线的夹角为θ1,根据余弦定理cosθ1=(|oA0|²+|oA1|²−|A0A1|²)/(2|oA0|·|oA1|),即特征点A转动角度θ1;The angle between the lines joining the centre o(x0, y0) to A0(xM01, yM01) and to A1(xM11, yM11) is θ1; by the law of cosines, cos θ1 = (|oA0|² + |oA1|² − |A0A1|²)/(2|oA0|·|oA1|), which gives the rotation angle θ1 of feature point A;

圆心o(x0,y0)和B0(xM02,yM02)、B1(xM12,yM12)连线的夹角为θ2,根据余弦定理cosθ2=(|oB0|²+|oB1|²−|B0B1|²)/(2|oB0|·|oB1|),即特征点B转动角度θ2;各线段长度由两点间距离公式算出,例如|oB0|=√((xM02−x0)²+(yM02−y0)²)。The angle between the lines joining the centre o(x0, y0) to B0(xM02, yM02) and to B1(xM12, yM12) is θ2; by the law of cosines, cos θ2 = (|oB0|² + |oB1|² − |B0B1|²)/(2|oB0|·|oB1|), which gives the rotation angle θ2 of feature point B; the segment lengths are computed by the two-point distance formula, e.g. |oB0| = √((xM02 − x0)² + (yM02 − y0)²).

最终的方向盘转动的角度θ为两个特征点A和B转动角度的平均值,θ=(θ1+θ2)/2。The final steering wheel rotation angle θ is the average of the rotation angles of the two feature points A and B: θ = (θ1 + θ2)/2.
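The per-frame angle computation of step Ⅳ can be sketched as pure math (no OpenCV; helper names hypothetical):

```python
import math

def dist(p, q):
    """Two-point distance formula."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rotation_angle(o, p0, p1):
    """Angle at centre o between rays o->p0 and o->p1, law of cosines:
    cos(theta) = (|op0|^2 + |op1|^2 - |p0p1|^2) / (2*|op0|*|op1|)."""
    r0, r1, d = dist(o, p0), dist(o, p1), dist(p0, p1)
    cos_t = (r0 * r0 + r1 * r1 - d * d) / (2.0 * r0 * r1)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def wheel_angle(o, a0, a1, b0, b1):
    """Final per-frame angle: mean of the angles of feature points A and B."""
    return 0.5 * (rotation_angle(o, a0, a1) + rotation_angle(o, b0, b1))
```

For two diametrically opposite feature points rotated by the same amount, both terms agree and the mean equals the true rotation.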

步骤Ⅴ、方向盘图像的特征点转动角度Step Ⅴ, the rotation angle of the feature point of the steering wheel image

令方向盘圆心坐标为o(x0,y0),第k帧图像、第k+1帧图像中特征点的坐标分别为Dk(xMk,yMk)、Dk+1(xM(k+1),yM(k+1)),相邻两帧图像中特征点转动角度小于90°;Let the coordinates of the center of the steering wheel be o(x 0 , y 0 ), the coordinates of the feature points in the kth frame image and the k+1th frame image are D k (x Mk , y Mk ), D k+1 (x M( k+1) , y M(k+1) ), the rotation angle of feature points in two adjacent frames of images is less than 90°;

当yMk≥y0、yM(k+1)≥y0时:若满足xM(k+1)≥xMk,说明第k+1帧图像相对于第k帧图像顺时针转动一个角度;若满足xM(k+1)≤xMk,说明第k+1帧图像相对于第k帧图像逆时针转动一个角度;When y Mk ≥ y 0 , y M(k+1) ≥ y 0 : If x M(k+1) ≥ x Mk is satisfied, it means that the k+1th frame image rotates an angle clockwise relative to the kth frame image ; If x M(k+1) ≤ x Mk is satisfied, it means that the k+1 frame image rotates an angle counterclockwise with respect to the k frame image;

当yMk≤y0、yM(k+1)≤y0时:若满足xM(k+1)≥xMk,说明第k+1帧图像相对于第k帧图像逆时针转动一个角度;若满足xM(k+1)≤xMk,说明第k+1帧图像相对于第k帧图像顺时针转动一个角度;When y Mk ≤y 0 , y M(k+1) ≤y 0 : if x M(k+1) ≥x Mk is satisfied, it means that the k+1th frame image rotates an angle counterclockwise relative to the kth frame image ; If x M(k+1) ≤ x Mk is satisfied, it means that the k+1th frame image rotates an angle clockwise with respect to the kth frame image;

当yMk≥y0、yM(k+1)≤y0时:若满足xMk≥x0、xM(k+1)≥x0,说明第k+1帧图像相对于第k帧图像顺时针转动一个角度;若满足xMk≤x0、xM(k+1)≤x0,说明第k+1帧图像相对于第k帧图像逆时针转动一个角度;When y Mk ≥y 0 , y M(k+1) ≤y 0 : if x Mk ≥x 0 , x M(k+1) ≥x 0 is satisfied, it means that the k+1th frame image is relative to the kth frame The image rotates an angle clockwise; if it satisfies x Mk ≤ x 0 , x M(k+1) ≤ x 0 , it means that the image of frame k+1 rotates an angle counterclockwise relative to the image of frame k;

当yMk≤y0、yM(k+1)≥y0时:若满足xMk≥x0、xM(k+1)≥x0,说明第k+1帧图像相对于第k帧图像逆时针转动一个角度;若满足xMk≤x0、xM(k+1)≤x0,说明第k+1帧图像相对于第k帧图像顺时针转动一个角度;When y Mk ≤y 0 , y M(k+1) ≥y 0 : if x Mk ≥x 0 , x M(k+1) ≥x 0 are satisfied, it means that the k+1th frame image is relative to the kth frame The image rotates an angle counterclockwise; if it satisfies x Mk ≤ x 0 , x M(k+1) ≤ x 0 , it means that the k+1th frame image is rotated clockwise by an angle relative to the kth frame image;

本例规定下一帧方向盘相对当前帧方向盘顺时针转动角度θ为负,逆时针转动则θ为正;依次累加每帧方向盘图像转动角度,当某时间段内累加值大于360°,即此时间段内方向盘相对模板图像的位置逆时针转动超过一周;当累加值小于-360°,即此时间段内方向盘相对模板图像的位置顺时针转动超过一周;This example stipulates that θ is negative when the steering wheel in the next frame has turned clockwise relative to the current frame, and positive when it has turned counterclockwise. The per-frame rotation angles of the steering wheel image are accumulated in sequence: when the accumulated value within a time period exceeds 360°, the steering wheel has turned counterclockwise through more than one full turn relative to the template image during that period; when the accumulated value is less than −360°, the steering wheel has turned clockwise through more than one full turn relative to the template image during that period;

步骤Ⅵ、提取技术指标零速百分比和角度标准差Step Ⅵ, extract the technical index zero speed percentage and angle standard deviation

本例的技术指标之一零速百分比PNS(percentage of non-steering),表征所选时间段内方向盘不动的程度,在本实施例中,取此时间段为50秒。One technical indicator of this example, the zero-speed percentage PNS (percentage of non-steering), characterizes the degree to which the steering wheel remains stationary within the selected time period; in this embodiment this period is taken as 50 seconds.

零速百分比PNS的定义如下式所示:The zero-speed percentage PNS is defined by the following formula:

PNSi=(ni/Ni)×100%

本发明采样频率为10帧/秒,PNSi代表第i帧的零速百分比,Ni代表第i帧前50秒内角速度的总采样点数,ni代表第i帧前50秒内角速度在±0.1°/s之间的点数。The sampling frequency of the present invention is 10 frames per second; PNSi represents the zero-speed percentage of the i-th frame, Ni represents the total number of angular-velocity sampling points in the 50 seconds before the i-th frame, and ni represents the number of points whose angular velocity lies within ±0.1°/s in the 50 seconds before the i-th frame.
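A minimal sketch of the zero-speed percentage, assuming the 10 frames/s sampling rate and 50 s window stated above (helper name hypothetical):

```python
def zero_speed_percentage(omega, fps=10, window_s=50, thresh=0.1):
    """PNS_i = n_i / N_i * 100%: the share of angular-velocity samples
    in the last `window_s` seconds whose magnitude is within thresh deg/s.
    `omega` is the list of per-frame angular velocities up to frame i."""
    n_window = fps * window_s                 # N_i samples in the window
    recent = omega[-n_window:]
    n_slow = sum(1 for w in recent if abs(w) <= thresh)   # n_i
    return 100.0 * n_slow / len(recent)
```

With 300 near-zero samples out of a 500-sample window the indicator is 60%.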

另一技术指标角度标准差的定义如下式所示:The other technical indicator, the angle standard deviation, is defined by the following formula:

σ=√((1/m2)Σj=1..m2(θj−μ)²),μ=(1/m2)Σj=1..m2θj

其中:θj为第j帧对应的方向盘转动的角度,μ表示m2帧内平均每帧方向盘转动角度值。Where θj is the steering wheel rotation angle corresponding to frame j, and μ is the average per-frame steering wheel rotation angle over the m2 frames.
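The angle standard deviation over the last m2 per-frame angles can be sketched directly from the definition (helper name hypothetical):

```python
import math

def angle_std(thetas):
    """sigma = sqrt((1/m2) * sum_j (theta_j - mu)^2) over the m2
    per-frame steering angles, with mu their mean."""
    m2 = len(thetas)
    mu = sum(thetas) / m2
    return math.sqrt(sum((t - mu) ** 2 for t in thetas) / m2)
```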

步骤Ⅶ、支持向量机分类算法Step VII, Support Vector Machine Classification Algorithm

综合步骤Ⅵ所得的零速百分比PNS和角度标准差σ,采用支持向量机(SVM)分类算法判断司机是否处于疲劳状态。Combining the zero-speed percentage PNS and angle standard deviation σ obtained in step VI, the support vector machine (SVM) classification algorithm is used to judge whether the driver is in a fatigue state.

支持向量机分类算法先训练样本集,在样本空间中找到一个划分不同类别样本的超平面,该超平面对应的模型为ωTx+b=0,模型参数ω为法向量、b为位移项、T为转置、x为零速百分比和角度标准差组成的自变量。支持向量机分类算法的具体步骤如下:The support vector machine classification algorithm first trains the sample set, and finds a hyperplane in the sample space that divides samples of different categories. The model corresponding to the hyperplane is ω T x + b = 0, and the model parameter ω is the normal vector and b is the displacement item , T is the transpose, x is the independent variable composed of zero speed percentage and angle standard deviation. The specific steps of the support vector machine classification algorithm are as follows:

Ⅶ-1、训练样本的采集Ⅶ-1. Collection of training samples

本例选拥有五年或更多驾驶经验的5名司机和状态良好的5部车辆,每位司机随机挑选一部车辆,分别离散采集在直线行驶、“S”形路线行驶及超车的不同行驶情况下、司机处于清醒和疲劳状态时的100个样本。具体的样本数量如下表1所示:In this example, 5 drivers with five or more years of driving experience and 5 vehicles in good condition are selected. Each driver randomly selects a vehicle, and 100 samples are collected discretely, with the driver awake and fatigued, under the different driving situations of straight-line driving, "S"-shaped route driving and overtaking. The specific numbers of samples are listed in Table 1 below:

表1 司机的不同状态不同行驶情况下采集的样本数量一览表Table 1 A list of the number of samples collected under different driving conditions in different states of the driver

(x1,y1),...,(x100,y100)。其中x1=(x11,x12)T,x11、x12分别代表第一个样本数据的第一个指标零速百分比PNS和第二个指标角度标准差σ;yk1∈Y={1,-1},k1=1,...,100,Y=1、Y=-1分别代表司机处于清醒状态和疲劳状态,司机状态的判断是由司机的面部视频评分的方法评估驾驶司机的疲劳状态。按目前常用的司机状态判别方法将驾驶司机的状态分为清醒和疲劳2个等级,清醒和疲劳状态的面部特征如下表2所示:(x1, y1), ..., (x100, y100), where x1 = (x11, x12)T, and x11, x12 respectively represent the first index, the zero-speed percentage PNS, and the second index, the angle standard deviation σ, of the first sample; yk1 ∈ Y = {1, -1}, k1 = 1, ..., 100, where Y = 1 and Y = -1 respectively represent that the driver is awake and fatigued. The driver's state is judged by scoring the driver's facial video to assess the fatigue state. According to the currently common driver-state discrimination method, the driver's state is divided into two levels, awake and fatigued; the facial features of the awake and fatigued states are shown in Table 2 below:

表2 驾驶司机状态等级的面部特征一览表Table 2 List of facial features of drivers' status levels

本例司机的面部视频以15s为间隔划分为多段,5名经过训练的实验人员按上述表2中目前常用的司机状态特征独立对各段视频进行评分,5个实验人员评分的平均值作为司机状态的判别结果。In this example the driver's facial video is divided into segments at intervals of 15 s, and 5 trained experimenters independently score each segment according to the commonly used driver-state characteristics in Table 2 above; the average of the five experimenters' scores is taken as the judgment of the driver's state.

Ⅶ-2、参数的求解Ⅶ-2. Solving of parameters

本例求超平面模型ωTx+b=0的参数ω和b,使得不同类支持向量到超平面的距离之和最大,即:In this example, the parameters ω and b of the hyperplane model ω T x + b = 0 are calculated, so that the sum of the distances from different types of support vectors to the hyperplane is the largest, namely:

min(ω,b) (1/2)‖ω‖²

s.t. yt(ωTxt+b)≥1, t=1,2,...,n1

其中xt=(PNSt,σt)yt∈{1,-1}where x t = (PNS t , σ t )y t ∈ {1, -1}

本例采用高斯核函数,经过变换得到上式的对偶,如下:In this example a Gaussian kernel function is used, and after transformation the dual of the above formula is obtained, as follows:

maxα Σt=1..n1 αt − (1/2)Σt1=1..n1Σt2=1..n1 αt1αt2yt1yt2κ(xt1,xt2)

s.t. Σt=1..n1 αtyt=0, 0≤αt≤C, t=1,2,...,n1

其中:κ(xt1,xt2)=exp(−‖xt1−xt2‖²/(2σ1²)),σ1为高斯核的带宽,xt1、xt2、yt1和yt2代表样本参数;where κ(xt1, xt2) = exp(−‖xt1 − xt2‖²/(2σ1²)), σ1 is the bandwidth of the Gaussian kernel, and xt1, xt2, yt1 and yt2 represent sample parameters;

C为正则化常数,t1、t2=1,2,...,n1; C is a regularization constant, t1, t2=1, 2,..., n1;

求解出最优拉格朗日乘子α*=(α1*,α2*,...,αn1*)T,计算最优法向量ω*=Σt=1..n1 αt*ytxt;选择最优拉格朗日乘子α*中的一个小于正则化常数C的正分量αt1*,以此计算最优位移项b*=yt1−Σt2=1..n1 αt2*yt2κ(xt2,xt1)。Solve for the optimal Lagrange multipliers α* = (α1*, α2*, ..., αn1*)T and compute the optimal normal vector ω* = Σt=1..n1 αt*·yt·xt; select one positive component αt1* of α* that is smaller than the regularization constant C, and use it to compute the optimal displacement term b* = yt1 − Σt2=1..n1 αt2*·yt2·κ(xt2, xt1).

Ⅶ-3、构造超平面(ω*×x)+b*=0,由此求出的决策函数VII-3. Construct a hyperplane (ω * ×x) + b * = 0, and the decision function obtained from it

f(x)=sgn[(ω*×x)+b*]f(x)=sgn[(ω * ×x)+b * ]

其中sgn[(ω*×x)+b*]为符号函数,当(ω*×x)+b*>0时f(x)=1,当(ω*×x)+b*=0时f(x)=0,当(ω*×x)+b*<0时f(x)=-1。Where sgn[(ω*×x)+b*] is the sign function: f(x) = 1 when (ω*×x)+b* > 0, f(x) = 0 when (ω*×x)+b* = 0, and f(x) = −1 when (ω*×x)+b* < 0.
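The kernelized decision function of steps Ⅶ-2 and Ⅶ-3 can be sketched as follows, assuming the multipliers α*, the displacement b* and the bandwidth σ1 have already been obtained from training (helper names hypothetical; the embodiment reads f(x) = 1 as the fatigue alarm):

```python
import math

def gaussian_kernel(u, v, sigma1):
    """kappa(u, v) = exp(-||u - v||^2 / (2 * sigma1^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-sq / (2.0 * sigma1 * sigma1))

def decision(x, alphas, ys, xs, b, sigma1):
    """f(x) = sgn( sum_t alpha_t * y_t * kappa(x_t, x) + b ), where
    x = (PNS, sigma) is the current pair of indicators and (xs, ys,
    alphas) are the trained support vectors, labels and multipliers."""
    s = sum(a * y * gaussian_kernel(xi, x, sigma1)
            for a, y, xi in zip(alphas, ys, xs)) + b
    return 1 if s > 0 else (-1 if s < 0 else 0)
```

On a toy two-support-vector problem the function returns each training label at its own point.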

根据车辆行驶在上述不同路段下的100个训练样本,由支持向量机分类算法学习,求出区分司机处于清醒状态还是疲劳状态的决策函数f(x)。According to the 100 training samples of the vehicle driving on the above-mentioned different road sections, the support vector machine classification algorithm is used to learn, and the decision function f(x) for distinguishing whether the driver is awake or fatigued is obtained.

在实际行驶过程中摄像头实时采集方向盘图像,经过处理得出方向盘转角,进而求出当前零速百分比PNS和角度标准差σ的值,将当前零速百分比和角度标准差的值代入已经求出的决策函数f(x),若f(x)=1则说明当前司机处于疲劳状态,系统立即报警。During actual driving the camera collects steering wheel images in real time, the steering wheel angle is obtained after processing, and the current values of the zero-speed percentage PNS and the angle standard deviation σ are then calculated; these values are substituted into the decision function f(x) already obtained. If f(x) = 1, the current driver is in a fatigue state and the system gives an alarm immediately.

上述实施例,仅为对本发明的目的、技术方案和有益效果进一步详细说明的具体个例,本发明并非限定于此。凡在本发明的公开的范围之内所做的任何修改、等同替换、改进等,均包含在本发明的保护范围之内。The above-mentioned embodiments are only specific examples for further specifying the purpose, technical solutions and beneficial effects of the present invention, and the present invention is not limited thereto. Any modifications, equivalent replacements, improvements, etc. made within the disclosed scope of the present invention are included in the protection scope of the present invention.

Claims (8)

1. A driver fatigue detection method based on steering wheel images, the main steps being as follows:

Step Ⅰ. Obtain the template image.
The camera is mounted on the ceiling of the cab, and its video signal output line is connected to the central processor of the detection device; the camera captures an image of the stationary steering wheel, which is stored as the initial image. The central processor denoises the initial image and stores the result as the template image.
Points on the steering wheel circle correspond one-to-one to points on the image ellipse captured by the camera; the correspondence between a point m(xm, ym) on the image ellipse and its corresponding point M(xM, yM) on the steering wheel circle is xM = xm, where the radius R is the semi-major axis of the ellipse and the length of R remains unchanged.

Step Ⅱ. Obtain the coordinates o(x0, y0) of the steering wheel center.
Ⅱ-1. Steering wheel outer contour: apply the findContours() function of the OpenCV open-source vision library to the template image obtained in Step Ⅰ, setting the corresponding parameters to obtain the coordinates of the points on the outer contour of the steering wheel image.
Ⅱ-2. Steering wheel center coordinates: from five coordinate points (xm1, ym1), (xm2, ym2), ..., (xm5, ym5) on the outer contour determined in Step Ⅱ-1, determine the ellipse equation and obtain the coordinates o(x0, y0) of the ellipse center.

Step Ⅲ. Feature point tracking.
The method uses optical flow: each video frame is matched against the template image to detect the rotation angle of the current steering wheel image. Representative, easily determined feature points are taken on the steering wheel; a representative, easily determined feature point is one that can still be located unambiguously in the steering wheel image after the wheel has turned. The coordinate position of a feature point in the current frame and its position in the next frame are obtained by iteration, and from these two positions the angle through which the feature point rotates from the current frame to the next, i.e. the rotation angle of the steering wheel, is computed.

Step Ⅳ. Compute the rotation angle θ of the steering wheel between two adjacent frames.
The following computation of θ uses the position coordinates of feature point A on the steering wheel circle. Step Ⅱ determined the steering wheel center o(x0, y0), and Step Ⅲ determined the coordinates of the current-frame feature point position A0(xM01, yM01); after the steering wheel rotates through an angle θ, the coordinates of the next-frame feature point position A1(xM11, yM11) corresponding to A0(xM01, yM01) are determined. The angle between the lines joining the center o(x0, y0) to A0(xM01, yM01) and to A1(xM11, yM11) is θ1, the rotation angle of feature point A:
θ1 = arccos[(|oA0|² + |oA1|² − |A0A1|²) / (2 × |oA0| × |oA1|)].

Step Ⅴ. Rotation direction of the feature point in the steering wheel image.
Let the steering wheel center be o(x0, y0), and let the coordinates of the feature point in frame k and frame k+1 be Dk(xMk, yMk) and Dk+1(xM(k+1), yM(k+1)); the rotation angle of the feature point between two adjacent frames is less than 90°.
When yMk ≥ y0 and yM(k+1) ≥ y0: if xM(k+1) ≥ xMk, frame k+1 is rotated clockwise through an angle relative to frame k; if xM(k+1) ≤ xMk, it is rotated counterclockwise.
When yMk ≤ y0 and yM(k+1) ≤ y0: if xM(k+1) ≥ xMk, frame k+1 is rotated counterclockwise relative to frame k; if xM(k+1) ≤ xMk, it is rotated clockwise.
When yMk ≥ y0 and yM(k+1) ≤ y0: if xMk ≥ x0 and xM(k+1) ≥ x0, frame k+1 is rotated clockwise relative to frame k; if xMk ≤ x0 and xM(k+1) ≤ x0, it is rotated counterclockwise.
When yMk ≤ y0 and yM(k+1) ≥ y0: if xMk ≥ x0 and xM(k+1) ≥ x0, frame k+1 is rotated counterclockwise relative to frame k; if xMk ≤ x0 and xM(k+1) ≤ x0, it is rotated clockwise.
The method defines the rotation angle θ of the next-frame steering wheel relative to the current frame as negative for clockwise rotation and positive for counterclockwise rotation. The rotation angles of successive frames are accumulated: when the accumulated value within a time period exceeds 360°, the steering wheel has rotated counterclockwise more than one full turn relative to the template image during that period; when the accumulated value is less than −360°, it has rotated clockwise more than one full turn during that period.

Step Ⅵ. Extract the technical indices zero-speed percentage and angle standard deviation.
When the frequency of steering wheel angle changes within a time period t decreases, i.e. the driver corrects the steering wheel less often, the time during which the steering wheel is motionless increases. The zero-speed percentage PNS, a technical index of this method, characterizes the degree to which the steering wheel is motionless within the selected time period; the time period t is 50 to 80 seconds.
The zero-speed percentage PNS is defined as
PNSi = ni / Ni,  i = 500, 501, ..., m1,  m1 ∈ (500, 1000)
The sampling frequency of the invention is 8 to 12 frames per second; PNSi denotes the zero-speed percentage at frame i, Ni the total number of angular-velocity sampling points within the t seconds before frame i, and ni the number of those points whose angular velocity lies within ±0.1°/s.
The other technical index, the angle standard deviation, is defined as
σ = √[(1/m2) Σj=1..m2 (θj − μ)²],  μ = (1/m2) Σj=1..m2 θj,  j = 1, 2, ..., m2,  m2 ∈ (150, 200)
where θj is the steering wheel rotation angle of frame j and μ is the mean per-frame rotation angle over the m2 frames.

Step Ⅶ. Support vector machine classification algorithm.
Combining the zero-speed percentage PNS and the angle standard deviation σ obtained in Step Ⅵ, a support vector machine classification algorithm judges whether the driver is fatigued.
The algorithm first trains on the sample set and finds in sample space a hyperplane that separates samples of different classes; the model corresponding to the hyperplane is ωTx + b = 0, where the model parameter ω is the normal vector, b the displacement term, T denotes transposition, and x the independent variable composed of the zero-speed percentage and the angle standard deviation. The decision function obtained from the hyperplane model is
f(x) = sgn[(ω*×x) + b*]
where sgn[(ω*×x) + b*] is the sign function: f(x) = 1 when ω*×x + b* > 0, f(x) = 0 when ω*×x + b* = 0, and f(x) = −1 when ω*×x + b* < 0.
From W training samples of the vehicle traveling on the different road sections described above, W being 80 to 150, the support vector machine classification algorithm learns the decision function f(x) that distinguishes whether the driver is alert or fatigued.
During the driver's actual driving, the camera captures steering wheel images in real time; processing yields the steering wheel rotation angle, from which the current zero-speed percentage and angle standard deviation are computed and substituted into the learned decision function f(x). If f(x) = 1, the driver is currently fatigued and the system raises an alarm immediately.

2. The driver fatigue detection method based on steering wheel images according to claim 1, wherein Step Ⅰ comprises the following steps:
Ⅰ-1. Camera installation: the camera is mounted on the ceiling of the cab so that the entire steering wheel lies within the camera's viewfinder frame; the camera is adjusted so that the image is sharp and fits the frame, and the captured image data are transmitted to the central processor.
Ⅰ-2. Acquisition of the initial steering wheel image: before the vehicle starts, with the steering wheel in its initial state, the camera captures an image of the stationary steering wheel, which is stored as the initial image.
Ⅰ-3. Template image: the central processor denoises the initial image obtained in Step Ⅰ-2 with a median filter whose kernel window is 3×3, removing impulse noise and salt-and-pepper noise while preserving the edge detail of the image.

3. The driver fatigue detection method based on steering wheel images according to claim 2, wherein before the camera captures the initial steering wheel image in Step Ⅰ-2, a sheet of white paper is placed beneath the steering wheel so that the steering wheel is projected onto the white paper, and the initial image is captured against a white background.

4. The driver fatigue detection method based on steering wheel images according to claim 1, wherein the feature point tracking of Step Ⅲ proceeds as follows:
Ⅲ-1. Processing of the current image: the capture rate of the camera in this method is 12 to 16 frames per second, and the sampling rate of the central processor, lower than the capture rate, is 9 to 12 frames per second; the central processor applies to the steering wheel image captured by the camera in real time the same denoising as in Step Ⅰ-3.
Ⅲ-2. Feature points of the current frame: the existing Shi-Tomasi detector in OpenCV is applied to the current image processed in Step Ⅲ-1; the goodFeaturesToTrack() function of the OpenCV open-source vision library detects the feature point of the current frame and, after conversion, its position A0(xM01, yM01).
Ⅲ-3. Feature points of the next frame: the cvCalcOpticalFlowPyrLK() function of the OpenCV open-source vision library is called with the feature point position A0(xM01, yM01) of the current frame as input, and it outputs the converted position A1(xM11, yM11) of that feature point in the next frame.

5. The driver fatigue detection method based on steering wheel images according to claim 4, wherein in Step Ⅲ a further representative, easily determined feature point B is taken on the steering wheel, the angle between the lines joining A and B to the center being greater than 30 degrees; Step Ⅲ-2 simultaneously determines the position B0(xM02, yM02) of feature point B in the current frame; Step Ⅲ-3 determines by the same method the position B1(xM12, yM12) of the feature point B0(xM02, yM02) in the next frame; the angle between the lines joining the center o(x0, y0) to B0(xM02, yM02) and to B1(xM12, yM12) is θ2, the rotation angle of feature point B:
θ2 = arccos[(|oB0|² + |oB1|² − |B0B1|²) / (2 × |oB0| × |oB1|)];
the final steering wheel rotation angle θ is the average of the rotation angles of the two feature points A and B, θ = (θ1 + θ2) / 2.

6. The driver fatigue detection method based on steering wheel images according to claim 1, wherein the support vector machine classification algorithm of Step Ⅶ comprises the following specific steps:
Ⅶ-1. Collection of training samples: select at least 5 drivers with five or more years of driving experience and at least 5 vehicles in good condition; each driver randomly picks one vehicle, and samples are collected discretely, with the driver in the alert and the fatigued state, under the different driving situations of straight-line driving, "S"-shaped driving and overtaking: (x1, y1), ..., (xW, yW), where x1 = (x11, x12)T, x11 and x12 denoting respectively the first index, zero-speed percentage PNS, and the second index, angle standard deviation σ, of the first sample; yk1 ∈ Y = {1, −1}, with Y = 1 and Y = −1 denoting respectively the alert and the fatigued driver state; W is 80 to 150. The driver's fatigue state is judged by scoring facial video of the driver: the facial video is divided into segments at intervals of 12 to 20 s, at least 5 trained experimenters independently score each segment according to the driver-state features in common use, and the mean of the experimenters' scores is taken as the judgment of the driver's state.
Ⅶ-2. Solving for the parameters: the hyperplane separating the training samples maximizes the sum of the distances from the two classes of support vectors to the hyperplane, a support vector being a sample point satisfying ωTx + b = ±1; specifically, the parameters ω and b of the hyperplane model ωTx + b = 0 are found so that the sum of the distances from support vectors of different classes to the hyperplane is maximized, which is transformed into
min(ω,b) (1/2)‖ω‖²  s.t. yt(ωTxt + b) ≥ 1, t = 1, 2, ..., n1
where xt = (PNSt, σt) and yt ∈ {1, −1}.
This is a convex quadratic programming problem whose dual is obtained by the efficient Lagrange multiplier method; the model above is not linearly separable, so a Gaussian kernel function realizing a linear mapping is introduced, and after transformation the dual of the expression above is obtained:
min(α) (1/2) Σt1=1..n1 Σt2=1..n1 yt1·yt2·αt1·αt2·K(xt1, xt2) − Σt1=1..n1 αt1,  t1, t2 = 1, 2, ..., n1
where σ1 is the bandwidth of the Gaussian kernel; xt1, xt2, yt1 and yt2 denote sample parameters; and C is the regularization constant, t1, t2 = 1, 2, ..., n1.
Solve for the optimal Lagrange multipliers α* = (α1*, α2*, ..., αn1*)T and compute the optimal normal vector; select a positive component αt1* of the optimal Lagrange multipliers α* that is smaller than the regularization constant C, and use it to compute the optimal displacement term.
Ⅶ-3. Construct the hyperplane (ω*×x) + b* = 0, which yields the decision function
f(x) = sgn[(ω*×x) + b*].

7. A driver fatigue detection device based on steering wheel images, designed according to the driver fatigue detection method based on steering wheel images of any one of claims 1 to 6, characterized in that:
it comprises a central processor, a camera and an alarm module; the camera is mounted on the ceiling of the cab above the steering wheel, with the entire steering wheel inside the camera's viewfinder frame, and the video signal output line of the camera is connected to the central processor of the device; the central processor is equipped with a storage module and a power supply module and is also connected to the alarm module, the alarm module being an alarm lamp or a buzzer;
the central processor is connected through a data transmission module to an image processing module, and the image processing module is connected to the camera; steering wheel images captured by the camera are sent to the image processing module of the central processor; the image processing module processes the image and video data, which are sent through the data transmission module to the storage module for storage and retrieved for the central processor; the central processor compares the current steering wheel image with the template image to obtain the current steering wheel rotation angle, computes the current zero-speed percentage and angle standard deviation, and substitutes them into the decision function learned by the support vector machine classification algorithm to judge whether the driver is currently fatigued; when driver fatigue is detected, the central processor sends a command to the alarm module, whose alarm lamp flashes or whose buzzer sounds.

8. The driver fatigue detection device based on steering wheel images according to claim 7, wherein the camera is mounted at a point N on the ceiling of the cab; point N lies behind the vertical plane that passes through the steering wheel center perpendicular to the longitudinal axis of the vehicle body, at a distance of no more than 20 cm from that plane, and at a distance of no more than 15 cm from the vertical plane that passes through the steering wheel center parallel to the longitudinal axis of the vehicle body.
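The per-frame geometry in steps Ⅳ–Ⅴ of claim 1 — rotation magnitude from the law of cosines, direction from the relative positions of the feature point in consecutive frames — can be sketched as follows. The cross-product sign test used here is a compact stand-in for the four quadrant cases enumerated in the claim, and the sign convention assumes a y-up axis, so this is an illustrative approximation rather than the patent's exact procedure:

```python
import math

def rotation_angle(o, a0, a1):
    """Signed rotation of feature point A about center o between frames:
    magnitude via theta = arccos[(|oA0|^2 + |oA1|^2 - |A0A1|^2)
    / (2|oA0||oA1|)], sign via the z-component of the cross product of
    the center-relative vectors (positive = counterclockwise for a
    y-up axis; image coordinates are often y-down, so flip if needed)."""
    v0 = (a0[0] - o[0], a0[1] - o[1])
    v1 = (a1[0] - o[0], a1[1] - o[1])
    d0 = math.hypot(v0[0], v0[1])
    d1 = math.hypot(v1[0], v1[1])
    d01 = math.hypot(a1[0] - a0[0], a1[1] - a0[1])
    cos_t = (d0 * d0 + d1 * d1 - d01 * d01) / (2 * d0 * d1)
    theta = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    cross = v0[0] * v1[1] - v0[1] * v1[0]  # z-component of v0 x v1
    return theta if cross >= 0 else -theta

# Feature point moves from (1, 0) to (0, 1) about the origin: 90 deg CCW
print(rotation_angle((0, 0), (1, 0), (0, 1)))  # 90.0
```

Summing this signed angle over successive frames gives the accumulated rotation used in step Ⅴ to detect full turns beyond ±360°.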
CN201710282836.9A 2017-04-26 2017-04-26 A kind of tired driver detection method and detection means based on steering wheel image Pending CN106874900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710282836.9A CN106874900A (en) 2017-04-26 2017-04-26 A kind of tired driver detection method and detection means based on steering wheel image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710282836.9A CN106874900A (en) 2017-04-26 2017-04-26 A kind of tired driver detection method and detection means based on steering wheel image

Publications (1)

Publication Number Publication Date
CN106874900A true CN106874900A (en) 2017-06-20

Family

ID=59161493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710282836.9A Pending CN106874900A (en) 2017-04-26 2017-04-26 A kind of tired driver detection method and detection means based on steering wheel image

Country Status (1)

Country Link
CN (1) CN106874900A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101025729A (en) * 2007-03-29 2007-08-29 复旦大学 Pattern classification rcognition method based on rough support vector machine
CN101110103A (en) * 2006-07-20 2008-01-23 中国科学院自动化研究所 A learning-based automatic inspection method for image registration
CN102878951A (en) * 2012-09-14 2013-01-16 北京航空航天大学 Method and device for detecting rotation angle of vehicle steering wheel based on image
CN103279741A (en) * 2013-05-20 2013-09-04 大连理工大学 Pedestrian early warning system based on vehicle-mounted infrared image and working method thereof
CN104036619A (en) * 2013-03-04 2014-09-10 德尔福电子(苏州)有限公司 Fatigue detection method and detection apparatus based on turn angle data of steering wheel
CN104688252A (en) * 2015-03-16 2015-06-10 清华大学 Method for detecting fatigue status of driver through steering wheel rotation angle information
CN104952210A (en) * 2015-05-15 2015-09-30 南京邮电大学 Fatigue driving state detecting system and method based on decision-making level data integration
CN105095835A (en) * 2014-05-12 2015-11-25 比亚迪股份有限公司 Pedestrian detection method and system
CN105354988A (en) * 2015-12-11 2016-02-24 东北大学 Driver fatigue driving detection system based on machine vision and detection method
CN105631485A (en) * 2016-03-28 2016-06-01 苏州阿凡提网络技术有限公司 Fatigue driving detection-oriented steering wheel operation feature extraction method
CN106408032A (en) * 2016-09-30 2017-02-15 防城港市港口区高创信息技术有限公司 Fatigue driving detection method based on corner of steering wheel


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108492527A (en) * 2018-05-18 2018-09-04 武汉理工大学 A kind of fatigue driving monitoring method based on passing behavior feature
CN108827148A (en) * 2018-05-24 2018-11-16 青岛杰瑞自动化有限公司 Rotating accuracy measurement method and measuring device
US11321952B2 (en) 2018-06-01 2022-05-03 Boe Technology Group Co., Ltd. Computer-implemented method of alerting driver of vehicle, apparatus for alerting driver of vehicle, vehicle, and computer-program product
CN108764185A (en) * 2018-06-01 2018-11-06 京东方科技集团股份有限公司 A kind of image processing method and device
CN108764185B (en) * 2018-06-01 2022-07-19 京东方科技集团股份有限公司 Image processing method and device
CN109243006A (en) * 2018-08-24 2019-01-18 深圳市国脉畅行科技股份有限公司 Abnormal driving Activity recognition method, apparatus, computer equipment and storage medium
CN109087485A (en) * 2018-08-30 2018-12-25 Oppo广东移动通信有限公司 Assisting automobile driver method, apparatus, intelligent glasses and storage medium
CN109584304A (en) * 2018-12-07 2019-04-05 中国科学技术大学 A kind of steering wheel angle measurement method and device, system
CN112238032A (en) * 2020-09-04 2021-01-19 上海尧崇智能科技有限公司 Gluing path generation method, device and system and computer-readable storage medium
CN112329731B (en) * 2020-11-27 2023-09-05 华南理工大学 Operation behavior detection method and system for forklift driver real operation assessment and coaching
CN112329731A (en) * 2020-11-27 2021-02-05 华南理工大学 Operation behavior detection method and system for forklift driver practical operation examination and coaching
CN113076874A (en) * 2021-04-01 2021-07-06 安徽嘻哈网络技术有限公司 Steering wheel angle detection system
CN115588275A (en) * 2022-09-28 2023-01-10 歌尔科技有限公司 AR device, smart watch, control method of smart watch and smart wearable device system
CN118092672A (en) * 2024-04-24 2024-05-28 广州美术学院 Image-based intelligent device interaction system and device
CN119540192A (en) * 2024-11-18 2025-02-28 衡阳县众诚钟表制造有限公司 Automatic detection method of clock time accuracy based on machine vision
CN119540192B (en) * 2024-11-18 2025-07-08 衡阳县众诚钟表制造有限公司 Automatic detection method for clock travel time accuracy based on machine vision

Similar Documents

Publication Publication Date Title
CN106874900A (en) A kind of tired driver detection method and detection means based on steering wheel image
CN103824420B (en) Fatigue driving identification system based on heart rate variability non-contact measurement
CN104029680B (en) Lane Departure Warning System based on monocular cam and method
CN108875642A (en) A kind of method of the driver fatigue detection of multi-index amalgamation
CN104011737B (en) Method for detecting mist
WO2018153304A1 (en) Map road mark and road quality collection apparatus and method based on adas system
WO2020029444A1 (en) Method and system for detecting attention of driver while driving
CN102254151A (en) Driver fatigue detection method based on face video analysis
CN105976402A (en) Real scale obtaining method of monocular vision odometer
CN102085099B (en) Method and device for detecting fatigue driving
CN106156725A (en) A kind of method of work of the identification early warning system of pedestrian based on vehicle front and cyclist
CN104013414A (en) Driver fatigue detecting system based on smart mobile phone
CN106205163A (en) Mountain-area road-curve sight blind area based on panoramic shooting technology meeting early warning system
CN102663352A (en) Track identification method
WO2023240805A1 (en) Connected vehicle overspeed early warning method and system based on filtering correction
CN106203273B (en) Lane detection system, method and the advanced driving assistance system of multiple features fusion
WO2020237939A1 (en) Method and apparatus for constructing eyelid curve of human eye
CN107316354A (en) A kind of method for detecting fatigue driving based on steering wheel and GNSS data
CN109886086A (en) Pedestrian detection method based on HOG feature and linear SVM cascade classifier
CN115273005A (en) An Environment Perception Method for Visual Navigation Vehicles Based on Improved YOLO Algorithm
CN103745238A (en) Pantograph identification method based on AdaBoost and active shape model
CN117671972A (en) Vehicle speed detection method and device for slow traffic system
CN116403185A (en) All-weather road obstacle detection and early warning method based on visual fusion perception
CN105718908B (en) A traffic police detection method and system based on clothing feature and attitude detection
CN106960193A (en) A kind of lane detection apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170620