
CN104318237A - Fatigue driving warning method based on face identification - Google Patents

Fatigue driving warning method based on face identification

Info

Publication number
CN104318237A
CN104318237A CN201410587499.0A
Authority
CN
China
Prior art keywords
image
eye
pupil
fatigue
eyes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410587499.0A
Other languages
Chinese (zh)
Inventor
孙海信
成垦
古叶
齐洁
程恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN201410587499.0A priority Critical patent/CN104318237A/en
Publication of CN104318237A publication Critical patent/CN104318237A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/162Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/176Dynamic expression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A fatigue driving early-warning method based on face recognition, in the field of computer vision. The application environment is set up with an infrared light source, and a filter is added to the camera so that the human eye produces the red-eye effect. The camera captures images under infrared illumination; image frames are acquired and differenced, and the resulting difference image highlights the pupil region exhibiting the red-eye effect. The difference image is then enhanced to emphasize the region of interest, namely the pupil within the eye region. Adaptive thresholding is applied to the enhanced image, which is binarized so that the difference between the pupil and the background is highlighted adaptively. A morphological opening is performed on the binarized image to remove residual noise and offset stripes outside the region of interest. Eye features are extracted and the pupil is located; a Kalman filter narrows the eye-region extraction range in the next frame, enabling accurate dynamic tracking and localization of the eyes. Finally, eye feature parameters are computed from the extracted features, and the degree of fatigue is judged from the PERCLOS value.

Description

Fatigue driving warning method based on face recognition

Technical Field

The present invention relates to computer vision, and in particular to a fatigue driving early-warning method based on face recognition.

Background Art

Worldwide, driver fatigue has become one of the major causes of traffic accidents. When a driver's vision blurs, reactions slow, or attention wanders because of insufficient sleep, alcohol, or illness, the driver's perception, reasoning, judgment, decision-making, and motor execution are directly impaired. For trucks that operate around the clock, fatigued driving can account for up to 60% of catastrophic accidents (China Traffic Yearbook [M]. China Communications Press, 2010). Given the severity of the traffic-accident situation, it is necessary to study methods for preventing driver fatigue. Using information about the driver's body to judge fatigue, issue early warnings, and provide corresponding protection is currently a research hotspot among experts and scholars at home and abroad, and the results are of great significance (Li Duhou, Liu Qun, Yuan Wei, et al. Relationship between fatigue driving and traffic accidents [J]. Journal of Traffic and Transportation Engineering, 2010, 10(2): 104-109). The institutions currently researching fatigue-driving detection and the detection methods under development mainly include:

Physiological-signal-based detection methods (Li Zengyong, Jiao Kun, Chen Ming. Correlation analysis between car drivers' simulated mental load and heart rate variability [J]. Beijing Biomedical Engineering, 2002, 3: 49-51) link biometric measurements to fatigue, mainly using various sensors to acquire the driver's physiological indicators while driving, including EEG, ECG, EMG, and body-fluid secretion.

Vehicle-behavior-based detection methods (A. Williamson, A. Feyer, R. Friswell. The impact of work practices on fatigue in long distance truck drivers. Accident Analysis and Prevention, 1996, 28(6): 709-719) detect when the vehicle deviates from its original route; the system captures this information and raises an alarm, and is mainly applied to infrared monitoring of vehicles on expressways.

Computer-vision-based detection methods (Research on driver fatigue detection based on eye recognition [D]. Dalian: Dalian Maritime University, 2013) analyze captured imagery for subtle fatigue-related behaviors, such as head nodding and dozing, eye closure or squinting, whether the gaze stays on the road, and yawning frequency, and use this motion information to judge the driver's degree of fatigue.

Therefore, improving the real-time performance, accuracy, and reliability of in-vehicle fatigue-alarm methods, reducing production cost, and finding non-contact, information-fusion fatigue detection methods are the research directions for future work on fatigue driving.

Summary of the Invention

The purpose of the present invention is to provide a vehicle-mounted, vision-based, real-time, fast, and accurate fatigue driving early-warning method based on face recognition.

The present invention comprises the following steps:

1) Set up the application environment: use an infrared light source and add a filter to the camera to eliminate the influence of visible light, so that the human eye produces the red-eye effect;

2) The camera captures images under infrared illumination; image frames are acquired and differenced, and the resulting difference image highlights the pupil region exhibiting the red-eye effect. The difference image is then enhanced to improve its visual quality and emphasize the region of interest, namely the pupil within the eye region;

3) Apply adaptive thresholding to the enhanced image and binarize it, so that the difference between the pupil and the background is highlighted adaptively;

4) Perform a morphological opening on the binarized image to remove residual noise and offset stripes outside the region of interest;

5) Extract eye features and locate the pupil; a Kalman filter narrows the eye-region extraction range in the next frame, enabling accurate dynamic tracking and localization of the eyes;

6) Compute eye feature parameters from the extracted eye features and judge the degree of fatigue from the PERCLOS value (a minimal end-to-end sketch of steps 2) to 6) follows this list).
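The steps above are described in prose only. As a minimal end-to-end sketch in Python with OpenCV (assuming 8-bit grayscale input frames, and with illustrative values for the 3×3 kernel and the pupil-area threshold that are not given in the patent), one pass of steps 2) to 6) could look like this:

```python
# Minimal sketch of steps 2)-6) for one bright/dark pupil frame pair.
# Histogram equalization as the "enhancement" and all numeric values are
# illustrative assumptions, not choices specified by the patent.
import cv2

def fatigue_step(bright_gray, dark_gray, closed_history, area_thresh=50.0):
    """bright_gray/dark_gray: 8-bit grayscale frames; closed_history: list of bools."""
    diff = cv2.absdiff(bright_gray, dark_gray)                  # step 2: frame difference
    enhanced = cv2.equalizeHist(diff)                           # step 2: image enhancement
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # step 3: Otsu binarization
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # step 4: opening
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)     # step 5: locate pupil blob
    pupil_area = max((cv2.contourArea(c) for c in contours), default=0.0)
    closed_history.append(pupil_area < area_thresh)             # step 6: open/closed state
    return sum(closed_history) / len(closed_history)            # step 6: PERCLOS value
```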

In step 1), the application environment may use a design with inner and outer infrared rings.

In step 2), the camera captures infrared images as follows: a difference operation is performed on the captured frames. Because the human eye produces the red-eye effect under the infrared light source, differencing the two adjacent images, one with a bright pupil and one with a dark pupil, removes background interference and highlights the region where pixel values differ most, i.e. the pupil region.

Background interference may be removed from the difference image as follows: the image is filtered to suppress noise introduced during acquisition or transmission and to eliminate its adverse effect on eye-feature extraction and target segmentation.
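As an illustration of the differencing and denoising described above, the following sketch differences adjacent bright-pupil and dark-pupil frames and median-filters the result; the 5×5 aperture is an assumed value, not one given in the patent.

```python
import cv2

def difference_and_denoise(bright_gray, dark_gray, ksize=5):
    """Difference adjacent bright/dark pupil frames and suppress acquisition noise.

    bright_gray, dark_gray: single-channel (grayscale) frames of equal size.
    ksize: median-filter aperture; 5 is an assumed, not patent-specified, value.
    """
    diff = cv2.absdiff(bright_gray, dark_gray)   # background cancels, pupil remains
    denoised = cv2.medianBlur(diff, ksize)       # remove salt-and-pepper style noise
    return denoised
```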

In step 3), the adaptive thresholding of the enhanced image uses the Otsu method to automatically select a suitable threshold for segmentation based on the difference image, so that the binarization is adaptive and more robust to environmental changes.
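A minimal sketch of the Otsu binarization, assuming the enhanced difference image is an 8-bit single-channel array; OpenCV's built-in Otsu mode stands in for whatever implementation the patent's system actually uses.

```python
import cv2

def binarize_otsu(enhanced_gray):
    """Binarize the enhanced difference image with an automatically chosen threshold.

    Passing 0 as the threshold together with THRESH_OTSU lets OpenCV pick the value
    that maximizes the between-class variance of foreground and background.
    """
    otsu_thresh, binary = cv2.threshold(enhanced_gray, 0, 255,
                                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return otsu_thresh, binary
```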

In step 4), the opening operation applied to the binarized image first erodes and then dilates the binary image. Opening removes small objects and breaks narrow pixel connections, while smoothing the boundary of the pupil region of interest without significantly changing its area, which would otherwise affect later computations.
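A sketch of the opening operation; the 3×3 elliptical structuring element is an assumed choice rather than a value stated in the patent.

```python
import cv2

def open_binary(binary, kernel_size=(3, 3)):
    """Erode then dilate (morphological opening) to remove small noise blobs and
    thin offset stripes while roughly preserving the pupil blob's area."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, kernel_size)
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```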

In step 5), the eye features are extracted and the pupil located as follows: the camera acquires inter-frame images in which the pupil brightness differs; combined with the geometric constraints of the human eye, the eye region can be found quickly and accurately. A Kalman filter then tracks and locates the pupil: it predicts the motion state of the next frame, narrows the range over which eye features are extracted, and thereby achieves accurate dynamic tracking of the eye region.

The Kalman-filter pupil tracking proceeds as follows (a code sketch is given after this list):

(1) Initialize the system state.

(2) In frame k, extract the eye pupil within the range predicted from the previous frame. If the pupil is extracted, correct the system state parameters of the Kalman filter; if it is not, the search range for the next frame reverts to the whole image, and the Kalman filtering process is restarted from the result of the next search.

(3) Use the Kalman filter to predict the motion state of the next frame and narrow the feature extraction range.

(4) Repeat steps (2) and (3).
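A minimal sketch of this tracking loop with OpenCV's cv2.KalmanFilter over a constant-velocity state (x, y, vx, vy); the noise covariances, the search margin, and the detect_pupil helper are illustrative assumptions rather than patent-specified values.

```python
import cv2
import numpy as np

def make_pupil_kalman():
    """Constant-velocity Kalman filter over the pupil centre (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)                      # 4 state variables, 2 measured
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2     # assumed tuning
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf

def track(frames, detect_pupil, search_margin=40):
    """detect_pupil(frame, roi) -> (x, y) pupil centre or None; roi=None means whole image."""
    kf, tracking = make_pupil_kalman(), False
    for frame in frames:
        if tracking:
            px, py = kf.predict().flatten()[:2]      # step (3): predicted pupil position
            roi = (int(px) - search_margin, int(py) - search_margin,
                   2 * search_margin, 2 * search_margin)
        else:
            roi = None                               # search the whole image
        centre = detect_pupil(frame, roi)            # step (2): extract pupil in that range
        if centre is None:
            kf, tracking = make_pupil_kalman(), False  # lost: restart the Kalman process
            continue
        if not tracking:                             # step (1): initialize on the first hit
            kf.statePost = np.array([[centre[0]], [centre[1]], [0], [0]], np.float32)
            tracking = True
        else:
            kf.correct(np.array(centre, np.float32).reshape(2, 1))  # correct the state
        yield centre
```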

In step 6), once the pupil region has been located, its contour is extracted and used to compute the eye feature parameters. The selected parameters are the eye's aspect ratio and the eye area. The aspect ratio is determined from the width and height of the bounding rectangle of the contour obtained after pupil localization; the eye area can be computed either by counting pixels or, more quickly, as the product of the bounding rectangle's width and height. From the area and aspect ratio of the contour's bounding rectangle, the eye is judged closed if the area is smaller than a certain threshold and open otherwise. The degree of fatigue is judged by computing, over a period of time, the ratio of eye-closed states to all states: the longer the driver's eyes remain closed, the larger the computed PERCLOS value and the greater the driver's fatigue. Measuring this ratio yields a quantified result for the driver's fatigue state; once the driver is detected to be in a fatigued driving state, corresponding measures are taken to raise an alarm.
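A sketch of the eye-parameter computation and the open/closed decision from the bounding rectangle of the pupil contour; the area threshold of 50 pixels is an assumed value.

```python
import cv2

def eye_parameters(binary_roi, area_thresh=50.0):
    """Return (aspect_ratio, area, is_open) for the largest pupil contour,
    or (0, 0, False) when no contour is found (treated as a closed eye)."""
    contours, _ = cv2.findContours(binary_roi, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0, 0.0, False
    pupil = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(pupil)
    aspect_ratio = h / float(w)              # height-to-width ratio of the box
    area = float(w * h)                      # fast area estimate: box width x height
    return aspect_ratio, area, area >= area_thresh
```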

The invention makes a scientific judgment of the driver's degree of fatigue by extracting pupil features of the eye and applying fatigue-judgment criteria. The beneficial effects of the invention are as follows:

Using machine vision to judge the driver's fatigue state has many advantages and is currently one of the most mainstream technologies. The invention unifies daytime and nighttime fatigue detection with a single feature-extraction scheme that is insensitive to environmental changes, adaptive, and highly accurate. Compared with pattern-recognition feature extraction, the method is designed for the system's operating environment; it is simple, efficient, and accurate, remedies the shortcomings of pattern recognition, and lays the foundation for large-scale deployment of the system.

Brief Description of the Drawings

Fig. 1 is a functional block diagram of an embodiment of the present invention;

Fig. 2 is a structural diagram of the infrared light source of an embodiment of the present invention;

Fig. 3 is a structural block diagram of the eye tracking and localization algorithm of an embodiment of the present invention;

Fig. 4 is a block diagram of fatigue discrimination in an embodiment of the present invention.

Detailed Description

To make the object, technical solution, and technical effects of the present invention clearer, the invention is further described below with reference to specific embodiments and the accompanying drawings.

The present invention is a fatigue driving early-warning method based on face recognition.

As shown in Fig. 1, the system extracts and locates eye features. Eye localization is the key step in judging whether the driver is fatigued; whether the driver's eyes can be located accurately directly affects the accuracy of fatigue detection. An 850 nm infrared light source is used to artificially produce the red-eye effect, so the camera can acquire two consecutive frames with different pupil brightness. A dark-pupil image, i.e. a general illumination image, is produced when the outer ring of lamps is lit. The two adjacent bright-pupil and dark-pupil frames are then differenced. This highlights the regions where pixel values differ, especially the pupil region, and suppresses unnecessary background interference. In the difference image, the pupil region appears clearly brighter than other regions, and the difference between the images is obvious. An image enhancement operation is applied to the difference image, and the enhanced image is binarized with a threshold to produce a binary image convenient for localization. For the thresholding, a suitable threshold V must be chosen to bring out the eye region effectively. The invention uses the Otsu method, which divides the image into background and target according to its grayscale characteristics and computes the threshold so that, after binarization, the between-class variance of foreground and background is maximized. In this way a suitable threshold is selected automatically from the difference image. A morphological opening is applied to the binary image to remove the residual noise and offset stripes outside the region of interest. A Kalman filter is used to track and locate the pupil, achieving dynamic localization of the eye region. Finally, the ratio of eye-closed states to all states over a period of time is computed, and the driver's degree of fatigue is judged according to the PERCLOS criterion.

As shown in Fig. 2, the invention uses an infrared light source to set up the driver's application environment. The closer the infrared light source is to the optical axis of the camera lens, the more easily the light reflected from the eyes is projected directly onto the camera sensor. When the camera captures images, the infrared source illuminates the driver's face, an infrared filter is attached to the lens to eliminate the influence of visible light, and images of the driver's eye region are captured in real time.

As shown in Fig. 3, after the camera acquires images exhibiting the red-eye effect, the bright-pupil and dark-pupil images are differenced to obtain an image in which the pupil stands out. A binary image is obtained after adaptive thresholding with the Otsu method; the system then locates the eyes by using the geometric characteristics of the eye as constraints to find the eye region. The eye region is filtered by the following four constraints (a code sketch of this filtering is given after the list):

1. The eyes lie in the upper half of the image, with the hair and forehead also occupying part of the space, so the row r containing the eye centre should satisfy L/15 < r < L/2, where L is the total number of rows of the image, i.e. the image height;

2. Given the shape of the pupil, the height-to-width (or width-to-height) ratio of the region should be close to 1;

3. The area of the eye region should be larger than a certain threshold, to reject interference from noise spots;

4. The area of the eye region should be smaller than a certain threshold, to reject interference from large white blobs caused by an improperly chosen threshold.
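A sketch of applying the four constraints to candidate regions in the binary image; min_area, max_area, and the ratio band used for constraint 2 are assumed values, not thresholds given in the patent.

```python
import cv2

def filter_eye_candidates(binary, min_area=30, max_area=2000):
    """Keep bounding boxes of connected regions that satisfy the four constraints.

    binary: binarized difference image; L is its height (number of rows).
    min_area / max_area and the 0.5-2.0 ratio band are assumed values.
    """
    L = binary.shape[0]
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        r = y + h / 2.0                       # row of the region centre
        area = w * h
        if not (L / 15.0 < r < L / 2.0):      # constraint 1: upper half of the image
            continue
        if not (0.5 < h / float(w) < 2.0):    # constraint 2: ratio close to 1
            continue
        if area <= min_area:                  # constraint 3: reject small noise spots
            continue
        if area >= max_area:                  # constraint 4: reject large white blobs
            continue
        candidates.append((x, y, w, h))
    return candidates
```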

After the eye regions have been selected, a Kalman filter is used to track the pupil, achieving dynamic tracking and localization of the eye region.

As shown in Fig. 4, once the position of the pupil has been located, its contour is extracted and used to compute the eye feature parameters. Parameters such as the eye's aspect ratio and area are simple, feasible, and effective for analysing the eye state. The aspect ratio is determined from the width and height of the bounding rectangle of the pupil-localized contour; the eye area, besides counting pixels, can also be computed quickly as the product of the bounding rectangle's width and height.

Eye open/closed judgment proceeds as follows: the eyes are located from the difference image. With the eyes closed, no red-eye effect is produced, and in the ideal case the pupil area after image processing is almost zero. The wider the eye is open, the more pronounced the red-eye effect and the larger the obtained pupil area. Therefore, the area and aspect ratio of the contour's bounding rectangle are computed: if the area is smaller than a certain threshold, the eye is judged closed; otherwise it is judged open.

The ratio of eye-closed states to all states over a period of time is then computed. The longer the driver's eyes remain closed, the larger the calculated PERCLOS value and the more serious the driver's fatigue. The driver's degree of fatigue can therefore be judged by measuring the ratio of eye-closed states.
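A sketch of the PERCLOS computation over a sliding window together with an alarm decision; the window length and the 0.4 alarm threshold are assumptions for illustration and are not given in the patent.

```python
from collections import deque

class PerclosMonitor:
    """Track the fraction of closed-eye frames over a sliding window."""

    def __init__(self, window_frames=900, alarm_threshold=0.4):
        # e.g. 900 frames ~ 30 s at 30 fps; both values are assumed, not from the patent
        self.states = deque(maxlen=window_frames)
        self.alarm_threshold = alarm_threshold

    def update(self, eye_closed):
        """Record one frame's open/closed state and return (perclos, alarm)."""
        self.states.append(bool(eye_closed))
        perclos = sum(self.states) / len(self.states)
        return perclos, perclos >= self.alarm_threshold
```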

Claims (10)

1. A fatigue driving early-warning method based on face recognition, characterized by comprising the following steps:
1) setting up the application environment: using an infrared light source and adding a filter to the camera to eliminate the influence of visible light, so that the human eye produces the red-eye effect;
2) capturing images under infrared illumination with the camera, acquiring image frames and differencing them, the resulting difference image highlighting the pupil region exhibiting the red-eye effect; then enhancing the difference image to improve its visual quality and emphasize the region of interest, namely the pupil within the eye region;
3) applying adaptive thresholding to the enhanced image and binarizing it, so as to adaptively highlight the difference between the pupil and the background;
4) performing a morphological opening on the binarized image to remove residual noise and offset stripes outside the region of interest;
5) extracting eye features and locating the pupil, using a Kalman filter to narrow the eye-region extraction range in the next frame and accurately perform dynamic tracking and localization of the eyes;
6) computing eye feature parameters from the extracted eye features and judging the degree of fatigue from the PERCLOS value.
2. The fatigue driving early-warning method based on face recognition of claim 1, characterized in that in step 1) the application environment uses a design with inner and outer infrared rings.
3. The fatigue driving early-warning method based on face recognition of claim 1, characterized in that in step 2) the camera captures infrared images as follows: a difference operation is performed on the captured images; by means of the red-eye effect produced by the human eye under the infrared light source, the two adjacent bright-pupil and dark-pupil images are differenced, and the resulting difference image removes background interference and highlights the region where pixel values differ most, i.e. the pupil region.
4. The fatigue driving early-warning method based on face recognition of claim 3, characterized in that background interference is removed from the difference image as follows: the image is filtered to suppress noise produced during image acquisition or transmission and to eliminate its adverse effect on eye-feature extraction and target segmentation.
5. The fatigue driving early-warning method based on face recognition of claim 1, characterized in that in step 3) the adaptive thresholding of the enhanced image uses the Otsu method to automatically select a suitable threshold for segmentation according to the difference image, making the binarization adaptive and more robust to the environment.
6. The fatigue driving early-warning method based on face recognition of claim 1, characterized in that in step 4) the opening operation applied to the binarized image is a process of first eroding and then dilating the binary image.
7. The fatigue driving early-warning method based on face recognition of claim 1, characterized in that in step 5) the eye features are extracted and the pupil located as follows: the camera acquires inter-frame images in which the pupil brightness differs; combined with the geometric constraints of the human eye, the eye region can be found quickly and accurately; a Kalman filter tracks and locates the pupil, predicting the motion state of the next frame, narrowing the range over which eye features are extracted, and accurately achieving dynamic tracking of the eye region.
8. The fatigue driving early-warning method based on face recognition of claim 7, characterized in that the Kalman-filter pupil tracking comprises the following steps:
(1) initializing the system state;
(2) in frame k, extracting the eye pupil within the range predicted from the previous frame; if the pupil is extracted, correcting the system state parameters of the Kalman filter; if not, reverting the search range of the next frame to the whole image and restarting the Kalman filtering process from the result of the next search;
(3) using the Kalman filter to predict the motion state of the next frame and narrow the feature extraction range;
(4) repeating steps (2) and (3).
9. The fatigue driving early-warning method based on face recognition of claim 1, characterized in that in step 6) the eye feature parameters are computed by, after locating the pupil region, extracting its contour and computing the eye's characteristic parameters; the characteristic parameters are the eye's aspect ratio and the eye area; the aspect ratio is determined from the width and height of the bounding rectangle of the contour after pupil localization; the eye area, besides counting pixels, can also be computed quickly as the product of the bounding rectangle's width and height; from the area and aspect ratio of the contour's bounding rectangle, the eye is judged closed if the area is smaller than a certain threshold, and open otherwise.
10. The fatigue driving early-warning method based on face recognition of claim 1, characterized in that in step 6) the fatigue judgment computes, over a period of time, the ratio of eye-closed states to all states; the longer the driver's eyes remain closed, the larger the computed PERCLOS value and the greater the driver's fatigue; the driver's degree of fatigue is judged by measuring the ratio of eye-closed states, yielding a quantified result for the driver's fatigue state; once the driver is detected to be in a fatigued driving state, corresponding measures are taken to raise an alarm.
CN201410587499.0A 2014-10-28 2014-10-28 Fatigue driving warning method based on face identification Pending CN104318237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410587499.0A CN104318237A (en) 2014-10-28 2014-10-28 Fatigue driving warning method based on face identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410587499.0A CN104318237A (en) 2014-10-28 2014-10-28 Fatigue driving warning method based on face identification

Publications (1)

Publication Number Publication Date
CN104318237A true CN104318237A (en) 2015-01-28

Family

ID=52373466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410587499.0A Pending CN104318237A (en) 2014-10-28 2014-10-28 Fatigue driving warning method based on face identification

Country Status (1)

Country Link
CN (1) CN104318237A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105286802A (en) * 2015-11-30 2016-02-03 华南理工大学 Driver fatigue detection method based on video information
CN105844218A (en) * 2016-03-16 2016-08-10 中山大学 Fatigue driving monitoring method based on TLD and contour coding
CN106127123A (en) * 2016-06-16 2016-11-16 江苏大学 A kind of human pilot face real-time detection method of driving round the clock based on RGB I
CN106774929A (en) * 2016-12-30 2017-05-31 维沃移动通信有限公司 The display processing method and virtual reality terminal of a kind of virtual reality terminal
CN106874871A (en) * 2017-02-15 2017-06-20 广东光阵光电科技有限公司 A kind of recognition methods of living body faces dual camera and identifying device
CN107566744A (en) * 2016-07-01 2018-01-09 现代自动车株式会社 For catching the apparatus and method of the reduced face-image of the reflection on glasses in vehicle
CN107595307A (en) * 2017-10-23 2018-01-19 湖南科技大学 Fatigue driving detection device and detection method based on machine vision eye recognition
CN107959756A (en) * 2017-11-30 2018-04-24 西安科锐盛创新科技有限公司 The system and method for electronic equipment are automatically closed in sleep
CN108304784A (en) * 2018-01-15 2018-07-20 武汉神目信息技术有限公司 A kind of blink detection method and device
CN108369480A (en) * 2015-12-25 2018-08-03 华为技术有限公司 A kind of man-machine interaction method and electronic equipment
CN108734086A (en) * 2018-03-27 2018-11-02 西安科技大学 The frequency of wink and gaze estimation method of network are generated based on ocular
CN108814630A (en) * 2018-07-11 2018-11-16 长安大学 A kind of driving behavior monitor detection device and method
CN110269716A (en) * 2019-06-21 2019-09-24 重庆医药高等专科学校 A kind of living body frog experimental provision
CN111493897A (en) * 2020-06-01 2020-08-07 南京国科医工科技发展有限公司 An intelligent health monitoring and promotion system for automobile drivers
CN111724408A (en) * 2020-06-05 2020-09-29 广东海洋大学 Validation experiment method of abnormal driving behavior algorithm model based on 5G communication
CN113805334A (en) * 2021-09-18 2021-12-17 京东方科技集团股份有限公司 Eye tracking system, control method and display panel
CN114971285A (en) * 2022-05-25 2022-08-30 福州大学 A method and device, equipment and medium for evaluating information load of driving environment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6130617A (en) * 1999-06-09 2000-10-10 Hyundai Motor Company Driver's eye detection method of drowsy driving warning system
CN102201148A (en) * 2011-05-25 2011-09-28 北京航空航天大学 Driver fatigue detecting method and system based on vision

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6130617A (en) * 1999-06-09 2000-10-10 Hyundai Motor Company Driver's eye detection method of drowsy driving warning system
CN102201148A (en) * 2011-05-25 2011-09-28 北京航空航天大学 Driver fatigue detecting method and system based on vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Houyuan: "Fatigue driving early-warning system based on DM3730", China Master's Theses Full-text Database, Engineering Science and Technology II *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105286802A (en) * 2015-11-30 2016-02-03 华南理工大学 Driver fatigue detection method based on video information
CN105286802B (en) * 2015-11-30 2019-05-14 华南理工大学 Driver Fatigue Detection based on video information
CN108369480A (en) * 2015-12-25 2018-08-03 华为技术有限公司 A kind of man-machine interaction method and electronic equipment
CN105844218A (en) * 2016-03-16 2016-08-10 中山大学 Fatigue driving monitoring method based on TLD and contour coding
CN106127123A (en) * 2016-06-16 2016-11-16 江苏大学 A kind of human pilot face real-time detection method of driving round the clock based on RGB I
CN107566744A (en) * 2016-07-01 2018-01-09 现代自动车株式会社 For catching the apparatus and method of the reduced face-image of the reflection on glasses in vehicle
CN106774929A (en) * 2016-12-30 2017-05-31 维沃移动通信有限公司 The display processing method and virtual reality terminal of a kind of virtual reality terminal
CN106774929B (en) * 2016-12-30 2020-04-03 维沃移动通信有限公司 Display processing method of virtual reality terminal and virtual reality terminal
CN106874871A (en) * 2017-02-15 2017-06-20 广东光阵光电科技有限公司 A kind of recognition methods of living body faces dual camera and identifying device
CN107595307A (en) * 2017-10-23 2018-01-19 湖南科技大学 Fatigue driving detection device and detection method based on machine vision eye recognition
CN107959756B (en) * 2017-11-30 2020-09-18 深圳市普斯美医疗科技有限公司 System and method for automatically turning off electronic equipment during sleeping
CN107959756A (en) * 2017-11-30 2018-04-24 西安科锐盛创新科技有限公司 The system and method for electronic equipment are automatically closed in sleep
CN108304784A (en) * 2018-01-15 2018-07-20 武汉神目信息技术有限公司 A kind of blink detection method and device
CN108734086A (en) * 2018-03-27 2018-11-02 西安科技大学 The frequency of wink and gaze estimation method of network are generated based on ocular
CN108734086B (en) * 2018-03-27 2021-07-27 西安科技大学 Blink frequency and gaze estimation method based on eye region generative network
CN108814630A (en) * 2018-07-11 2018-11-16 长安大学 A kind of driving behavior monitor detection device and method
CN110269716A (en) * 2019-06-21 2019-09-24 重庆医药高等专科学校 A kind of living body frog experimental provision
CN111493897A (en) * 2020-06-01 2020-08-07 南京国科医工科技发展有限公司 An intelligent health monitoring and promotion system for automobile drivers
CN111724408A (en) * 2020-06-05 2020-09-29 广东海洋大学 Validation experiment method of abnormal driving behavior algorithm model based on 5G communication
CN111724408B (en) * 2020-06-05 2021-09-03 广东海洋大学 Verification experiment method of abnormal driving behavior algorithm model based on 5G communication
CN113805334A (en) * 2021-09-18 2021-12-17 京东方科技集团股份有限公司 Eye tracking system, control method and display panel
CN113805334B (en) * 2021-09-18 2025-01-21 京东方科技集团股份有限公司 Eye tracking system, control method, and display panel
CN114971285A (en) * 2022-05-25 2022-08-30 福州大学 A method and device, equipment and medium for evaluating information load of driving environment
CN114971285B (en) * 2022-05-25 2024-07-02 福州大学 Driving environment information load evaluation method and device, equipment and medium

Similar Documents

Publication Publication Date Title
CN104318237A (en) Fatigue driving warning method based on face identification
CN102289660B (en) Method for detecting illegal driving behavior based on hand gesture tracking
CN104809445B (en) method for detecting fatigue driving based on eye and mouth state
CN100485710C (en) Method for recognizing vehicle type by digital picture processing technology
CN105354987B (en) Vehicle-mounted type fatigue driving detection and identification authentication system and its detection method
CN107292251B (en) Driver fatigue detection method and system based on human eye state
CN104013414B (en) A kind of Study in Driver Fatigue State Surveillance System based on intelligent movable mobile phone
CN101375796B (en) Fatigue driving real-time detection system
CN103714659B (en) Fatigue driving identification system based on double-spectrum fusion
CN102054163B (en) Method for testing driver fatigue based on monocular vision
CN100373397C (en) A kind of iris image preprocessing method
CN109934199A (en) A method and system for driver fatigue detection based on computer vision
Li et al. Nighttime lane markings recognition based on Canny detection and Hough transform
CN105354985B (en) Fatigue driving monitoring apparatus and method
CN101984478B (en) Abnormal S-type driving warning method based on binocular vision lane marking detection
CN101339603A (en) Method for selecting iris image with qualified quality from video stream
CN102768726B (en) Pedestrian detection method for preventing pedestrian collision
CN105286802B (en) Driver Fatigue Detection based on video information
CN106156725A (en) A kind of method of work of the identification early warning system of pedestrian based on vehicle front and cyclist
CN104021370A (en) Driver state monitoring method based on vision information fusion and driver state monitoring system based on vision information fusion
CN101281646A (en) Real-time detection method of driver fatigue based on vision
CN103268479A (en) All-weather fatigue driving detection method
CN104915642B (en) Front vehicles distance measuring method and device
CN103129468A (en) Vehicle-mounted roadblock recognition system and method based on laser imaging technique
CN104123549A (en) Eye positioning method for real-time monitoring of fatigue driving

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150128