
CN109948433A - An embedded face tracking method and device - Google Patents

An embedded face tracking method and device

Info

Publication number
CN109948433A
CN109948433A (application CN201910100302.9A)
Authority
CN
China
Prior art keywords
face
image
window
infrared camera
formula
Prior art date
Legal status
Pending
Application number
CN201910100302.9A
Other languages
Chinese (zh)
Inventor
庄千洋
张克华
王佳逸
陈倩倩
朱苗苗
丁璐
Current Assignee
Zhejiang Normal University CJNU
Original Assignee
Zhejiang Normal University CJNU
Priority date
Filing date
Publication date
Application filed by Zhejiang Normal University CJNU
Priority: CN201910100302.9A
Publication: CN109948433A
Legal status: Pending


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an embedded face tracking method and device. A face image processing module processes the face images acquired by an infrared camera to obtain a final face window; a horizontal deviation calculation module computes the horizontal deviation between the abscissa of the final face window's center point and the center point of the image acquired by the infrared camera; and a PWM module drives the servo gimbal to rotate, turning the infrared camera to correct the deviation. By means of infrared camera tracking, the invention keeps the detected face region at the center of the acquired image and avoids missed detections caused by the driver's face moving out of the acquisition range.

Description

An embedded face tracking method and device

Technical Field

The invention belongs to the field of automobile driving safety, and in particular relates to a face tracking method and device for driver fatigue detection.

Background Art

With the growing application of new technologies such as "Internet Plus", big data, and the Internet of Vehicles, people's traffic behavior and modes of transportation are changing, and the automobile has become an indispensable means of transport in daily life and work. As car ownership in China continues to soar, traffic accidents are also increasing. According to traffic statistics from the Ministry of Public Security, excluding accidents caused by subjective factors such as truck overloading and speeding, fatigue driving accounted for 14% of the causes of traffic accidents in 2018, making it a major objective "culprit" behind traffic accidents. Research on driver fatigue detection based on facial features is abundant, and face detection is the key step in such detection. Because the driver's face moves while the car is being driven, face detection suffers from poor real-time performance and is prone to false or missed detections.

Summary of the Invention

The technical problem to be solved by the present invention is to provide an embedded face tracking method and device that can track the driver's face in real time, achieve real-time detection with high accuracy, and avoid false detections.

To solve the above technical problem, the present invention adopts the following technical solution: an embedded face tracking method, comprising the following steps:

Step 1: Acquire a working image of the driver through an infrared camera, and convert the image into a grayscale image.

Step 2: Count the occurrences of each gray-level pixel in the grayscale image generated in Step 1, and apply histogram equalization to obtain an equalized image.

Step 3: Perform face detection on the equalized image generated in Step 2 using a frontal face detector to obtain multiple primary face candidate windows, and apply non-maximum suppression (NMS) to merge highly overlapping primary face candidate windows.

Step 4: Bisect each NMS-processed primary face candidate window vertically, crop the upper-half region, and feed it into a trained deep convolutional neural network to judge whether human eyes are present; retain the candidate windows in which eyes are detected as the final face window.

Step 5: After Steps 1-4, the final face window is obtained. Compute the coordinates of the center point of the final face window, and compute the horizontal deviation between the abscissa of that center point and the center point of the image acquired by the infrared camera.

The horizontal deviation is obtained by normalizing the difference between the abscissa x of the center point of the candidate face window and the abscissa w1 of the center point of the image acquired by the infrared camera, with the formula:

Δ = ((x - w1)/w1 - a1) * a2 (8)

where x is the abscissa of the center point of the candidate face window and w1 is the abscissa of the center point of the image acquired by the infrared camera; when a1 takes the value 0.5 and a2 takes the value 2, Δ is normalized to the range [-1, 1].

Step 6: Send the horizontal deviation obtained in Step 5 to the Arduino Leonardo microcontroller embedded in the LattePanda control board via serial communication; the Arduino Leonardo uses PWM to rotate the servo gimbal under the infrared camera and correct the deviation.

The servo gimbal corrects the deviation by minimizing the normalized horizontal deviation, thereby updating the servo gimbal angle, with the formula:

angle = ag + T * Δ (9)

where angle is the updated servo gimbal angle, ag is the original servo gimbal angle, T is the rotation factor, and Δ is the horizontal deviation between the abscissa of the candidate face center point and the abscissa of the center point of the infrared image.

Optionally, the window position information is denoted (x1, y1, w, h), where x1, y1 are the coordinates of the top-left vertex of the final face window and w, h are its width and height. The coordinates of the center point of the final face window are computed with the formula:

(xc, yc) = (x1 + w/2, y1 + h/2)

Optionally, T = 0.85.

Optionally, the histogram equalization of Step 2 first counts the number of pixels at each gray level of the input grayscale image, with the formula:

h(rk) = nk, k = 0, 1, 2, ..., 255 (1)

where rk is the k-th gray level and nk is the number of pixels at the k-th gray level in the image;

then compute the probability of each gray level, with the formula:

p(rk) = nk/n (2)

where nk is the number of pixels at the k-th gray level and n is the total number of pixels in the image; the cumulative probability of the first k gray levels is

s(rk) = p(r0) + p(r1) + ... + p(rk) (3)

Finally, update the gray value of each gray level with the formula:

S = 255 * s(rk) (4)

where p(rk) is the probability of the k-th gray level, s(rk) is the sum of the probabilities of the first k gray levels, and S is the gray level after equalization.

Optionally, the deep convolutional neural network of Step 4 consists of 1 data layer, 4 convolutional layers, 3 pooling layers, and 1 output layer. During training, 16,000 images of human eyes in different states at a resolution of 20*20 pixels are used as training data, and the main features of the training data are extracted with the formula:

C = (1/16000) * X^T X (5)

where X is the training data matrix of size 16000×400, each row representing one training sample, and C is the covariance matrix of the training data. Since the covariance matrix is symmetric, its eigenvector matrix is computed with the formula:

P = (e1 e2 ... en)^T (6)

where ei is an eigenvector (column vector) and λi is the corresponding eigenvalue (C ei = λi ei). Extracting the main contour features of the training images reduces the amount of computation and improves recognition accuracy.

The invention also provides an embedded face tracking device, comprising a base, a LattePanda control board and a servo gimbal fixed on the base, a camera bracket connected to the servo gimbal, and an infrared camera mounted on the camera bracket. The infrared camera is connected to the LattePanda control board via USB. The LattePanda control board is provided with a face image processing module, a horizontal deviation calculation module, and a PWM module: the face image processing module processes the face images acquired by the infrared camera to obtain the final face window; the horizontal deviation calculation module computes the horizontal deviation between the abscissa of the final face window's center point and the center point of the image acquired by the infrared camera; and the PWM module drives the servo gimbal to rotate, turning the infrared camera to correct the deviation.

With the above technical solution, the present invention acquires images of the driver's facial features through image histogram equalization, frontal face detection, deep-convolutional-neural-network image recognition, and infrared camera tracking. It has the following advantages:

1. An infrared camera is used, ensuring that clear images can be acquired for face detection both day and night.

2. Histogram equalization is applied to the acquired images, which increases image contrast and improves the accuracy of frontal face detection.

3. A trained deep convolutional neural network verifies whether eyes are present, confirming that detected face candidate windows are correct and reducing the false detection rate of face detection.

4. Infrared camera tracking keeps the detected face region at the center of the acquired image, avoiding missed detections caused by the driver's face moving out of the acquisition range.

The specific technical solutions of the present invention and their beneficial effects are described in detail in the following embodiments with reference to the accompanying drawings.

Description of the Drawings

The present invention is further described below with reference to the accompanying drawings and specific embodiments:

Fig. 1 is a schematic flowchart of face detection according to the present invention;

Fig. 2 is a schematic diagram of the structure of the deep convolutional neural network of the present invention;

Fig. 3 is a schematic diagram of the structure of the device of the present invention;

In Fig. 3: 1. infrared camera; 2. camera bracket; 3. LattePanda control board; 4. servo gimbal; 5. base.

Detailed Description

Embodiment 1

Referring to Fig. 1, an embedded face tracking method includes the following steps:

Step 1: Acquire a working image of the driver through an infrared camera and convert it into a grayscale image, which serves as input to the following steps.

Further, the infrared camera is equipped with 16 infrared LEDs around its lens, the resolution of the acquired image is 200*200 pixels, and the image content includes the driver from the shoulders up.
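As a minimal sketch of the grayscale conversion in Step 1 (not from the patent: the synthetic red frame stands in for a camera capture, and the BT.601 luma weights used here are the conventional choice, the same ones applied by OpenCV's cv2.cvtColor with COLOR_BGR2GRAY):

```python
import numpy as np

def to_grayscale(frame_bgr):
    """Convert a BGR frame (H, W, 3) to an 8-bit grayscale image using
    the standard ITU-R BT.601 luma weights."""
    b = frame_bgr[..., 0].astype(np.float64)
    g = frame_bgr[..., 1].astype(np.float64)
    r = frame_bgr[..., 2].astype(np.float64)
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)

# The patent's camera delivers 200*200-pixel frames; simulate one here.
frame = np.zeros((200, 200, 3), dtype=np.uint8)
frame[..., 2] = 255            # a pure-red stand-in frame
gray = to_grayscale(frame)     # 200x200 uint8 grayscale image
```

In a deployed system the frame would come from the camera itself, for example via OpenCV's VideoCapture.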

Step 2: Count the occurrences of each gray-level pixel in the grayscale image generated in Step 1, and apply histogram equalization to obtain an equalized image, which serves as input to the following steps.

Further, the histogram equalization first counts the number of pixels at each gray level of the input grayscale image, with the formula:

h(rk) = nk, k = 0, 1, 2, ..., 255 (10)

where rk is the k-th gray level and nk is the number of pixels at the k-th gray level in the image; then compute the probability of each gray level, with the formula:

p(rk) = nk/n (11)

where nk is the number of pixels at the k-th gray level and n is the total number of pixels in the image, and the cumulative probability of the first k gray levels is

s(rk) = p(r0) + p(r1) + ... + p(rk) (12)

Finally, update the gray value of each gray level with the formula:

S = 255 * s(rk) (13)

where p(rk) is the probability of the k-th gray level, s(rk) is the sum of the probabilities of the first k gray levels, and S is the gray level after equalization.
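The equalization of formulas (10)-(13) can be sketched in a few lines of NumPy (the low-contrast ramp image is a synthetic stand-in for a camera frame):

```python
import numpy as np

def equalize_gray(img):
    """Histogram equalization per formulas (10)-(13):
    h(r_k) = n_k; p(r_k) = n_k / n; s(r_k) = cumulative sum of p;
    S = 255 * s(r_k), applied as a lookup table."""
    hist = np.bincount(img.ravel(), minlength=256)  # h(r_k) = n_k
    p = hist / img.size                             # p(r_k) = n_k / n
    s = np.cumsum(p)                                # s(r_k)
    lut = np.round(255 * s).astype(np.uint8)        # S = 255 * s(r_k)
    return lut[img]

# A low-contrast ramp uses only gray levels 0..99; after equalization
# the output spreads over the full 0..255 range.
img = np.tile(np.arange(100, dtype=np.uint8), (100, 1))
out = equalize_gray(img)
```

The lookup-table form makes the per-pixel update a single indexing operation, which suits an embedded board.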

Step 3: Perform face detection on the equalized image generated in Step 2 using a frontal face detector to obtain multiple primary face candidate windows, and apply non-maximum suppression (NMS) to merge highly overlapping primary face candidate windows.

Further, the frontal face detector feeds Haar-like features extracted from the image into a trained AdaBoost cascade classifier for recognition, where the Haar-like features describe changes in the gray values of the image. Non-maximum suppression (NMS) then merges the face candidate windows detected by the frontal face detector according to the following rules: when the overlap between detected candidate face windows exceeds 50%, the face candidate window with the lower confidence is deleted; when one face candidate window is detected to be a sub-window of another, the sub-window is deleted.
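The two merge rules can be sketched as follows. This is an illustration only: the candidate windows and confidences are made up, and window overlap is measured here as intersection-over-union, a common choice that the text does not pin down; the Haar/AdaBoost detector itself (e.g. OpenCV's CascadeClassifier) is assumed to have produced the candidates.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) windows."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def is_subwindow(inner, outer):
    """True if `inner` lies entirely inside `outer`."""
    return (inner[0] >= outer[0] and inner[1] >= outer[1]
            and inner[0] + inner[2] <= outer[0] + outer[2]
            and inner[1] + inner[3] <= outer[1] + outer[3])

def merge_windows(windows):
    """NMS merge per the patent's two rules: drop the lower-confidence
    window when overlap exceeds 50%, and drop any sub-window.
    `windows` is a list of ((x, y, w, h), confidence) pairs."""
    kept = []
    for win, conf in sorted(windows, key=lambda wc: -wc[1]):
        if any(iou(win, k) > 0.5 or is_subwindow(win, k) for k, _ in kept):
            continue
        kept.append((win, conf))
    return kept

candidates = [((10, 10, 50, 50), 0.9),   # strongest frontal-face hit
              ((12, 12, 48, 48), 0.7),   # near-duplicate, lower confidence
              ((20, 20, 10, 10), 0.6),   # sub-window of the first
              ((120, 30, 40, 40), 0.8)]  # a separate detection
kept = merge_windows(candidates)         # two windows survive
```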

Step 4: Bisect each NMS-processed primary face candidate window vertically, crop the upper-half region, and feed it into a trained deep convolutional neural network to judge whether human eyes are present; retain the candidate windows in which eyes are detected as the final face window.
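Cropping the upper half of a candidate window is a one-line slicing operation (the window coordinates below are made up for illustration; (x, y, w, h) here means top-left corner plus width and height, with the origin at the top-left of the image):

```python
import numpy as np

def upper_half(gray, window):
    """Bisect the candidate window vertically and return the upper
    half-region, which is what gets fed to the eye-presence CNN."""
    x, y, w, h = window
    return gray[y:y + h // 2, x:x + w]

gray = np.zeros((200, 200), dtype=np.uint8)  # stand-in 200*200 frame
crop = upper_half(gray, (40, 60, 80, 100))   # hypothetical face window
```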

Further, as shown in Fig. 2, the deep convolutional neural network consists of 1 data layer, 4 convolutional layers, 3 pooling layers, and 1 output layer. During training, 16,000 images of human eyes in different states at a resolution of 20*20 pixels are used as training data, and the main features of the training data are extracted with the formula:

C = (1/16000) * X^T X (5)

where X is the training data matrix of size 16000×400, each row representing one training sample, and C is the covariance matrix of the training data. Since the covariance matrix is symmetric, its eigenvector matrix is computed with the formula:

P = (e1 e2 ... en)^T (6)

where ei is an eigenvector (column vector) and λi is the corresponding eigenvalue.
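This feature-extraction step can be sketched with NumPy. A sketch under stated assumptions: a small 160×400 random matrix stands in for the 16000×400 eye-image matrix, the covariance is computed as the standard sample covariance of the mean-centered data, and keeping 50 leading components is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((160, 400))            # stand-in for the 16000x400 eye data

Xc = X - X.mean(axis=0)               # center each of the 400 pixel features
C = (Xc.T @ Xc) / Xc.shape[0]         # sample covariance matrix, 400x400
eigvals, eigvecs = np.linalg.eigh(C)  # eigh exploits the symmetry of C
order = np.argsort(eigvals)[::-1]     # sort eigenvectors by eigenvalue
P = eigvecs[:, order].T               # P = (e1 e2 ... en)^T, as in formula (6)
features = Xc @ P[:50].T              # project samples onto 50 leading axes
```

Using np.linalg.eigh rather than the general eig is exactly the point the text makes: the symmetry of the covariance matrix permits a cheaper, numerically stable eigendecomposition.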

Step 5: After Steps 1-4, the final face window is obtained. Compute the coordinates of the center point of the final face window, and compute the horizontal deviation between the abscissa of that center point and the center point of the image acquired by the infrared camera.

The window position information is denoted (x1, y1, w, h), where x1, y1 are the coordinates of the top-left vertex of the final face window and w, h are its width and height. The coordinates of the center point of the final face window are computed with the formula:

(xc, yc) = (x1 + w/2, y1 + h/2)

The horizontal deviation is obtained by normalizing the difference between the abscissa x of the center point of the candidate face window and the abscissa w1 of the center point of the image acquired by the infrared camera, with the formula:

Δ = ((x - w1)/w1 - a1) * a2 (5)

where x is the abscissa of the center point of the candidate face window and w1 is the abscissa of the center point of the image acquired by the infrared camera. For computational convenience, the present invention normalizes the horizontal deviation; only when a1 = 0.5 and a2 = 2 can the horizontal deviation be compressed to the range [-1, 1].
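Formula (5) transcribes directly into code, shown exactly as printed, with the defaults a1 = 0.5 and a2 = 2 (the example coordinates are made up; with a 200-pixel-wide frame the image center abscissa would be w1 = 100):

```python
def horizontal_deviation(x, w1, a1=0.5, a2=2.0):
    """Normalized horizontal deviation: delta = ((x - w1)/w1 - a1) * a2."""
    return ((x - w1) / w1 - a1) * a2

delta = horizontal_deviation(150, 100)  # face-window center at x = 150
```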

Step 6: Send the horizontal deviation obtained in Step 5 to the Arduino Leonardo microcontroller embedded in the LattePanda control board via serial communication; the Arduino Leonardo uses PWM to rotate the servo gimbal under the infrared camera and correct the deviation.

Further, in Step 6 the servo gimbal corrects the deviation by minimizing the horizontal deviation, thereby updating the servo gimbal angle, with the formula:

angle = ag + T * Δ (6)

where angle is the updated servo gimbal angle, ag is the original servo gimbal angle, T is the rotation factor, and Δ is the horizontal deviation between the abscissa of the candidate face center point and the abscissa of the center point of the infrared image.

When a horizontal deviation appears between the center point of the final face window and the center point of the acquired image, setting T = 0.85 allows the updated gimbal angle to respond quickly to the deviation.
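The angle update and the serial hand-off can be sketched as follows. The angle formula is the patent's; the pyserial snippet in the comment is an assumption about how the deviation might reach the Arduino Leonardo, and the port name and baud rate are made up.

```python
def update_angle(ag, delta, T=0.85):
    """Servo gimbal angle update: angle = ag + T * delta."""
    return ag + T * delta

# Hypothetical hand-off of the deviation over the LattePanda's serial
# link to the embedded Arduino Leonardo (pyserial; port name assumed):
#
#   import serial
#   with serial.Serial("COM3", 9600, timeout=1) as port:
#       port.write(f"{delta:.3f}\n".encode())

angle = update_angle(90.0, -1.0)  # gimbal at 90 deg, face left of center
```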

The method of the invention has simple steps and strong operability, and achieves good face detection performance.

Embodiment 2

Referring to Fig. 3, an embedded face tracking device comprises a base, a LattePanda control board and a servo gimbal fixed on the base, a camera bracket connected to the servo gimbal, and an infrared camera mounted on the camera bracket. The infrared camera is connected to the LattePanda control board via USB. The LattePanda control board is provided with a face image processing module, a horizontal deviation calculation module, and a PWM module: the face image processing module processes the face images acquired by the infrared camera to obtain the final face window; the horizontal deviation calculation module computes the horizontal deviation between the abscissa of the final face window's center point and the center point of the image acquired by the infrared camera; and the PWM module drives the servo gimbal to rotate, turning the infrared camera to correct the deviation.

The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Those skilled in the art should understand that the present invention includes but is not limited to what is described in the above specific embodiments. Any modification that does not depart from the functional and structural principles of the present invention shall fall within the scope of the claims.

Claims (6)

1. An embedded face tracking method, characterized by comprising the following steps:
Step 1: acquiring a working image of the driver through an infrared camera, and converting the image into a grayscale image;
Step 2: counting the occurrences of each gray-level pixel in the grayscale image generated in Step 1, and applying histogram equalization to obtain an equalized image;
Step 3: performing face detection on the equalized image generated in Step 2 using a frontal face detector to obtain multiple primary face candidate windows, and applying non-maximum suppression (NMS) to merge highly overlapping primary face candidate windows;
Step 4: bisecting each NMS-processed primary face candidate window vertically, feeding the cropped upper-half region image into a trained deep convolutional neural network to judge whether human eyes are present, and retaining the candidate windows in which eyes are detected as the final face window;
Step 5: after Steps 1-4, obtaining the final face window, computing the coordinates of its center point, and computing the horizontal deviation between the abscissa of the center point of the final face window and the center point of the image acquired by the infrared camera, wherein the horizontal deviation is obtained by normalizing the difference between the abscissa x of the center point of the candidate face window and the abscissa w1 of the center point of the image acquired by the infrared camera, with the formula:
Δ = ((x - w1)/w1 - a1) * a2 (8)
where x is the abscissa of the center point of the candidate face window and w1 is the abscissa of the center point of the image acquired by the infrared camera; when a1 takes the value 0.5 and a2 takes the value 2, Δ is normalized to the range [-1, 1];
Step 6: sending the horizontal deviation obtained in Step 5 to the Arduino Leonardo microcontroller embedded in the LattePanda control board through serial communication, the Arduino Leonardo microcontroller rotating, under PWM control, the servo gimbal beneath the infrared camera to correct the deviation;
wherein the servo gimbal corrects the deviation by minimizing the normalized horizontal deviation so as to update the servo gimbal angle, with the formula:
angle = ag + T * Δ (9)
where angle is the updated servo gimbal angle, ag is the original servo gimbal angle, T is the rotation factor, and Δ is the horizontal deviation between the abscissa of the candidate face center point and the abscissa of the center point of the infrared image.
2. The embedded face tracking method according to claim 1, characterized in that: the window position information is denoted (x1, y1, w, h), where x1, y1 are the coordinates of the top-left vertex of the final face window and w, h are its width and height, and the coordinates of the center point of the final face window are computed as (xc, yc) = (x1 + w/2, y1 + h/2).
3. The embedded face tracking method according to claim 1, characterized in that: T = 0.85.
4. The embedded face tracking method according to claim 1, characterized in that: the histogram equalization of Step 2 first counts the number of pixels at each gray level of the input grayscale image, with the formula:
h(rk) = nk, k = 0, 1, 2, ..., 255 (1)
where rk is the k-th gray level and nk is the number of pixels at the k-th gray level in the image;
then computes the probability of each gray level, with the formula:
p(rk) = nk/n (2)
where nk is the number of pixels at the k-th gray level and n is the total number of pixels in the image;
and finally updates the gray value of each gray level, with the formula:
S = 255 * s(rk) (4)
where p(rk) is the probability of the k-th gray level, s(rk) is the sum of the probabilities of the first k gray levels, and S is the gray level after equalization.
5. The embedded face tracking method according to claim 1, characterized in that: the deep convolutional neural network of Step 4 has a structure consisting of 1 data layer, 4 convolutional layers, 3 pooling layers, and 1 output layer; during training, 16,000 images of human eyes in different states at a resolution of 20*20 pixels are used as training data, and the main features of the training data are extracted with the formula:
C = (1/16000) * X^T X (5)
where X is the training data matrix of size 16000×400, each row representing one training sample, and C is the covariance matrix of the training data; since the covariance matrix is symmetric, its eigenvector matrix is computed with the formula:
P = (e1 e2 ... en)^T (6)
where ei is an eigenvector (column vector) and λi is the corresponding eigenvalue; extracting the main contour features of the training images reduces the amount of computation and improves recognition accuracy.
6. An embedded face tracking device, characterized by comprising: a base, a LattePanda control board and a servo gimbal fixed on the base, a camera bracket connected to the servo gimbal, and an infrared camera mounted on the camera bracket, the infrared camera being connected to the LattePanda control board via USB; the LattePanda control board being provided with a face image processing module, a horizontal deviation calculation module, and a PWM module, wherein the face image processing module processes the face images acquired by the infrared camera to obtain the final face window, the horizontal deviation calculation module computes the horizontal deviation between the abscissa of the final face window's center point and the center point of the image acquired by the infrared camera, and the PWM module drives the servo gimbal to rotate, turning the infrared camera to correct the deviation.
CN201910100302.9A 2019-01-31 2019-01-31 An embedded face tracking method and device Pending CN109948433A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910100302.9A CN109948433A (en) 2019-01-31 2019-01-31 An embedded face tracking method and device


Publications (1)

Publication Number Publication Date
CN109948433A 2019-06-28

Family

ID=67006694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910100302.9A Pending CN109948433A (en) 2019-01-31 2019-01-31 An embedded face tracking method and device

Country Status (1)

Country Link
CN (1) CN109948433A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111721420A (en) * 2020-04-27 2020-09-29 浙江智物慧云技术有限公司 Semi-supervised artificial intelligence human body detection embedded algorithm based on infrared array time sequence
CN112686851A (en) * 2020-12-25 2021-04-20 合肥联宝信息技术有限公司 Image detection method, device and storage medium
CN113112668A (en) * 2021-04-15 2021-07-13 新疆爱华盈通信息技术有限公司 Face recognition-based holder tracking method, holder and entrance guard recognition machine
CN115314609A * 2022-06-21 2022-11-08 中南大学 Method and device for automatically acquiring fire-eye video of an aluminum electrolysis cell
US11882366B2 (en) 2021-02-26 2024-01-23 Hill-Rom Services, Inc. Patient monitoring system
US12176105B2 (en) 2021-08-23 2024-12-24 Hill-Rom Services, Inc. Patient monitoring system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6940545B1 (en) * 2000-02-28 2005-09-06 Eastman Kodak Company Face detecting camera and method
WO2008151470A1 (en) * 2007-06-15 2008-12-18 Tsinghua University A robust human face detecting method in complicated background image
CN107194346A (en) * 2017-05-19 2017-09-22 福建师范大学 A vehicle driver fatigue prediction method
CN109034133A (en) * 2018-09-03 2018-12-18 北京诚志重科海图科技有限公司 A face recognition method and device

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
"Design and Implementation of a Face Recognition System Based on Multi-angle Video", China Masters' Theses Full-text Database, Information Science and Technology series *
ZHOU Baoxing: "3D Laser Scanning Technology and Its Applications in Deformation Monitoring", 31 January 2018 *
ZHOU Pin, ZHAO Xinfen: "MATLAB Mathematical Statistics", 30 April 2009 *
YAO Fenglin: "Digital Image Processing and Its Applications in Engineering", 30 April 2014 *
LI Dongjie, HE Anhua: "Face Detection, Localization and Tracking Based on Mega2560", Computer Engineering and Science *
LI Dongjie et al.: "Face Detection, Localization and Tracking Based on Mega2560", Computer Engineering and Science *
YANG Weihua, WU Maonian: "Artificial Intelligence in Ophthalmology", 28 February 2018 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111721420A (en) * 2020-04-27 2020-09-29 浙江智物慧云技术有限公司 Embedded semi-supervised artificial intelligence human body detection algorithm based on infrared array time series
CN111721420B (en) * 2020-04-27 2021-06-29 浙江智物慧云技术有限公司 Embedded semi-supervised artificial intelligence human body detection algorithm based on infrared array time series
CN112686851A (en) * 2020-12-25 2021-04-20 合肥联宝信息技术有限公司 Image detection method, device and storage medium
CN112686851B (en) * 2020-12-25 2022-02-08 合肥联宝信息技术有限公司 Image detection method, device and storage medium
US11882366B2 (en) 2021-02-26 2024-01-23 Hill-Rom Services, Inc. Patient monitoring system
CN113112668A (en) * 2021-04-15 2021-07-13 新疆爱华盈通信息技术有限公司 Face recognition-based pan-tilt tracking method, pan-tilt head, and access control recognition machine
US12176105B2 (en) 2021-08-23 2024-12-24 Hill-Rom Services, Inc. Patient monitoring system
CN115314609A (en) * 2022-06-21 2022-11-08 中南大学 An automated collection method and device for aluminum electrolytic cell fire eye video
CN115314609B (en) * 2022-06-21 2023-11-28 中南大学 An automated collection method and device for aluminum electrolytic cell fire eye video

Similar Documents

Publication Publication Date Title
CN109948433A (en) An embedded face tracking method and device
CN104361332B (en) 2014-10-31 2017-05-03 A face and eye region localization method for fatigue driving detection
CN106682603B (en) Real-time driver fatigue early warning system based on multi-source information fusion
CN109871799B (en) Method for detecting mobile phone playing behavior of driver based on deep learning
Li et al. Yawning detection for monitoring driver fatigue based on two cameras
CN105404857A (en) Infrared-based nighttime pedestrian detection method for intelligent vehicles
CN105373135A (en) Method and system for guiding airplane docking and identifying airplane type based on machine vision
CN112381870B (en) Binocular vision-based ship identification and navigational speed measurement system and method
CN104123549B (en) Eye positioning method for real-time monitoring of fatigue driving
CN105844257A (en) Machine vision-based early warning system and method for missing road delineators when driving in fog
CN113763326B (en) A pantograph detection method based on Mask Scoring R-CNN network
CN110992693A (en) A multi-dimensional analysis method of traffic congestion degree based on deep learning
CN104408932A (en) Drunk driving vehicle detection system based on video monitoring
CN106919902B (en) Vehicle identification and track tracking method based on CNN
CN109886086B (en) Pedestrian detection method based on HOG feature and linear SVM cascade classifier
CN114708532A (en) Monitoring video quality evaluation method, system and storage medium
CN107273852A (en) Machine vision-based detection algorithm for escalator floor plate objects and passenger behavior
CN106446792A (en) A feature extraction method for pedestrian detection in road traffic assisted driving environment
CN106886778A (en) A license plate character segmentation and recognition method for surveillance scenes
CN111079675A (en) Driving behavior analysis method based on target detection and target tracking
CN106548163A (en) Passenger flow counting method based on a TOF depth camera
CN115965926B (en) A vehicle-mounted road sign marking inspection system
CN113034378B (en) Method for distinguishing electric automobile from fuel automobile
CN110349415A (en) A driving speed measurement method based on multi-scale transform
CN105447431B (en) A machine vision-based docking aircraft tracking and positioning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190628