CN110276271A - Non-contact heart rate estimation method fusing IPPG and depth information to resist noise interference - Google Patents
- Publication number: CN110276271A
- Application number: CN201910462756.0A
- Authority: CN (China)
- Prior art keywords: face, image, heart rate, ippg, depth information
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61B5/02416—Measuring pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
- G06F18/25—Fusion techniques
- G06T5/70—Denoising; Smoothing
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V40/166—Human faces: detection; localisation; normalisation using acquisition arrangements
- G06V40/172—Human faces: classification, e.g. identification
- G06V40/18—Eye characteristics, e.g. of the iris
- G06F2218/04—Signal processing: denoising
- G06F2218/12—Signal processing: classification; matching
- G06T2207/10024—Image acquisition modality: color image
- G06T2207/10028—Image acquisition modality: range image; depth image; 3D point clouds
- G06T2207/30048—Subject of image: heart; cardiac
- G06T2207/30201—Subject of image: face
Abstract
The invention relates to a non-contact heart-rate estimation method that fuses IPPG and depth information to resist noise interference, comprising the following steps: 1) acquire the subject's face image and its corresponding depth information with an RGBD camera; 2) detect and locate the face in the acquired image by template matching with a face-detection algorithm, forming a monitoring region; 3) track the monitoring regions of consecutively acquired face images with a face-tracking algorithm to capture head motion, obtaining multiple face images with motion changes; 4) eliminate the motion-noise interference of the obtained face images with a weighted-mean method, achieving dynamic face recognition; 5) adaptively select the ROI of the recognized face image using the depth information; 6) extract the source signal from the ROI with IPPG, denoise it with the empirical mode decomposition (EMD) method, and perform energy-spectrum analysis with an ARMA model to obtain the subject's heart rate.
Description
Technical Field
The invention relates to the technical field of heart-rate estimation, and in particular to a non-contact heart-rate estimation method that fuses image photoplethysmography (IPPG) and depth information to resist motion-noise interference.
Background Art
According to World Health Organization data, more than 17.5 million people die of cardiovascular disease every year, making it the leading cause of death among non-communicable diseases. Heart rate is an important physiological parameter reflecting the state of the heart and a key indicator for cardiovascular disease prevention and clinical diagnosis.
Among traditional measurement methods, the electrocardiogram (ECG) is the standard for heart-rate measurement in clinical practice. It requires medical staff to place the lead electrodes at the correct lead positions; the procedure is cumbersome and requires professional operation, and during use a conductive medium must be applied, or pressure used, to fix the electrodes to the skin, causing considerable discomfort to the subject, while the complicated leads are inconvenient to handle. Existing heart-rate detection devices and related signal-processing techniques mainly include the following schemes:
(1) Patch-type heart-rate monitors: electrode patches are attached at specific positions on the subject's body and the heart rate is extracted from the body signal. However, the attached patches cause some discomfort and skin irritation during detection, the test environment is restricted, long-term real-time monitoring is impossible, and the procedure is complicated and inconvenient.
(2) Capacitive heart-rate measurement: non-contact heart-rate detection based on the capacitive-coupling principle. Capacitive non-contact ECG sensors offer good measurement performance and high reliability and can be used for long-term ECG acquisition. However, the measurement result is easily corrupted by motion noise, so the effect obtained by this method is not ideal.
(3) Photoelectric pulse measurement: the pulse signal is detected by exploiting the high opacity of blood and the fact that light penetrates ordinary tissue tens of times more easily than blood. However, whether a reflective or transmissive arrangement is used, this method is likewise susceptible to motion noise and, to reduce ambient-light interference, requires the sensor to be pressed against the skin.
(4) Photoplethysmographic heart-rate measurement: the heart rate is computed by recording how much light is absorbed by skin and tissue as the light passes through them. Since the technique does not touch the subject, it is a non-contact measurement with a good user experience. However, it is limited by ambient light: once the illumination changes or is too weak, the estimate becomes inaccurate; the subject must sit in a fixed position, and any change of position degrades the result; when the subject moves, ROI selection also becomes inaccurate; and removing the noise caused by relative motion between the subject and the image-acquisition device is a technical difficulty.
(5) Image-based pulse measurement: vessel deformation caused by the heartbeat can be observed through the resulting deformation of the skin around the vessels. However, this measurement is easily perturbed by motion artifacts and vibration and is affected by the lighting conditions of the experiment.
(6) On the signal-processing side, the main techniques currently used are finite impulse response (FIR) filters, band-pass filters, and moving-average filters.
In recent years, with the development of electronics and measurement methods, non-contact heart-rate measurement has been proposed and realized. Photoplethysmography is a non-invasive technique that uses photoelectric sensors to detect blood-volume changes in living tissue. Because different body tissues absorb light differently, the received light intensity varies regularly after transmission or reflection; the regular contraction and relaxation of the heart is the main cause of this variation, so the pulsatile signal is directly related to the heartbeat. However, the above measurement methods suffer from acquired signals that are easily disturbed by motion noise, leading to low accuracy.
Image photoplethysmography (IPPG) is a technique developed in recent years on the basis of traditional photoplethysmography that enables "remote" (greater than 0.5 m) non-contact measurement of physiological signals. Its non-contact nature, low cost, and ease of operation make clinical and daily monitoring feasible in certain scenarios. Existing IPPG methods generally use a fixed ROI and require the subject to sit still in front of the image-acquisition device, without considering subject movement; a fixed ROI cannot overcome the inaccuracy of the estimated heart-rate signal caused by the noise introduced when the subject moves.
Summary of the Invention
The purpose of the invention is to address the deficiencies of the prior art by providing a non-contact heart-rate estimation method fusing image photoplethysmography (IPPG) and depth information that is well designed, highly accurate, and resistant to motion-noise interference.
To achieve the above object, the invention adopts the following technical solution:
A non-contact heart-rate estimation method fusing IPPG and depth information to resist noise interference, comprising the following steps:

1) Information acquisition: acquire the subject's face image and its corresponding depth information with an RGBD camera;

2) Face detection: detect and locate the face in the acquired image by template matching with a face-detection algorithm, forming a monitoring region;

3) Face tracking: track the monitoring regions of consecutively acquired face images with a face-tracking algorithm to capture head motion, obtaining multiple face images with motion changes;

4) Motion-noise suppression: eliminate the motion-noise interference of the obtained face images with a weighted-mean method, achieving dynamic face recognition;

5) ROI extraction: adaptively select the ROI of the recognized face image using the depth information;

6) Heart-rate computation with IPPG: extract the source signal from the ROI with IPPG, denoise it with the empirical mode decomposition (EMD) method, and perform energy-spectrum analysis with an ARMA model to obtain the subject's heart rate.
Preferably, the RGBD camera in step 1) comprises an RGB camera and a depth camera; the RGB camera captures RGB images of the human body, and the depth camera captures depth images of the face. After the RGB camera and the depth camera are calibrated, the depth image captured by the depth camera and the RGB image captured by the RGB camera are registered to coincide.
Preferably, the face-detection method of step 2) is: take the acquired face image as the input image and search rectangular regions of every possible scale and position in it, i.e. the candidate windows to be examined; process each candidate window as follows: first perform coarse screening by two-eye template matching; then apply mean-variance normalization to the image inside the window, standardizing the intensity distribution of the face region and eliminating the influence of illumination changes; finally perform face-template matching, and if the score does not exceed the set threshold, output the window as a candidate face.
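A minimal NumPy sketch of the window normalization and template matching described above, using a synthetic image and template (the patent's actual eye templates, face template, and threshold are not reproduced in this text):

```python
import numpy as np

def normalize_window(win):
    # Mean-variance normalization: zero mean, unit variance,
    # removing the influence of global illumination changes.
    w = win.astype(float)
    std = w.std()
    return (w - w.mean()) / std if std > 0 else w - w.mean()

def match_score(win, template):
    # Normalized correlation between a candidate window and the
    # template; 1.0 is a perfect match.
    a = normalize_window(win).ravel()
    b = normalize_window(template).ravel()
    return float(a @ b) / a.size

# Synthetic example: plant an 8x8 'face' template in a 32x32 image.
rng = np.random.default_rng(0)
template = rng.random((8, 8))
image = rng.random((32, 32))
image[10:18, 12:20] = template        # face at row 10, column 12

scores = {(i, j): match_score(image[i:i + 8, j:j + 8], template)
          for i in range(25) for j in range(25)}
best = max(scores, key=scores.get)    # location of the best match
```

Because every window is reduced to zero mean and unit variance before scoring, a uniform brightness or contrast change over the window does not alter the score; the best-scoring window here coincides with the planted location.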
Preferably, the face-tracking method of step 3) is: assume that only one face appears in each frame of the sequence, and set the initial monitoring region to R_0. First detect the face inside the monitoring region of each frame. If a face is detected, compute the new monitoring region for the next frame from the localization result according to the monitoring-region formula, and process each frame this way. If detection fails, keep the monitoring region unchanged and continue detecting in subsequent frames, so that tracking is not lost through an occasional missed detection. When the number of missed frames exceeds a limit, i.e. the missed detection times out, the face is considered to have disappeared, and detection restarts in the initial monitoring region for new faces that may appear in subsequent frames;
Assume the monitoring region is represented by a six-tuple LR = (x_min, x_max, y_min, y_max, ch_min, ch_max), where x_min, x_max bound the x-coordinate of the face center, y_min, y_max bound the y-coordinate of the face center, and ch_min, ch_max bound the face scale. Approximate the face region as a square represented by a triple L = (x_center, y_center, ch_face), whose elements describe in turn the center position and size of the face. The monitoring region is then obtained from the following formula:

where a and b bound the change of the face's x and y position between two frames, respectively, and c is the maximum change of the face scale.
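The monitoring-region formula itself appears only as an image in the original document; a plausible form consistent with the variable definitions above (symmetric margins a, b on position and c on scale, which is an assumption) can be sketched as:

```python
def next_monitor_region(x_center, y_center, ch_face, a, b, c):
    # Next frame's monitoring region six-tuple
    # LR = (x_min, x_max, y_min, y_max, ch_min, ch_max)
    # from the current face triple L = (x_center, y_center, ch_face).
    # The symmetric-margin form is an assumption; the patent's exact
    # formula is not reproduced in this text.
    return (x_center - a, x_center + a,
            y_center - b, y_center + b,
            ch_face - c, ch_face + c)

# Face detected at (100, 80) with scale 40; allow 15/10 pixels of
# positional drift and 5 units of scale change between frames.
lr = next_monitor_region(100, 80, 40, a=15, b=10, c=5)
```

Restricting detection in the next frame to this region is what makes the tracker cheap: the full-image search of step 2) is only needed for the initial frame and after a missed-detection timeout.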
Preferably, the motion-noise suppression method of step 4) is: according to the change of the yaw (left-right sway) angle of the face, define the weight of each varying face image and construct a weighted-average face; then, based on the change of the pitch angle of the face, divide the images into three levels (looking down, level, and looking up), and construct a weighted-mean face at each level, forming a weighted-mean face matrix;
Assume the given motion-varying grayscale face image is I_j(x, y); the weighted-mean face is then given by the following formula:

where ZM is the total number of different motion images of the same person, j indexes the j-th varying image, (x, y) are the two-dimensional image-plane coordinates, and ω_j is the weight of the j-th face image;
The determination of ω_j is tied to the yaw angle of the head motion, and ω_j is computed as follows: assume the eye coordinates before the head sways are A(x_A, y_A) and B(x_B, y_B), and after the head sways to the right they are A'(x_{A'}, y_{A'}) and B'(x_{B'}, y_{B'}); the center of the circle containing the horizontal cross-section of the face is O(x_o, y_o), and the rightward sway angle is ∠δ. The weight ω_K of the K-th face image is then computed from the following formula:

where δ_k is the sway angle of the k-th face image, so solving for the weight ω_k reduces to computing the sway angle δ_k;
The eye coordinates A(x_A, y_A), B(x_B, y_B) and the post-sway coordinates A'(x_{A'}, y_{A'}), B'(x_{B'}, y_{B'}) have already been located; assume the radius of the circle on which the eyes rotate is R. Then
The coordinates of the center O(x_o, y_o) are obtained through the following calculation:

Combining equations (6) and (7) above and simplifying gives:

Similarly:

Combining equations (8) and (9) yields the coordinates of the center O(x_o, y_o); substituting the result into equation (6) gives the radius R, and the sway angle δ can then be solved from equation (5). The sway angle δ is thus a function of the eye coordinates, i.e.

δ = f(x_A, x_B, x_{A'}, x_{B'}) (10)
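Equations (5) through (9) appear only as images in the original document. The underlying geometry, that the eyes lie on a circle of radius R about O and the yaw δ is the rotation between OA and OA', can be sketched numerically as follows, recovering O as the circumcenter of three observed eye positions; this is an illustrative reconstruction, not the patent's exact algebra:

```python
import math

def circle_center(p1, p2, p3):
    # Circumcenter of three points: intersection of the
    # perpendicular bisectors of the chords.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return ux, uy

def sway_angle(A, Ap, B):
    # Yaw angle between OA and OA', with O recovered from the
    # three eye observations A, A', B on the cross-section circle.
    O = circle_center(A, Ap, B)
    a1 = math.atan2(A[1] - O[1], A[0] - O[0])
    a2 = math.atan2(Ap[1] - O[1], Ap[0] - O[0])
    return abs(a2 - a1)

# Synthetic check: eyes on a circle of radius 5 about the origin,
# head swayed by 20 degrees.
delta = math.radians(20)
A = (5 * math.cos(1.0), 5 * math.sin(1.0))
B = (5 * math.cos(2.0), 5 * math.sin(2.0))
Ap = (5 * math.cos(1.0 + delta), 5 * math.sin(1.0 + delta))
recovered = sway_angle(A, Ap, B)
```

With noise-free coordinates the recovered angle matches the simulated sway exactly, which is the property equation (10) expresses: δ is fully determined by the eye coordinates before and after the sway.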
Let the yaw angle be δ and the pitch angle be γ, and build the table relating the partition of the face's motion range to the corresponding weighted-mean faces (Fig. 5). Given a motion-varying grayscale face image I_i(x, y), the weighted-mean face for yaw angles within 0°-90° is defined as in equation (11):

where ZM is the total number of faces within a given pitch-angle and yaw-angle range, j indexes the j-th face, (x, y) are the two-dimensional image-plane coordinates, (p, q) is the index of the mean face corresponding to that range, and F'_{p,q} denotes the weighted-mean face for yaw angles between -90° and 0°;
The weighted-mean faces computed in this way finally form the weighted-mean face matrix covering the various motion changes of the face:
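A sketch of the weighted-mean face construction of equations (11)-(12). The patent's weight formula in terms of the sway angle δ_k is not reproduced in this text, so a cosine weight ω_k = cos δ_k (frontal faces weighted most, strongly swayed faces least) is assumed purely for illustration:

```python
import math
import numpy as np

def weighted_mean_face(images, sway_angles_deg):
    # F(x, y) = sum_j w_j * I_j(x, y) / sum_j w_j over the ZM
    # motion images of one person. The cosine weighting is an
    # assumption standing in for the patent's (unreproduced)
    # angle-based weight formula.
    weights = np.array([math.cos(math.radians(d)) for d in sway_angles_deg])
    stack = np.stack([np.asarray(img, dtype=float) for img in images])
    return np.tensordot(weights, stack, axes=1) / weights.sum()

# Two synthetic 4x4 'faces': one frontal (weight cos 0 = 1.0) and
# one swayed 60 degrees (weight cos 60 = 0.5).
faces = [np.full((4, 4), 10.0), np.full((4, 4), 20.0)]
mean_face = weighted_mean_face(faces, [0.0, 60.0])
```

One such mean face would be built per cell of the pitch/yaw partition table, and the cells together form the weighted-mean face matrix used for dynamic face recognition.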
Frame the recognized face image with a rectangle. Let h be the distance from the depth camera to the object surface, N the measured number of pixels inside the framed face region, and S the actual area of the object; let M be the number of pixels in the depth camera's entire field of view and S_1 the area of the entire field of view; let α and β be the horizontal and vertical fields of view of the depth camera, respectively. At a given distance, the ratio of the measured pixel count N to the actual object area S equals the ratio of the total pixel count M to the total field-of-view area S_1, i.e.

N/S = M/S_1 (13)
where N is the number of pixels in the face region counted from the image, and S_1 can be obtained from the geometric relation of the depth camera's field of view, as shown below:

S_1 = 2h·tan(α/2) × 2h·tan(β/2) = 4h²·tan(α/2)·tan(β/2) (14)
Since the total number of pixels in the field of view S_1 is M, this principle allows the face-region area S to be computed for the assembled image-acquisition system once S_1, M, and N are known:

S = (N/M)·S_1 (15)
To determine the ROI, the invention draws parallel lines bounding the band from 45% to 75% of the recognized face image's height, measured from the top edge, so that the ROI height is 30% of the overall image height; the ROI width is taken as 70% of the recognized image width, yielding an ROI that contains only the nose and the left and right cheeks.
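The depth-based area computation and the fixed-proportion ROI crop can be sketched as follows; the field-of-view area S_1 follows the standard pinhole geometry implied by Fig. 6, and horizontal centering of the 70%-wide ROI band is an assumption:

```python
import math

def face_area(h, alpha_deg, beta_deg, N, M):
    # Actual face area S from the pixel-count ratio N/S = M/S1,
    # where S1 is the field-of-view area at distance h:
    # S1 = 2h*tan(alpha/2) * 2h*tan(beta/2).
    s1 = (2 * h * math.tan(math.radians(alpha_deg) / 2)
          * 2 * h * math.tan(math.radians(beta_deg) / 2))
    return s1 * N / M

def roi_box(face_w, face_h):
    # ROI inside the recognized face box: the band from 45% to 75%
    # of the height (30% tall) and 70% of the width (assumed to be
    # horizontally centered), covering only the nose and cheeks.
    return (round(0.15 * face_w), round(0.45 * face_h),
            round(0.85 * face_w), round(0.75 * face_h))

# Example: camera 1 m away with a 60 x 45 degree field of view and
# a VGA-like pixel count; 5000 face pixels out of 307200.
S = face_area(h=1.0, alpha_deg=60.0, beta_deg=45.0, N=5000, M=307200)
box = roi_box(face_w=200, face_h=300)   # (left, top, right, bottom)
```

Because h comes from the depth image, the same face occupying fewer pixels at a larger distance still maps to the same physical area S, which is what makes the ROI selection adaptive to subject movement.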
Preferably, the heart-rate computation with IPPG in step 6) is: extract the source signal from the ROI by splitting the ROI image into the R, G, and B channels, taking the G channel, which has the least noise, as the source-signal channel, and applying spatial pixel averaging to the separated G-channel image:

Z(k) = (1/(g·d)) Σ_{i=1}^{g} Σ_{j=1}^{d} z_{i,j}(k), k = 1, 2, …, K (16)
where k is the frame index, K is the total number of frames, Z(k) is the one-dimensional source signal of the G channel, z_{i,j}(k) is the G-channel color intensity of pixel (i, j), and g and d are the height and width of the image, respectively;
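The spatial pixel averaging of equation (16) reduces each G-channel ROI frame to one sample of a one-dimensional source signal:

```python
import numpy as np

def ippg_source_signal(frames_g):
    # Z(k) = (1/(g*d)) * sum over all g*d pixels of the G-channel
    # ROI of frame k, giving a one-dimensional signal of length K.
    return np.array([np.asarray(f, dtype=float).mean() for f in frames_g])

# Two synthetic 3x4 G-channel ROI frames.
frames = [np.full((3, 4), 100.0), np.arange(12.0).reshape(3, 4)]
Z = ippg_source_signal(frames)
```

Averaging over the whole ROI suppresses per-pixel sensor noise, leaving the frame-to-frame intensity variation that carries the pulse.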
Signal denoising with the empirical mode decomposition (EMD) method proceeds as follows:

(1) Take the mean of the upper and lower envelopes of the original signal u(t) to obtain the signal m_1(t);

(2) Compute the first-order residual hp_1(t) = u(t) − m_1(t) and check whether hp_1(t) satisfies the IMF conditions; if not, return to step (1), using hp_1(t) as the input of the second sifting, i.e.:

hp_2(t) = hp_1(t) − m_2(t) (17)

Repeat the sifting k times:

hp_k(t) = hp_{k−1}(t) − m_k(t) (18)

until hp_k(t) satisfies the IMF conditions, which gives the first IMF component IMF_1, namely

IMF_1 = hp_k(t) (19)

(3) Subtracting IMF_1 from the original signal u(t) gives the residual r_1(t), namely

r_1(t) = u(t) − IMF_1 (20)

(4) Let u_1(t) = r_1(t), take u_1(t) as the new original signal, and repeat the above steps to obtain the second IMF component IMF_2; repeat this n times;

(5) When the n-th residual r_n(t) becomes a monotonic function from which no further IMF can be extracted, the EMD decomposition is complete; the original signal u(t) can then be expressed as the sum of the n IMF components and a mean-trend component r_n(t), i.e.:

u(t) = Σ_{i=1}^{n} IMF_i + r_n(t) (21)

Use the ARMA model to perform energy-spectrum analysis on the EMD components whose frequencies lie within the heartbeat band of 0.75-2.0 Hz, corresponding to the normal human heart-rate range of 45-120 beats/min; the frequency at the highest energy point is the heartbeat frequency f_h, and the heart rate is:

XLV = 60 f_h (22).
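The final step, taking the highest-energy frequency in the 0.75-2.0 Hz band and converting it to beats per minute via equation (22), can be sketched with a plain FFT periodogram standing in for the patent's EMD denoising and ARMA energy spectrum (a deliberate simplification):

```python
import numpy as np

def heart_rate_from_signal(z, fs):
    # Restrict the periodogram to the 0.75-2.0 Hz heartbeat band
    # (45-120 beats/min) and return 60 times the peak frequency.
    # A plain FFT periodogram stands in for the patent's EMD
    # denoising plus ARMA energy spectrum.
    z = np.asarray(z, dtype=float) - np.mean(z)
    freqs = np.fft.rfftfreq(len(z), d=1.0 / fs)
    power = np.abs(np.fft.rfft(z)) ** 2
    band = (freqs >= 0.75) & (freqs <= 2.0)
    fh = freqs[band][np.argmax(power[band])]
    return 60.0 * fh

# Synthetic IPPG source signal: a 1.2 Hz pulse (72 beats/min) plus
# noise, sampled at 30 frames per second for 20 seconds.
fs = 30.0
t = np.arange(0, 20.0, 1.0 / fs)
rng = np.random.default_rng(1)
z = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
bpm = heart_rate_from_signal(z, fs)
```

Restricting the search to the physiological band is what rejects out-of-band residue such as slow illumination drift and respiration below 0.75 Hz.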
The invention adopts the above technical solution. To further improve the accuracy of image photoplethysmography (IPPG) heart-rate estimation, the invention proposes a non-contact heart-rate estimation method fusing IPPG and depth information. Using images captured in real time by an RGBD camera, depth information is added and fused into the IPPG signal extraction, and an adaptive ROI extraction method is designed to reduce the noise introduced into the IPPG source signal by inappropriate ROI selection. Meanwhile, considering that the subject's head sway causes pitch and yaw changes of the face image, a weighted-average method is used to reduce the motion-noise interference produced by head movement, improving the accuracy of IPPG heart-rate estimation. Compared with existing methods, the proposed acquisition method uses an RGBD camera to obtain face-image information and depth information without contact; no electrode patches are needed during heart-rate monitoring, the method is convenient to use, is not limited by ambient light or affected by motion noise, and improves the accuracy of heart-rate monitoring.
Description of the Drawings

The invention is further described below with reference to the accompanying drawings:

Fig. 1 is a flow diagram of the non-contact heart-rate estimation method fusing IPPG and depth information of the invention;

Fig. 2 is a flow diagram of the face-detection algorithm of the invention;

Fig. 3 is a flow diagram of the face-tracking algorithm of the invention;

Fig. 4 is a cross-sectional view of a face image of the invention;

Fig. 5 is the table relating the partition of the face's motion range to the corresponding weighted-mean faces of the invention;

Fig. 6 is a diagram of the field of view of the depth camera of the invention.
具体实施方式Detailed ways
As shown in Figs. 1-6, the non-contact, noise-resistant heart rate estimation method of the present invention, which fuses IPPG technology with depth information, comprises the following steps:
1) Information acquisition: capture the subject's face image and its corresponding depth information with an RGBD camera;
2) Face detection: apply a template-matching face detection algorithm to the captured image to detect and locate the face, forming a monitoring region;
3) Face tracking: apply a face tracking algorithm to the monitoring regions of the consecutively captured face images to track the facial motion, obtaining multiple face images with motion changes;
4) Motion-noise elimination: apply the weighted mean method to the acquired face images with motion changes to eliminate motion-noise interference and realize dynamic face recognition;
5) ROI extraction: adaptively select the ROI on the recognized face image using the depth information;
6) Heart rate calculation with IPPG: extract the source signal from the ROI, denoise it with empirical mode decomposition (EMD), and obtain the subject's heart rate from an energy-spectrum analysis based on an ARMA model.
Preferably, the RGBD camera in step 1) comprises an RGB camera and a depth camera; the RGB camera captures RGB images of the body and the depth camera captures depth images of the face. After the RGB camera and the depth camera are calibrated, the depth image captured by the depth camera is registered with the RGB image captured by the RGB camera.
As shown in Fig. 2, the face detection in step 2) proceeds as follows: the captured face image is taken as the input image, and rectangular regions of every possible scale and position in the input image, i.e. the candidate windows to be examined, are searched. Each candidate window is processed in the following steps: binocular (two-eye) template matching first performs a coarse screening; the image inside the window is then standardized by its mean and standard deviation, normalizing the distribution of the face region and removing the influence of illumination changes; finally face template matching is performed, and if the matching distance does not exceed the set threshold, the window is output as a candidate face.
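As a minimal illustration of the coarse-to-fine matching described above, the sketch below scans a grayscale image at a single scale using normalized cross-correlation; the template arguments and the threshold values are illustrative assumptions (the patent searches multiple scales and does not state concrete thresholds):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-shaped patches; the
    mean/variance standardization suppresses illumination changes."""
    a = (a - a.mean()) / (a.std() + 1e-6)
    b = (b - b.mean()) / (b.std() + 1e-6)
    return float((a * b).mean())

def detect_candidate_faces(gray, eye_template, face_template,
                           eye_thresh=0.5, face_thresh=0.7):
    """Scan every window position; the eye template gives a coarse
    pre-screening, the face template confirms the candidate."""
    H, W = gray.shape
    th, tw = face_template.shape
    eh, ew = eye_template.shape
    candidates = []
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            # coarse screening: match the eye template at the window top
            if ncc(gray[y:y + eh, x:x + ew], eye_template) < eye_thresh:
                continue
            # fine screening: full face template match
            if ncc(gray[y:y + th, x:x + tw], face_template) >= face_thresh:
                candidates.append((x, y, tw, th))
    return candidates
```

A pyramid of rescaled inputs would extend this to the multi-scale search the patent describes.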
As shown in Fig. 3, the face tracking in step 3) proceeds as follows: assuming that only one face appears in each frame of the sequence, the initial monitoring region is set to R0. A face is first detected within the monitoring region of each frame. If a face is detected, the new monitoring region for the next frame is computed from the localization result by the monitoring-region formula, and each frame is processed in this way. If detection fails, the monitoring region is kept unchanged and detection continues in subsequent frames, so that an occasional missed detection does not cause tracking failure. When the number of missed frames exceeds a limit (a missed-detection timeout), the face is considered to have disappeared, and detection of new faces that may appear in subsequent frames restarts in the initial monitoring region;
The monitoring region is the range of possible positions and scales of a face in a given frame. Suppose the monitoring region is represented by a six-tuple LR = (xmin, xmax, ymin, ymax, chmin, chmax), where xmin, xmax bound the x coordinate of the face centre, ymin, ymax bound its y coordinate, and chmin, chmax bound the face scale. The face region is approximated by a square and represented by a triple L = (xcenter, ycenter, chface), whose elements describe the centre position and size of the face;
The task of face tracking is to follow the motion of a face through consecutive images. Based on the existing localization results, the tracking algorithm predicts the monitoring region of each face in the new frame and detects the face within that constrained region. Because human motion can be arbitrary (e.g. a sudden change of direction), predicting a new monitoring region from the past motion trajectory is difficult without an understanding of the person's behaviour. A simpler method is therefore adopted: only the tracking result of the previous frame is used, the maximum change of the face between two frames is bounded according to the needs of the application, and the new monitoring region is solved from these bounds. Although this somewhat affects processing speed, it guarantees robust tracking. The face tracking problem here concerns frontal face image sequences, in which facial motion consists mainly of translation and scale change. The monitoring region is therefore solved by the following formula:
where a and b bound the change of the face's x and y positions between two frames, and c bounds the maximum change of face scale; they depend on the speed of the facial motion and on the relative position of the face and the camera.
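The monitoring-region formula itself is not reproduced in this text, so the sketch below assumes the simplest additive form consistent with the description: the previous centre position and scale, widened by the per-frame bounds a, b and c:

```python
def update_region(face, a, b, c):
    """Monitoring region LR = (xmin, xmax, ymin, ymax, chmin, chmax) for
    the next frame, from the previous detection L = (xc, yc, ch) and the
    per-frame bounds a, b (position) and c (scale). The additive form is
    an assumption, since the patent's formula image is not reproduced."""
    xc, yc, ch = face
    return (xc - a, xc + a, yc - b, yc + b, ch - c, ch + c)
```

On a detection failure the caller would simply reuse the previous region, as the tracking procedure above prescribes.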
Pose changes of a face in three-dimensional space consist of translations along, and rotations about, the horizontal, vertical and depth axes. Some of these translations and rotations can be effectively handled by geometric normalization, but geometric normalization usually cannot cope with pitch (up-down) and yaw (left-right) rotations of the face image. To address the difficulty of detecting faces in motion, a face recognition algorithm based on the weighted mean face is adopted.
Preferably, the motion-noise elimination in step 4) proceeds as follows: based on the change of the face's yaw (left-right swing) angle, a weight is defined for each face image and a construction strategy for the weighted average face is proposed; the pitch-angle changes of the face are then divided into three levels (looking down, looking straight ahead and looking up), and a weighted mean face is constructed within each level, forming a weighted-mean-face matrix that realizes dynamic face recognition. The captured face images with motion changes are superimposed to form a mean face; such an image contains the combined information of multiple moving faces and reflects how the face changes under different motions.
Suppose the given motion-varying grayscale face images are I_j(x, y); the weighted mean face (WMF) is then defined as follows:
WMF(x, y) = Σ_{j=1}^{ZM} ω_j · I_j(x, y)
where ZM is the total number of motion images of the same person, j indexes the j-th varying image, x, y are the two-dimensional image-plane coordinates, and ω_j is the weight of the j-th face image;
As shown in Fig. 4, ω_j is determined by the yaw angle of the facial motion. It is computed as follows: let the eye coordinates before the swing be A(xA, yA) and B(xB, yB), and the eye coordinates after the face swings to the right be A'(xA', yA') and B'(xB', yB'); let the centre of the circle containing the face cross-section be O(xo, yo) and the rightward swing angle of the face be ∠δ. The weight ωK of the K-th face image is then computed by the following formula:
where δk is the yaw angle of the k-th face image, so that solving for the weight ωk reduces to computing the swing angle δk;
The eye coordinates A(xA, yA), B(xB, yB) and the post-swing coordinates A'(xA', yA'), B'(xB', yB') have already been located; let R be the radius of the circle on which the eyes rotate, then
the coordinates of the centre point O(xo, yo) are obtained by the following calculation:
Combining equations (6) and (7) above and simplifying yields:
Similarly:
Combining equations (8) and (9) gives the coordinates of the centre O(xo, yo); substituting this result into equation (6) gives the radius R, after which equation (5) yields the swing angle δ. The swing angle δ is thus a function of the eye coordinates, namely
δ = f(xA, xB, xA', xB') (10)
Let δ be the yaw angle and γ the pitch angle; the table dividing the face's range of motion and relating it to the corresponding weighted mean faces is shown in Fig. 5. Given the motion-varying grayscale face images I_i(x, y), the weighted mean face for yaw angles within 0°-90° is defined by formula (11):
where ZM is the total number of faces within a given pitch-angle and yaw-angle range, j denotes the j-th face, x, y are the two-dimensional image-plane coordinates, p, q index the mean face corresponding to that range, and F'p,q denotes the weighted mean face for yaw angles between -90° and 0°;
The computed weighted mean faces finally form the weighted-mean-face matrix covering the various motion changes of the face:
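A weighted mean face as described above can be sketched as follows. Since the weight formula is not reproduced in this text, the weights are taken as given inputs (in the patent they derive from the yaw angle of each image):

```python
import numpy as np

def weighted_mean_face(images, weights):
    """Fuse ZM aligned grayscale face images I_j(x, y) into one weighted
    mean face. The weights are rescaled to sum to 1 so the result stays
    in the intensity range of the inputs."""
    imgs = np.stack([np.asarray(im, dtype=float) for im in images])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # weighted sum over the image axis: Σ_j ω_j · I_j(x, y)
    return np.tensordot(w, imgs, axes=1)
```

One such mean face would be built per cell of the pitch/yaw range table of Fig. 5, yielding the weighted-mean-face matrix.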
As shown in Fig. 6, the recognized face image is enclosed in a rectangle. Let h be the distance from the depth camera to the object surface, N the measured number of pixels in the boxed face region, S the actual area of the object, M the number of pixels in the depth camera's entire field of view, S1 the area of the entire field of view, and α and β the horizontal and vertical fields of view of the depth camera, respectively.
The ratio of the measured pixel count N within the camera's field of view at that distance to the actual object area S equals the ratio of the total pixel count M of the depth camera's entire field of view to the total field-of-view area S1 at that distance, i.e.
N / S = M / S1 (13)
where N is the number of pixels counted in the face region of the image, and S1 can be obtained by geometry from the depth camera's field of view, as shown in the following formula:
S1 = 4 · h² · tan(α/2) · tan(β/2) (14)
Given the total number of pixels M in the field of view S1, this principle allows the assembled image acquisition system to obtain the face area S once S1, M and N are known:
S = (N / M) · S1 (15)
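The pixel-ratio relation above (N/S = M/S1, with S1 from the field-of-view geometry) can be sketched directly; treating `alpha_deg` and `beta_deg` as FOV angles in degrees is a unit assumption, since the patent does not state units:

```python
import math

def face_area(h, alpha_deg, beta_deg, N, M):
    """Actual face area S from camera distance h, horizontal/vertical FOV
    angles (alpha, beta), face pixel count N and total pixel count M,
    using N/S = M/S1 with S1 = 4*h^2*tan(alpha/2)*tan(beta/2)."""
    s1 = 4.0 * h * h * (math.tan(math.radians(alpha_deg) / 2.0)
                        * math.tan(math.radians(beta_deg) / 2.0))
    return s1 * N / M
```

For example, with h = 1 m and 90° x 90° FOV, the full field covers 4 m², so a region holding a quarter of the pixels corresponds to 1 m².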
To determine the ROI, the present invention draws parallel lines on the recognized face image at 45% and 75% of its height from the top edge, taking 30% of the overall image height as the ROI height and 70% of the recognized image width as the ROI width, so that the ROI contains only the nose and the left and right cheeks.
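The ROI selection above can be sketched as a simple crop; horizontally centring the 70% width is an assumption, since the text fixes only the width itself:

```python
import numpy as np

def extract_roi(face_img):
    """Crop the ROI described in the text: rows from 45% to 75% of the
    face height (i.e. 30% of the height) and 70% of the width, covering
    the nose and cheeks. Horizontal centring is an assumption."""
    h, w = face_img.shape[:2]
    top, bottom = int(0.45 * h), int(0.75 * h)
    left = int(0.15 * w)
    return face_img[top:bottom, left:left + int(0.70 * w)]
```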
Preferably, the heart rate calculation with IPPG in step 6) proceeds as follows. The source signal is extracted from the ROI: the ROI image is split into R, G and B channels, the G channel, which carries the least noise, is taken as the source-signal channel, and the separated G-channel image is averaged spatially over its pixels:
Z(k) = (1 / (g · d)) · Σ_{i=1}^{g} Σ_{j=1}^{d} z_{i,j}(k) (16)
where k is the frame index, K is the total number of frames, Z(k) is the one-dimensional source signal of the G channel, z_{i,j}(k) is the G-channel intensity of pixel (i, j), and g and d are the image height and width, respectively;
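The spatial pixel averaging of the G channel can be sketched as:

```python
import numpy as np

def g_channel_signal(frames):
    """One-dimensional IPPG source signal Z(k): the spatial mean of the
    G channel over the ROI for each frame. Frames are assumed to be
    HxWx3 arrays in RGB channel order."""
    return np.array([np.asarray(f)[:, :, 1].astype(float).mean()
                     for f in frames])
```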
The signal is denoised by empirical mode decomposition (EMD) as follows:
(1) Compute the mean m1(t) of the upper and lower envelopes of the original signal u(t);
(2) Compute the first-order residual hp1(t) = u(t) − m1(t) and check whether hp1(t) satisfies the IMF conditions; if not, return to step (1), using hp1(t) as the original signal for a second sifting pass, i.e.:
hp2(t) = hp1(t) − m2(t) (17)
The sifting is repeated k times:
hpk(t) = hpk-1(t) − mk(t) (18)
until hpk(t) satisfies the IMF conditions, at which point the first IMF component IMF1 is obtained, namely
IMF1 = hpk(t) (19)
(3) Subtracting IMF1 from the original signal u(t) gives the residual r1(t), namely
r1(t) = u(t) − IMF1 (20)
(4) Let u1(t) = r1(t) and treat u1(t) as the new original signal; repeating the steps above yields the second IMF component IMF2, and so on, n times.
(5) When the n-th residual rn(t) has become a monotonic function from which no further IMF can be extracted, the EMD decomposition is complete. The original signal u(t) can then be expressed as the sum of the n IMF components and a mean-trend component rn(t), namely:
u(t) = Σ_{i=1}^{n} IMFi + rn(t) (21)
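The EMD sifting procedure above can be sketched as follows. This is a simplified sketch, not the patent's exact procedure: linear interpolation stands in for the usual spline envelopes, and a fixed sifting count stands in for the IMF test:

```python
import numpy as np

def _envelope_mean(x):
    """Mean of the upper and lower envelopes, or None when the signal has
    too few extrema; np.interp (linear) stands in for spline envelopes."""
    t = np.arange(x.size)
    maxi = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    mini = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if maxi.size < 2 or mini.size < 2:
        return None
    upper = np.interp(t, np.r_[0, maxi, x.size - 1], np.r_[x[0], x[maxi], x[-1]])
    lower = np.interp(t, np.r_[0, mini, x.size - 1], np.r_[x[0], x[mini], x[-1]])
    return (upper + lower) / 2.0

def emd(u, max_imfs=6, n_sift=10):
    """Sift out IMF components until the residue is (near-)monotonic.
    By construction sum(imfs) + residue reconstructs u exactly."""
    imfs, r = [], np.asarray(u, dtype=float).copy()
    for _ in range(max_imfs):
        h = r.copy()
        for _ in range(n_sift):           # fixed sifting count as stop rule
            m = _envelope_mean(h)
            if m is None:
                break
            h = h - m
        if _envelope_mean(h) is None:     # residue has become monotonic
            break
        imfs.append(h)
        r = r - h
    return imfs, r
```

The identity of equation (21), u(t) = Σ IMFi + rn(t), holds exactly for the returned components, since each IMF is subtracted from the running residue.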
An ARMA model is then used to perform energy-spectrum analysis on the EMD-decomposed signal components whose frequencies lie within the 0.75-2.0 Hz heartbeat band, corresponding to the normal human heart rate range of 45-120 beats/min. The frequency at the highest energy point is the heartbeat frequency fh, and the heart rate is:
HR = 60 · fh (22).
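The final band-limited spectral-peak search can be sketched as follows; a plain FFT periodogram is used here as a stand-in for the patent's ARMA-model energy spectrum:

```python
import numpy as np

def heart_rate_bpm(signal, fs):
    """Heart rate HR = 60 * f_h, where f_h is the frequency of the highest
    power-spectrum peak inside the 0.75-2.0 Hz heartbeat band. A plain
    periodogram stands in for the patent's ARMA spectral estimate."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                      # remove the DC component
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.75) & (freqs <= 2.0)
    f_h = freqs[band][np.argmax(power[band])]
    return 60.0 * f_h
```

For a 30 fps camera, ten seconds of signal gives a 0.1 Hz frequency resolution, i.e. 6 beats/min per spectral bin.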
The above description should not be construed as limiting the protection scope of the present invention in any way.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910462756.0A CN110276271A (en) | 2019-05-30 | 2019-05-30 | Non-contact Heart Rate Estimation Method Fusion IPPG and Depth Information Anti-Noise Interference |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110276271A true CN110276271A (en) | 2019-09-24 |
Family
ID=67961224
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105678780A (en) * | 2016-01-14 | 2016-06-15 | 合肥工业大学智能制造技术研究院 | Video heart rate detection method removing interference of ambient light variation |
CN106778695A (en) * | 2017-01-19 | 2017-05-31 | 北京理工大学 | A kind of many people's examing heartbeat fastly methods based on video |
CN106886216A (en) * | 2017-01-16 | 2017-06-23 | 深圳前海勇艺达机器人有限公司 | Robot automatic tracking method and system based on RGBD Face datections |
US9750420B1 (en) * | 2014-12-10 | 2017-09-05 | Amazon Technologies, Inc. | Facial feature selection for heart rate detection |
CN107358220A (en) * | 2017-07-31 | 2017-11-17 | 江西中医药大学 | A kind of human heart rate and the contactless measurement of breathing |
Non-Patent Citations (3)
Title |
---|
Liu Yi et al.: "Non-contact heart rate measurement method based on face video", Nanotechnology and Precision Engineering * |
Liang Luhong et al.: "Face tracking algorithm based on face detection", Computer Engineering and Applications * |
Zou Guofeng et al.: "Multi-pose face recognition based on weighted mean face", Application Research of Computers * |
Legal Events
Date | Code | Title |
---|---|---|
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
2019-09-24 | RJ01 | Rejection of invention patent application after publication |