
CN116889388A - An intelligent detection system and method based on rPPG technology - Google Patents

An intelligent detection system and method based on rPPG technology

Info

Publication number
CN116889388A
Authority
CN
China
Prior art keywords
rppg
signal
unit
channel
feature fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311161515.5A
Other languages
Chinese (zh)
Other versions
CN116889388B (en)
Inventor
孙运杰
嵇晓强
李贵文
隋雅茹
王美娇
饶治
郝颢
陶雪
马艳蓉
曹国华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Gauss Vision Technology Co ltd
Changchun University of Science and Technology
Original Assignee
Changchun Gauss Vision Technology Co ltd
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Gauss Vision Technology Co ltd, Changchun University of Science and Technology filed Critical Changchun Gauss Vision Technology Co ltd
Priority to CN202311161515.5A priority Critical patent/CN116889388B/en
Publication of CN116889388A publication Critical patent/CN116889388A/en
Application granted granted Critical
Publication of CN116889388B publication Critical patent/CN116889388B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/02: Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
    • A61B5/021: Measuring pressure in heart or blood vessels
    • A61B5/02108: Measuring pressure in heart or blood vessels from analysis of pulse wave characteristics
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Cardiology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physiology (AREA)
  • Veterinary Medicine (AREA)
  • Vascular Medicine (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of rPPG technology, and specifically to an intelligent detection system and method based on rPPG technology. The system comprises an information data preprocessing module, an rPPG signal segmentation and waveform selection module, a dual-channel feature fusion data prediction module, and an rPPG signal feature extraction and data prediction module. The information data preprocessing module collects information on the user's palm region in real time through a camera, divides an ROI from the collected palm information, extracts the G-channel signal of the image within the divided region, and preprocesses the mean pixel value of the G channel as the original rPPG signal. The invention provides a non-contact data measurement method based on rPPG, which has the advantages of being non-invasive, portable and universally applicable.

Description

An intelligent detection system and method based on rPPG technology

Technical Field

The invention relates to the field of rPPG technology, and specifically to an intelligent detection system and method based on rPPG technology.

Background

Remote photoplethysmography (rPPG) is a non-contact optical detection technique that extracts the human pulse wave signal from video. The blood volume in the vessels changes periodically with the contraction and relaxation of the heart, and hemoglobin at different blood volumes absorbs light to different degrees, which causes regular changes in the light reflected from the skin surface. By recording with a video imaging device and applying appropriate signal processing, the original pulse wave can therefore be recovered from the video. The change in blood volume is directly related to the lateral pressure that the blood exerts on the vessel wall: as the blood volume increases, the lateral pressure on the vessel wall also increases. In the prior art, the relevant data are measured by trained professionals, which is unsuitable for home monitoring and daily use; an intelligent detection system and method based on rPPG technology is therefore needed to achieve non-contact measurement.

Summary of the Invention

The purpose of the present invention is to provide an intelligent detection system and method based on rPPG technology to solve the problems raised in the background above. The present invention provides the following technical solution:

An intelligent detection method based on rPPG technology, the method comprising the following steps:

S1. Collect information on the user's palm region in real time through a camera, divide an ROI from the collected palm information, extract the G-channel signal of the image within the divided region, and preprocess the mean pixel value of the G channel as the original rPPG signal;

S2. Perform targeted single-cycle segmentation on the preprocessed rPPG signal, perform an initial screening of the segmentation results against the standard human pulse rate, and apply Tukey detection to the screened results;

S3. Based on the analysis results of S2, extract feature values from the rPPG signal after Tukey detection, train the network parameters with the Adam optimizer, and build a dual-channel feature fusion data prediction model;

S4. Collect information on current intensive care unit patients in real time, train the dual-channel feature fusion data prediction model on this information to obtain a trained model, and input the preprocessed signal into the trained dual-channel feature fusion data prediction model for prediction to obtain the feedback result.

Further, the method in S1 comprises the following steps:

Step 1001: Collect video of the user's palm region in real time through the camera and store each frame of the video, recorded as set A,

A = (A1, A2, A3, ..., An),

where An denotes the n-th frame and n denotes the total number of frames in the collected video;

Step 1002: Arbitrarily extract one frame for ROI division, and mark the 21 key points of the hand region in the n-th frame by image recognition;

take the center of the junction between the palm and the wrist as the first key point, recorded as key point 0; with key point 0 as the origin and reference point, construct a first plane rectangular coordinate system at unit-length intervals, mark the corresponding key points of the hand region of the n-th frame in this coordinate system, and number them in order;

Step 1003: In the first plane rectangular coordinate system, compute the slope of the line segment formed by each key point and the origin and the distance between the corresponding key point and the origin, and combine the results for each key point into set B,

B = {[(X^{A(n)}_{0,1}, D^{A(n)}_{0,1}), (X^{A(n)}_{0,2}, D^{A(n)}_{0,2}), ..., (X^{A(n)}_{0,20}, D^{A(n)}_{0,20})]},

where X^{A(n)}_{0,20} denotes the slope of the line segment formed by the key point numbered 20 and the key point numbered 0, and D^{A(n)}_{0,20} denotes the distance between the key point numbered 20 and the key point numbered 0 in the first plane rectangular coordinate system,

with X^{A(n)}_{0,20} = (y20 - y0)/(x20 - x0) and D^{A(n)}_{0,20} = [(y20 - y0)^2 + (x20 - x0)^2]^{1/2};

Step 1004: Repeat steps 1002 to 1003 to obtain, for every frame of the collected video, the slope of the line segment formed by each key point and the origin and the distance between the corresponding key point and the origin in the first plane rectangular coordinate system, and compute in turn the difference between the key-point positions of each frame and the standard positions, recorded as set C,

C = (C1, C2, C3, ..., Cn),

where Cn denotes the difference between the key-point positions of the n-th frame and the standard positions,

with Cn = α·Σ_{a=1}^{20} |X^{A(n)}_{0,a} - X_a^{standard}| / 20 + β·Σ_{a=1}^{20} |D^{A(n)}_{0,a} - D_a^{standard}| / 20,

where α and β are proportionality coefficients preset in the database, X^{A(n)}_{0,a} denotes the slope of the line segment formed by the key point numbered a and the key point numbered 0 in the n-th frame, X_a^{standard} denotes the standard slope of that line segment (a preset value in the database), D^{A(n)}_{0,a} denotes the distance between the key point numbered a and the key point numbered 0 in the first plane rectangular coordinate system, and D_a^{standard} denotes the standard distance between those key points (a preset value in the database);
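
The slope/distance features of step 1003 and the frame-difference score of step 1004 map directly onto a few lines of array code. The following is a minimal Python sketch, assuming the 21 key points of a frame are given as (x, y) pixel coordinates with index 0 at the palm-wrist junction; the standard slopes and distances and the coefficients alpha and beta stand in for the database presets mentioned above and are illustrative only.

```python
import numpy as np

def slope_distance_features(keypoints):
    """Step 1003: slope and distance of key points 1..20 relative to key point 0."""
    pts = np.asarray(keypoints, dtype=float)      # shape (21, 2): (x, y) per key point
    x0, y0 = pts[0]
    dx = pts[1:, 0] - x0
    dy = pts[1:, 1] - y0
    slopes = dy / np.where(dx == 0, 1e-9, dx)     # X^{A(n)}_{0,a}, guard against vertical segments
    dists = np.hypot(dx, dy)                      # D^{A(n)}_{0,a}
    return slopes, dists

def frame_difference(keypoints, std_slopes, std_dists, alpha=0.5, beta=0.5):
    """Step 1004: difference C_n of one frame from the standard key-point layout."""
    slopes, dists = slope_distance_features(keypoints)
    return (alpha * np.mean(np.abs(slopes - std_slopes))
            + beta * np.mean(np.abs(dists - std_dists)))
```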

Step 1005: Take the image corresponding to the minimum difference value in set C as the best image of the currently collected video, match this best image against the Hand Landmarker model, and locate the 21 key points of the hand region from the matching result in the best image. The Hand Landmarker model was trained by Google on about 30K real-world hand images and locates 21 key points of the hand region. Hand landmarks are determined for every collected frame; because the palm region is rich in blood vessels, the palm ROI is delimited from the key-point coordinates, so that accurate ROI positioning is achieved even in the presence of hand motion and external interference is minimized;
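
The patent relies on Google's Hand Landmarker model for the 21 key points; one readily available way to obtain equivalent landmarks in Python is MediaPipe's hands solution. The sketch below is an illustration of that idea only: the landmark indices used to bound the palm are an assumption, not the patent's exact ROI rule.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def palm_roi_from_frame(frame_bgr):
    """Locate the 21 hand landmarks and return a rectangular palm ROI (x, y, w, h)."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    h, w = frame_bgr.shape[:2]
    lm = result.multi_hand_landmarks[0].landmark
    # landmarks 0, 1, 5, 9, 13, 17 (wrist and finger bases) roughly bound the palm
    xs = [lm[i].x * w for i in (0, 1, 5, 9, 13, 17)]
    ys = [lm[i].y * h for i in (0, 1, 5, 9, 13, 17)]
    x1, x2, y1, y2 = int(min(xs)), int(max(xs)), int(min(ys)), int(max(ys))
    return x1, y1, x2 - x1, y2 - y1
```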

Step 1006: Blood tissue absorbs more light than other tissues, and the colour of an opaque object is determined by the light it reflects; blood reflects red light and absorbs green light, so the green channel of the captured video shows the most pronounced colour change. Since rPPG performs signal processing on changes in the reflected light, the G channel is retained: the mean G-channel pixel value of the best image is read and taken as the original rPPG signal. The Canny edge detection algorithm is then used to separate the contour of the hand region from the background to obtain a background ROI, and the rPPG signal of the hand ROI is corrected from the brightness variation of the background ROI, where the background ROI brightness variation is computed as

I(t) = [Σ_{c=1}^{w} Σ_{d=1}^{h} G_v(c, d, t)] / s,

where I(t) denotes the variation of the illumination intensity over time t, G_v(c, d, t) denotes the green-channel value at time t of the pixel in the background ROI whose abscissa is c pixel intervals and whose ordinate is d pixel intervals, w denotes the pixel width of the background ROI, h denotes its pixel height, and s denotes the total number of pixels in the background ROI. The discrete brightness-versus-time points are fitted with a ninth-order polynomial to obtain a brightness variation curve, which is subtracted from the original rPPG signal to remove illumination artefacts. To reduce the noise introduced when acquiring the original rPPG signal, ensemble empirical mode decomposition (EEMD) is used to denoise it, and, to provide well-located start and end points for the single-cycle segmentation and to reduce the error caused by insufficient sampling precision, the rPPG signal is interpolated with a cubic spline so that the original waveform is resampled from a 30 Hz sampling rate to 300 Hz.
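
A condensed sketch of the signal conditioning in step 1006, assuming per-frame mean G-channel values are already available for the palm ROI (g_palm) and the background ROI (g_bg) at a 30 Hz frame rate; the ninth-order polynomial fit of the background brightness and the cubic-spline resampling to 300 Hz follow the description above, while the EEMD denoising stage is omitted here (packages such as PyEMD provide it).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def condition_rppg(g_palm, g_bg, fs_in=30, fs_out=300, poly_order=9):
    """Remove slow illumination drift and resample the raw rPPG signal."""
    g_palm = np.asarray(g_palm, float)
    t = np.arange(len(g_palm)) / fs_in
    # fit the background brightness I(t) with a 9th-order polynomial and subtract it
    coeffs = np.polyfit(t, np.asarray(g_bg, float), poly_order)
    rppg = g_palm - np.polyval(coeffs, t)
    # cubic-spline interpolation from 30 Hz to 300 Hz
    t_new = np.arange(0, t[-1], 1.0 / fs_out)
    return t_new, CubicSpline(t, rppg)(t_new)
```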

By collecting video of the user's palm region in real time, analysing the key-point positions in the video, and matching each frame against the Hand Landmarker model, the invention accurately locates the 21 key points of the hand region; the Canny edge detection algorithm separates the contour of the hand region from the background to obtain a background ROI, and the brightness variation of the background ROI is used to correct the rPPG signal of the hand ROI, providing a data reference for the subsequent data prediction.

Further, the method in S2 comprises the following steps:

Step 2001: Obtain the preprocessed rPPG signal, construct a second plane rectangular coordinate system with point o1 as the origin, time on the abscissa and amplitude on the ordinate, and map the preprocessed rPPG signal into this coordinate system;

Step 2002: In the second plane rectangular coordinate system, mark all zero-crossing points in the falling phase of the preprocessed rPPG signal, combine adjacent marked points in turn, take the two adjacent marked points of each combination as the endpoints of an interval, and extract the minimum of the rPPG signal in that interval as a segmentation point for segmenting the rPPG signal;
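
A minimal sketch of the segmentation in step 2002, assuming sig is the preprocessed, zero-mean rPPG signal: falling-phase zero crossings are taken where the signal passes from positive to non-positive, and each cycle boundary is the minimum between two successive crossings.

```python
import numpy as np

def segment_single_cycles(sig):
    """Split an rPPG signal into single-cycle waveforms at the minima between
    successive falling-phase zero crossings."""
    sig = np.asarray(sig, float)
    falling = np.where((sig[:-1] > 0) & (sig[1:] <= 0))[0]   # falling-phase zero crossings
    cuts = [lo + np.argmin(sig[lo:hi]) for lo, hi in zip(falling[:-1], falling[1:])]
    return [sig[a:b] for a, b in zip(cuts[:-1], cuts[1:])]
```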

Step 2003: Repeat step 2002 to divide the preprocessed rPPG signal into a number of single-cycle waveforms, perform a preliminary screening of the single-cycle waveforms against the standard human pulse rate range, and then calibrate the waveforms in turn on the basis of the preliminary screening. Because the rPPG signal is easily contaminated by noise during recording, the rPPG waveforms must be screened. On the one hand, considering that the normal human pulse rate is 60 to 100 beats per minute and the sampling rate is 300 Hz, only single-cycle waveforms whose period contains at least 120 and at most 300 samples are retained. On the other hand, after this preliminary screening and considering that the pulse rate is individual and that each person's pulse-rate range differs, Tukey detection is applied to the preliminarily screened data with the number of samples of each single-cycle waveform as the criterion; the retention rule for a single-cycle waveform is

P_calibration = {[Q1 - k(Q3 - Q1) - r][Q3 + k(Q3 - Q1) - r]} · ξ · G_abnormal,

where Q1 denotes the lower quartile of the sample counts of all single-cycle waveforms, Q3 denotes the upper quartile, k denotes the anomaly coefficient (k = 3 indicates an extreme outlier and k = 1.5 a moderate outlier), r denotes the number of samples of the waveform, ξ denotes a proportionality coefficient preset in the database, and G_abnormal denotes the total number of abnormal samples in the single-cycle waveform; if [Q1 - k(Q3 - Q1) - r] < 0 and [Q3 + k(Q3 - Q1) - r] > 0 then the product [Q1 - k(Q3 - Q1) - r][Q3 + k(Q3 - Q1) - r] is set to 1, and otherwise to 0. The total number of abnormal samples in a single-cycle waveform is obtained by taking the difference between the waveform's samples and the standard human pulse rate range and counting the negative differences;
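
An illustrative Python sketch of the two-stage screening in step 2003, assuming cycles is the list of single-cycle waveforms produced by the segmentation above; the 120 to 300 sample bounds and the Tukey fences on the per-cycle sample counts follow the description, with k = 1.5 or 3.

```python
import numpy as np

def screen_cycles(cycles, min_len=120, max_len=300, k=1.5):
    """Keep single-cycle waveforms whose sample count is physiologically plausible
    (120..300 samples at 300 Hz, per the description) and inside the Tukey fences."""
    kept = [c for c in cycles if min_len <= len(c) <= max_len]   # preliminary screening
    if not kept:
        return []
    lengths = np.array([len(c) for c in kept])
    q1, q3 = np.percentile(lengths, [25, 75])
    lower, upper = q1 - k * (q3 - q1), q3 + k * (q3 - q1)        # Tukey fences, k = 1.5 or 3
    return [c for c, r in zip(kept, lengths) if lower < r < upper]
```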

By segmenting the signal into single cycles and screening the waveforms with the number of samples of each single-cycle waveform as the criterion, the invention provides a data reference for the subsequent model training and data detection based on the screening results.

Further, the method in S3 comprises the following steps:

Step 3001: Train the network parameters with the Adam optimizer, using the rectified linear unit as the activation function, the root mean square error as the loss function and the mean absolute error as the evaluation metric; set the learning rate to δ, shuffle the order of the preprocessed rPPG signals, and divide them into a training set and a test set;

Step 3002: Feed the input signal of the model into the FNN (multilayer perceptron) branch for feature extraction; the FNN consists entirely of fully connected layers and combines the input features through its multilayer structure in order to mine the strong correlation between the input features and the data;

The invention provides one input layer for receiving the data and nine hidden layers for mining the deep features of the input signal; the final output layer contains 160 nodes, and the vector containing the signal features is fed into the feature fusion module;
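
A hedged Keras sketch of the FNN branch described above: one input, nine ReLU hidden layers and a 160-node feature output. The hidden-layer width is not specified in the text, so the value used here is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

def fnn_branch(inp, hidden_units=256, hidden_layers=9):
    """FNN (MLP) branch: fully connected layers ending in a 160-node feature vector."""
    x = layers.Flatten()(inp)
    for _ in range(hidden_layers):                    # nine hidden layers, ReLU activations
        x = layers.Dense(hidden_units, activation='relu')(x)
    return layers.Dense(160, activation='relu')(x)    # 160-dimensional feature output
```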

Step 3003: Feed the input signal of the model simultaneously into the CNN (convolutional neural network) module for multi-view feature extraction. This module uses the AlexNet network as its backbone; the network structure contains 9 layers in total, the first 8 of which are convolutional and pooling layers. The convolutional layers extract features from the signal, the pooling layers use max pooling to reduce the size of the feature maps and the computational complexity, and the last layer is a flatten layer, which flattens the feature vector into one-dimensional data that is fed into the feature fusion module;
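
A matching sketch of the CNN branch, read as a 1-D AlexNet-style backbone: five convolutional layers and three max-pooling layers (eight layers in total) followed by a flatten layer. The filter counts, kernel size and pooling positions are assumptions patterned on AlexNet, not values given in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cnn_branch(inp):
    """CNN branch: AlexNet-style 1-D conv/pooling backbone plus a flatten layer."""
    x = inp
    filters = [32, 64, 128, 128, 64]          # assumed channel widths
    for i, f in enumerate(filters):
        x = layers.Conv1D(f, kernel_size=5, padding='same', activation='relu')(x)
        if i in (0, 1, 4):                    # pool after the 1st, 2nd and 5th conv, as in AlexNet
            x = layers.MaxPooling1D(pool_size=2)(x)
    return layers.Flatten()(x)
```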

Step 3004: Fuse the output features of the branches of step 3002 and step 3003, and predict the data through two fully connected layers.
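
Continuing the two branch sketches above, the following ties them together as described in step 3004: the branch outputs are concatenated and passed through two fully connected layers, and the model is compiled with the Adam optimizer, an RMSE loss and MAE as the metric (learning rate 0.001 as in Embodiment 1). The two-dimensional output, one node each for systolic and diastolic pressure, is an assumption based on Embodiment 2.

```python
import tensorflow as tf
from tensorflow.keras import layers

def rmse(y_true, y_pred):
    """Root mean square error, used as the training loss."""
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))

def build_dual_channel_model(cycle_len=300):
    """Dual-channel feature fusion model: FNN and CNN branches fused, two dense layers on top."""
    inp = layers.Input(shape=(cycle_len, 1))
    fused = layers.Concatenate()([fnn_branch(inp), cnn_branch(inp)])
    x = layers.Dense(64, activation='relu')(fused)    # first of the two prediction layers
    out = layers.Dense(2)(x)                          # assumed: systolic and diastolic pressure
    model = tf.keras.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss=rmse, metrics=['mae'])
    return model
```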

Further, the method in S4 collects information on current intensive care unit patients in real time, trains the dual-channel feature fusion data prediction model on this information to obtain a trained model, and inputs the preprocessed signal into the trained dual-channel feature fusion data prediction model for prediction to obtain the feedback result.

An intelligent detection system based on rPPG technology, the system comprising the following modules:

Information data preprocessing module: used to collect information on the user's palm region in real time through a camera, divide an ROI from the collected palm information, extract the G-channel signal of the image in the divided region, and preprocess the mean pixel value of the G channel as the original rPPG signal;

rPPG signal segmentation and waveform selection module: used to perform targeted single-cycle segmentation of the preprocessed rPPG signal, divide it into single-cycle signals according to the segmentation result, screen the rPPG waveforms obtained from the division, and apply Tukey detection to the screening results;

Dual-channel feature fusion data prediction module: used to build the dual-channel feature fusion data prediction model from the analysis results of the rPPG signal segmentation and waveform selection module;

rPPG signal feature extraction and data prediction module: used to train the dual-channel feature fusion data prediction model on the training data to obtain the trained model, and to input the preprocessed signal into the trained dual-channel feature fusion data prediction model for prediction to obtain the feedback result.

Further, the information data preprocessing module comprises an image acquisition unit, an ROI division unit, a channel data extraction unit and an rPPG signal preprocessing unit:

The image acquisition unit is used to collect video of the user's palm region in real time through the camera and to extract each frame of the collected video;

The ROI division unit is used to determine the hand landmarks of every collected frame from the analysis results of the image acquisition unit and to locate the 21 key points of the hand region according to the determination results;

The channel data extraction unit is used to obtain the three RGB colour channels from the analysis results of the ROI division unit, retain the G channel, and generate the original rPPG signal by computing the mean pixel value of the G channel;

The rPPG signal preprocessing unit is used to separate the contour of the hand region from the background with the Canny edge detection algorithm to obtain a background ROI, and to correct the rPPG signal of the hand ROI from the brightness variation of the background ROI.

Further, the rPPG signal segmentation and waveform selection module comprises a single-cycle segmentation unit, an rPPG waveform screening unit and a Tukey detection unit:

The single-cycle segmentation unit is used, on the basis of the analysis results of the information data preprocessing module, to extract all zero-crossing points in the falling phase of the original rPPG signal, take the minimum between two adjacent zero crossings as the starting point of a cycle, and segment the original rPPG signal accordingly;

The rPPG waveform screening unit is used to obtain sample-count bounds from the normal human pulse rate range and to perform a preliminary screening of the analysis results of the single-cycle segmentation unit on that basis;

The Tukey detection unit is used to perform Tukey detection on the analysis results of the rPPG waveform screening unit, taking the number of samples of each single-cycle waveform as the criterion for deciding whether a preliminarily screened single-cycle waveform is retained.

Further, the dual-channel feature fusion data prediction module comprises a dual-channel feature fusion unit and a training dual-channel feature fusion unit:

The dual-channel feature fusion unit is used, on the basis of the analysis results of the rPPG signal segmentation and waveform selection module, to feed the rPPG signal into the FNN multilayer perceptron branch for feature extraction and to combine the extracted features through the multilayer structure;

The training dual-channel feature fusion unit is used, on the basis of the analysis results of the rPPG signal segmentation and waveform selection module, to feed the rPPG signal into the CNN convolutional neural network module for multi-view feature extraction.

Further, the rPPG signal feature extraction and data prediction module comprises a feature fusion unit and a data prediction unit:

The feature fusion unit is used to fuse the analysis results of the dual-channel feature fusion unit and of the training dual-channel feature fusion unit;

The data prediction unit is used to perform data prediction on the basis of the analysis results of the feature fusion unit.

The invention provides a non-contact measurement method based on rPPG, which has the advantages of being non-invasive, portable and universally applicable: the minute colour changes of the skin surface are captured by a camera, and the rPPG-based measurement is convenient, using the rear camera of an ordinary smartphone to record video of the palm region for data prediction. Users can therefore monitor the detection data in daily life without special equipment or the support of a medical institution, and because no body-surface contact is required during measurement, the invention is better suited to special use environments, reduces the disturbance to the patient, and is more applicable and convenient.

Brief Description of the Drawings

Figure 1 is a schematic flow chart of the intelligent detection method based on rPPG technology of the present invention;

Figure 2 is a schematic module diagram of the intelligent detection system based on rPPG technology of the present invention;

Figure 3 is a schematic diagram of the position information of the key points of the hand region in the intelligent detection method based on rPPG technology of the present invention;

Figure 4 is a schematic diagram of the dual-channel feature fusion neural network of the intelligent detection method based on rPPG technology of the present invention;

Figure 5 is a schematic diagram of the data prediction results of the intelligent detection method based on rPPG technology of the present invention.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention and not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.

Embodiment 1: Referring to Figure 1, in this embodiment:

An intelligent detection method based on rPPG technology, the method comprising the following steps:

S1. Collect information on the user's palm region in real time through a camera, divide an ROI from the collected palm information, extract the G-channel signal of the image within the divided region, and preprocess the mean pixel value of the G channel as the original rPPG signal;

The method in S1 comprises the following steps:

Step 1001: Collect video of the user's palm region in real time through the camera and store each frame of the video, recorded as set A,

A = (A1, A2, A3, ..., An),

where An denotes the n-th frame and n denotes the total number of frames in the collected video;

Step 1002: Arbitrarily extract one frame for ROI division, and mark the 21 key points of the hand region in the n-th frame by image recognition (as shown in Figure 3);

take the center of the junction between the palm and the wrist as the first key point, recorded as key point 0; with key point 0 as the origin and reference point, construct a first plane rectangular coordinate system at unit-length intervals, mark the corresponding key points of the hand region of the n-th frame in this coordinate system, and number them in order;

Step 1003: In the first plane rectangular coordinate system, compute the slope of the line segment formed by each key point and the origin and the distance between the corresponding key point and the origin, and combine the results for each key point into set B,

B = {[(X^{A(n)}_{0,1}, D^{A(n)}_{0,1}), (X^{A(n)}_{0,2}, D^{A(n)}_{0,2}), ..., (X^{A(n)}_{0,20}, D^{A(n)}_{0,20})]},

where X^{A(n)}_{0,20} denotes the slope of the line segment formed by the key point numbered 20 and the key point numbered 0, and D^{A(n)}_{0,20} denotes the distance between the key point numbered 20 and the key point numbered 0 in the first plane rectangular coordinate system,

with X^{A(n)}_{0,20} = (y20 - y0)/(x20 - x0) and D^{A(n)}_{0,20} = [(y20 - y0)^2 + (x20 - x0)^2]^{1/2};

Step 1004: Repeat steps 1002 to 1003 to obtain, for every frame of the collected video, the slope of the line segment formed by each key point and the origin and the distance between the corresponding key point and the origin in the first plane rectangular coordinate system, and compute in turn the difference between the key-point positions of each frame and the standard positions, recorded as set C,

C = (C1, C2, C3, ..., Cn),

where Cn denotes the difference between the key-point positions of the n-th frame and the standard positions,

with Cn = α·Σ_{a=1}^{20} |X^{A(n)}_{0,a} - X_a^{standard}| / 20 + β·Σ_{a=1}^{20} |D^{A(n)}_{0,a} - D_a^{standard}| / 20,

where α and β are proportionality coefficients preset in the database, X^{A(n)}_{0,a} denotes the slope of the line segment formed by the key point numbered a and the key point numbered 0 in the n-th frame, X_a^{standard} denotes the standard slope of that line segment (a preset value in the database), D^{A(n)}_{0,a} denotes the distance between the key point numbered a and the key point numbered 0 in the first plane rectangular coordinate system, and D_a^{standard} denotes the standard distance between those key points (a preset value in the database);

Step 1005: Take the image corresponding to the minimum difference value in set C as the best image of the currently collected video, match this best image against the Hand Landmarker model, and locate the 21 key points of the hand region from the matching result in the best image;

Step 1006: Read the mean G-channel pixel value of the best image and take it as the original rPPG signal; separate the contour of the hand region from the background with the Canny edge detection algorithm to obtain a background ROI, and correct the rPPG signal of the hand ROI from the brightness variation of the background ROI, where the background ROI brightness variation is computed as

I(t) = [Σ_{c=1}^{w} Σ_{d=1}^{h} G_v(c, d, t)] / s,

where I(t) denotes the variation of the illumination intensity over time t, G_v(c, d, t) denotes the green-channel value at time t of the pixel in the background ROI whose abscissa is c pixel intervals and whose ordinate is d pixel intervals, w denotes the pixel width of the background ROI, h denotes its pixel height, and s denotes the total number of pixels in the background ROI.

S2. Perform targeted single-cycle segmentation on the preprocessed rPPG signal, perform an initial screening of the segmentation results against the standard human pulse rate, and apply Tukey detection to the screened results;

The method in S2 comprises the following steps:

Step 2001: Obtain the preprocessed rPPG signal, construct a second plane rectangular coordinate system with point o1 as the origin, time on the abscissa and amplitude on the ordinate, and map the preprocessed rPPG signal into this coordinate system;

Step 2002: In the second plane rectangular coordinate system, mark all zero-crossing points in the falling phase of the preprocessed rPPG signal, combine adjacent marked points in turn, take the two adjacent marked points of each combination as the endpoints of an interval, and extract the minimum of the rPPG signal in that interval as a segmentation point for segmenting the rPPG signal;

Step 2003: Repeat step 2002 to divide the preprocessed rPPG signal into a number of single-cycle waveforms, perform a preliminary screening of the single-cycle waveforms against the standard human pulse rate range, and, on the basis of the preliminary screening, calibrate the single-cycle waveforms in turn according to

P_calibration = {[Q1 - k(Q3 - Q1) - r][Q3 + k(Q3 - Q1) - r]} · ξ · G_abnormal,

where Q1 denotes the lower quartile of the sample counts of all single-cycle waveforms, Q3 denotes the upper quartile, k denotes the anomaly coefficient (a preset value in the database), r denotes the number of samples of the waveform, ξ denotes a proportionality coefficient (a preset value in the database), and G_abnormal denotes the total number of abnormal samples in the single-cycle waveform; if [Q1 - k(Q3 - Q1) - r] < 0 and [Q3 + k(Q3 - Q1) - r] > 0 then [Q1 - k(Q3 - Q1) - r][Q3 + k(Q3 - Q1) - r] = 1, and otherwise 0.

S3. Based on the analysis results of S2, extract feature values from the rPPG signal after Tukey detection, train the network parameters with the Adam optimizer, and build a dual-channel feature fusion data prediction model;

The method in S3 comprises the following steps:

Step 3001: Train the network parameters with the Adam optimizer, using the rectified linear unit as the activation function, the root mean square error as the loss function and the mean absolute error as the evaluation metric; set the learning rate to 0.001, shuffle the order of the preprocessed rPPG signals, and divide them into a training set and a test set;

Step 3002: Feed the input signal of the model into the FNN multilayer perceptron branch for feature extraction;

Step 3003: Feed the input signal of the model simultaneously into the CNN convolutional neural network module for multi-view feature extraction;

Step 3004: Fuse the output features of the branches of step 3002 and step 3003, and predict the data through two fully connected layers.

S4. Collect information on current intensive care unit patients in real time, train the dual-channel feature fusion data prediction model on this information to obtain a trained model, and input the preprocessed signal into the trained dual-channel feature fusion data prediction model for prediction to obtain the feedback result.

The method in S4 collects information on current intensive care unit patients in real time, trains the dual-channel feature fusion data prediction model on this information to obtain a trained model, and inputs the preprocessed signal into the trained dual-channel feature fusion data prediction model for prediction to obtain the feedback result.

In this embodiment, an intelligent detection system based on rPPG technology is disclosed (as shown in Figure 2); the system is used to implement the specific content of the method.

Embodiment 2: When constructing and training the dual-channel feature fusion neural network, the Adam optimizer is used to train the network parameters, with the rectified linear unit as the activation function, the root mean square error as the loss function and the mean absolute error as the evaluation metric; the learning rate is set to 0.001, and the rPPG signals obtained after data preprocessing are shuffled and divided into training and test sets for the subsequent training and prediction of the model (as shown in Figure 4). The dual-channel feature fusion neural network is constructed as follows:

Step S4.1: Set up an input layer for receiving the data and feed the input signal of the model into the FNN multilayer perceptron branch for feature extraction. The FNN consists entirely of fully connected layers and combines the input features through its multilayer structure in order to mine the strong correlation between the input features and blood pressure; nine hidden layers mine the deep features of the input signal, the final output layer contains 160 nodes, and the vector containing the signal features is fed into the feature fusion module;

Step S4.2: The input signal of the model is simultaneously fed into the CNN convolutional neural network module for multi-view feature extraction. This module uses the AlexNet network as its backbone; the network structure contains 9 layers in total, the first 8 of which are convolutional and pooling layers. The convolutional layers extract features from the signal, the pooling layers use max pooling to reduce the size of the feature maps and the computational complexity, and the last layer is a flatten layer, which flattens the feature vector into one-dimensional data that is fed into the feature fusion module;

Step S4.3: The output features of the two branches are fused, and blood pressure is finally predicted through two fully connected layers. The training data are fed into the dual-channel feature fusion blood pressure prediction model for training to obtain the trained model; data extracted from the MIMIC intensive care database, which records information on intensive care unit patients including blood pressure and pulse wave signals, are fed into the model for training, and the predicted diastolic and systolic pressures fit the true values well (as shown in Figure 5).
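
A hypothetical wiring of step S4.3, reusing build_dual_channel_model from the sketch after step 3004; the arrays below are placeholders standing in for single-cycle waveforms and reference systolic/diastolic pressures prepared from the MIMIC records, and the epoch and batch-size values are illustrative only.

```python
import numpy as np

# Placeholders for MIMIC-derived training data (shapes and values are illustrative only):
# X holds fixed-length preprocessed single-cycle waveforms, y holds [systolic, diastolic] in mmHg.
X = np.random.rand(1024, 300, 1).astype("float32")
y = np.random.uniform(60.0, 160.0, size=(1024, 2)).astype("float32")

model = build_dual_channel_model(cycle_len=300)       # from the sketch after step 3004
model.fit(X, y, validation_split=0.2, epochs=10, batch_size=64)
sbp_dbp_pred = model.predict(X[:5])                   # predicted systolic/diastolic pairs
```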

It is obvious to those skilled in the art that the present invention is not limited to the details of the exemplary embodiments described above and that it can be implemented in other specific forms without departing from the spirit or essential characteristics of the present invention. The embodiments should therefore be regarded in every respect as illustrative and non-restrictive; the scope of the present invention is defined by the appended claims rather than by the above description, and all changes falling within the meaning and scope of equivalents of the claims are intended to be embraced by the present invention. No reference sign in the claims shall be construed as limiting the claim concerned.

It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device.

Finally, it should be noted that the above are only preferred embodiments of the present invention and are not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or replace some of their technical features with equivalents. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (10)

1.一种基于rPPG技术的智能检测方法,其特征在于,所述方法包括以下步骤:1. An intelligent detection method based on rPPG technology, characterized in that the method includes the following steps: S1、通过摄像头实时采集用户手掌部位信息,结合采集用户手掌部位信息进行ROI区域划分并提取划分区域中图像G通道信号,将G通道信号中平均像素值作为原始rPPG信号进行预处理;S1. Collect the user's palm part information in real time through the camera, combine the collected user palm part information to divide the ROI area and extract the image G channel signal in the divided area, and preprocess the average pixel value in the G channel signal as the original rPPG signal; S2、将预处理后的rPPG信号进行针对性单周期分割,结合人体标准脉率对分割结果进行初次筛选,并结合筛选结果进行图基检测处理;S2. Perform targeted single-cycle segmentation on the preprocessed rPPG signal, conduct initial screening of the segmentation results based on the human standard pulse rate, and perform Tukey detection processing based on the screening results; S3、基于S2的分析结果,提取图基检测处理后rPPG信号中的特征值,并通过Adma优化器进行网络参数训练,构建双通道特征融合数据预测模型;S3. Based on the analysis results of S2, extract the feature values in the rPPG signal after Tu-based detection processing, and conduct network parameter training through the Adma optimizer to build a dual-channel feature fusion data prediction model; S4、实时采集当前重症病房患者信息,并将所述信息通过双通道特征融合数据预测模型进行数据训练,将得到训练后的双通道特征融合数据预测模型,将预处理后的信号输入到所述训练后的双通道特征融合数据预测模型进行预测,进而得到反馈结果。S4. Collect the current intensive care unit patient information in real time, and perform data training on the information through a dual-channel feature fusion data prediction model. The trained dual-channel feature fusion data prediction model will be obtained, and the preprocessed signal will be input to the The trained dual-channel feature fusion data prediction model is used for prediction, and then the feedback results are obtained. 2.根据权利要求1所述的一种基于rPPG技术的智能检测方法,其特征在于,所述S1中的方法包括以下步骤:2. An intelligent detection method based on rPPG technology according to claim 1, characterized in that the method in S1 includes the following steps: 步骤1001、通过摄像头实时采集用户手掌部位活动视频,并将视频中每一帧图像进行存储,记为集合A,Step 1001: Collect the video of the user's palm movement in real time through the camera, and store each frame of the video as set A. A=(A1,A2,A3,...,An),A=(A 1 , A 2 , A 3 ,..., A n ), 其中An表示第n帧图像,n表示采集的视频总帧数;Where A n represents the nth frame image, and n represents the total number of video frames collected; 步骤1002、任意提取其中一帧图像进行划分ROI区域,其中通过图像识别技术将第n帧图像中手部区域中21个关键点进行标注,Step 1002: arbitrarily extract one of the frame images to divide the ROI area, and use image recognition technology to mark 21 key points in the hand area in the n-th frame image. 以手掌与手腕交界处中心作为第一个关键点,记为关键点0,以关键点0作为原点,以原点为参考点,以单位长度为间隔构建第一平面直角坐标系,将第n帧图像中手部区域中对应关键点在第一平面直角坐标系中进行标注,并按照顺序进行数字标记;Take the center of the junction of the palm and the wrist as the first key point, recorded as key point 0, take key point 0 as the origin, use the origin as the reference point, construct the first plane rectangular coordinate system with unit length as the interval, and divide the nth frame The corresponding key points in the hand area in the image are marked in the first plane rectangular coordinate system, and digitally marked in order; 步骤1003、在第一平面直角坐标系中分别计算各个关键点与原点构成的线段斜率值以及对应关键点与原点在第一平面直角坐标系中的距离值,并将对应关键点与原点的分析结果作为组合,生成集合B,Step 1003. Calculate the slope value of the line segment formed by each key point and the origin in the first plane Cartesian coordinate system and the distance value between the corresponding key point and the origin in the first plane Cartesian coordinate system, and analyze the corresponding key points and the origin. 
The result, as a combination, generates set B, B={[(XA(n) 0,1,DA(n) 0,1),(XA(n) 0,2,DA(n) 0,2),...,(XA(n) 0,20,DA(n) 0,20)]},B={[(X A(n) 0,1 , D A(n) 0,1 ), (X A(n) 0,2 , D A(n) 0,2 ),..., (X A(n) 0,20 ,D A(n) 0,20 )]}, 其中XA(n) 0,20表示数字标记为20的关键点与数字标记为0的关键点构成的线段斜率值,DA(n) 0,20表示数字标记为20的关键点与数字标记为0的关键点在第一平面直角坐标系中的距离值, Among them , _ The distance value of the key point 0 in the first plane Cartesian coordinate system, 其中XA(n) 0,20=(y20-y0)/(x20-x0),DA(n) 0,20=[(y20-y0)2+(x20-x0)2]1/2 Among them , _ _ _ _ _ _ _ _ _ 0 ) 2 ] 1/2 , 步骤1004、重复步骤1002至步骤1003得到采集的视频中每一帧图像对应关键点与原点构成的线段斜率值以及相应关键点与原点在第一平面直角坐标系中的距离值,依次计算每一帧图像中关键点位置相对于标准位置之间的差异情况,记为集合C,Step 1004: Repeat steps 1002 to 1003 to obtain the slope value of the line segment formed by the corresponding key point and the origin of each frame of the video in the collected video, as well as the distance value between the corresponding key point and the origin in the first plane rectangular coordinate system, and calculate each step in turn. The difference between the key point positions in the frame image relative to the standard position is recorded as set C, C=(C1,C2,C3,...,Cn),C=(C 1 , C 2 , C 3 ,..., C n ), 其中Cn表示第n帧图像中关键点位置相对于标准位置之间的差异情况,where Cn represents the difference between the key point position in the nth frame image relative to the standard position, 其中Cn=α·∑20 a=1|XA(n) 0,a-Xa standard|/20+β·∑20 a=1|DA(n) 0,a-Da standard|/20,Where C n =α·∑ 20 a=1 |X A(n) 0,a -X a standard |/20+β·∑ 20 a=1 |D A(n) 0,a -D a standard |/ 20, α和β均表示比例系数,所述比例系数为数据库预设值,XA(n) 0,a表示第n帧图像中数字标记为a的关键点与数字标记为0的关键点构成的线段斜率值,Xa standard表示数据标记为a的关键点与数字标记为0的关键点构成的线段斜率标准值,所述斜率标准值为数据库预设值,DA (n) 0,a表示数据标记为a的关键点与数字标记为0的关键点在第一平面直角坐标系中的距离值,Da standard表示数据标记为a的关键点与数字标记为0的关键点在第一平面直角坐标系中的距离标准值,所述距离标准值为数据库预设值; Both α and β represent proportional coefficients, which are preset values in the database. Slope value , _ _ The distance value between the key point marked a and the key point marked 0 in the first plane rectangular coordinate system. D a standard means that the key point marked a and the key point marked 0 are at right angles to the first plane. The distance standard value in the coordinate system, the distance standard value is the database preset value; 步骤1005、将集合C中差异情况最小值对应的图像作为当前采集视频中最佳图像,并将所述最佳图像与Hand Landmarker模型中图像进行匹配,并将匹配结果中对应模型中手部区域的21个关键点在最佳图像中进行定位;Step 1005: Use the image corresponding to the minimum value of the difference in set C as the best image in the currently collected video, match the best image with the image in the Hand Landmarker model, and use the matching result to match the hand area in the model 21 key points are positioned in the best image; 步骤1006、读取最佳图像中G通道像素均值,并将读取的G通道像素均值作为原始rPPG信号,通过canny边缘监测算法,将手部区域轮廓与背景分离得到背景ROI,通过背景ROI亮度的变化,校正手部ROI的rPPG信号,其中背景ROI亮度变化情况计算公式为I(t)=[∑w c=1h d=1Gv(c,d,t)]/s,Step 1006: Read the mean G channel pixels in the best image, and use the read mean G channel pixels as the original rPPG signal. Use the canny edge monitoring algorithm to separate the outline of the hand area from the background to obtain the background ROI. 
4. The intelligent detection method based on rPPG technology according to claim 3, characterized in that the method in S3 comprises the following steps:

Step 3001. Training the network parameters with the Adam optimizer, taking the rectified linear unit as the activation function, the root-mean-square error as the loss function and the mean absolute error as the evaluation metric, setting the learning rate to δ, shuffling the order of the rPPG signals obtained after data preprocessing, and dividing them into a training set and a test set;

Step 3002. Feeding the signal input to the model into the FNN multi-layer-perceptron branch for feature extraction;

Step 3003. Simultaneously feeding the signal input to the model into the CNN convolutional-neural-network module for multi-view feature extraction;

Step 3004. Fusing the output features of the branches of step 3002 and step 3003, and predicting the data through two fully connected layers.

5. The intelligent detection method based on rPPG technology according to claim 4, characterized in that the method in S4 collects current intensive care unit patient information in real time, trains the dual-channel feature fusion data prediction model on this information to obtain a trained dual-channel feature fusion data prediction model, and inputs the preprocessed signal into the trained dual-channel feature fusion data prediction model for prediction, thereby obtaining the feedback result.
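As a rough illustration of the dual-channel model of claim 4 (an FNN/multi-layer-perceptron branch and a CNN branch whose features are fused and passed through two fully connected layers, trained with Adam, ReLU activations, an RMSE loss and MAE as the evaluation metric), a PyTorch sketch might look as follows. All layer widths, kernel sizes, the fixed cycle length and the learning rate are placeholder assumptions rather than values specified by the claims.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualChannelFusionNet(nn.Module):
    """FNN branch + 1-D CNN branch, fused and passed through two fully connected layers."""

    def __init__(self, n_samples=128):
        super().__init__()
        # FNN (multi-layer perceptron) branch
        self.fnn = nn.Sequential(
            nn.Linear(n_samples, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # CNN branch for multi-view (local) feature extraction
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(),        # -> 16 * 4 = 64 features
        )
        # Two fully connected layers after feature fusion
        self.head = nn.Sequential(nn.Linear(32 + 64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                                  # x: (batch, n_samples)
        f1 = self.fnn(x)
        f2 = self.cnn(x.unsqueeze(1))                      # add a channel dimension for Conv1d
        return self.head(torch.cat([f1, f2], dim=1)).squeeze(1)

def train_step(model, optimizer, x, y):
    """One Adam step with an RMSE loss; returns (rmse, mae) for monitoring."""
    optimizer.zero_grad()
    pred = model(x)
    rmse = torch.sqrt(F.mse_loss(pred, y))
    rmse.backward()
    optimizer.step()
    mae = F.l1_loss(pred, y).detach()
    return rmse.item(), mae.item()

model = DualChannelFusionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # learning rate δ is a placeholder
```

Shuffling the preprocessed cycles and splitting them into training and test sets, as in step 3001, would be done outside the model, for example with a random permutation of sample indices.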
6. An intelligent detection system based on rPPG technology, characterized in that the system comprises the following modules:

Information/data preprocessing module: used to collect the user's palm information in real time through a camera, divide ROI regions based on the collected palm information, extract the image G-channel signal within the divided regions, and preprocess the mean pixel value of the G-channel signal as the original rPPG signal;

rPPG signal segmentation and waveform selection module: used to perform targeted single-cycle segmentation of the preprocessed rPPG signal, divide it into single-cycle signals according to the segmentation result, screen the resulting rPPG waveforms, and apply Tukey detection processing to the screening results;

Dual-channel feature fusion data prediction module: used to construct the dual-channel feature fusion data prediction model based on the analysis results of the rPPG signal segmentation and waveform selection module;

rPPG signal feature extraction and data prediction module: used to train the dual-channel feature fusion data prediction model on the training data to obtain a trained dual-channel feature fusion data prediction model, and to input the preprocessed signal into the trained dual-channel feature fusion data prediction model for prediction, thereby obtaining the feedback result.
7. The intelligent detection system based on rPPG technology according to claim 6, characterized in that the information/data preprocessing module comprises an image acquisition unit, an ROI division unit, a channel data extraction unit and an rPPG signal preprocessing unit:

the image acquisition unit is used to collect a video of the user's palm activity in real time through the camera and to extract every frame of the collected video;

the ROI division unit is used, based on the analysis results of the image acquisition unit, to determine the hand landmarks in each collected frame and to locate the 21 key points of the hand region according to the determination results;

the channel data extraction unit is used, based on the analysis results of the ROI division unit, to obtain the RGB three-color channels, retain the G channel, and generate the original rPPG signal by calculating the mean G-channel pixel value;

the rPPG signal preprocessing unit is used to separate the hand-region contour from the background by the Canny edge detection algorithm to obtain the background ROI, and to correct the rPPG signal of the hand ROI through the change of background-ROI brightness.

8. The intelligent detection system based on rPPG technology according to claim 7, characterized in that the rPPG signal segmentation and waveform selection module comprises a single-cycle segmentation unit, an rPPG waveform screening unit and a Tukey detection processing unit:

the single-cycle segmentation unit is used, based on the analysis results of the information/data preprocessing module, to extract all zero-crossing points in the falling phase of the original rPPG signal and to take the minimum within each interval between two adjacent zero-crossing points as the starting point of a cycle, thereby segmenting the original rPPG signal;

the rPPG waveform screening unit is used to obtain sampling-point data according to the normal human pulse-rate range and to perform a preliminary screening of the analysis results of the single-cycle segmentation unit according to the obtained results;

the Tukey detection unit is used to perform Tukey detection on the analysis results of the rPPG waveform screening unit, taking the number of sampling points of each single-cycle waveform as the basis for deciding whether a preliminarily screened single-cycle waveform is retained.
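The preprocessing unit of claim 7 (Canny-based separation of the hand contour from the background and correction of the hand-ROI rPPG signal by the background-ROI brightness I(t)) could be prototyped with OpenCV roughly as below. The Canny thresholds, the largest-contour heuristic and the additive correction rule are assumptions; the claim only specifies that the background brightness change is used for correction, not how.

```python
import cv2
import numpy as np

def hand_and_background_rois(frame_bgr):
    """Separate the hand contour from the background with Canny edges (rough sketch).

    Returns boolean masks (hand_mask, background_mask) for one BGR frame.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))          # close small gaps in the outline
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand_mask = np.zeros(gray.shape, dtype=np.uint8)
    if contours:
        largest = max(contours, key=cv2.contourArea)              # assume the hand is the largest contour
        cv2.drawContours(hand_mask, [largest], -1, 255, thickness=cv2.FILLED)
    return hand_mask > 0, hand_mask == 0

def corrected_rppg(frames_bgr):
    """Raw rPPG = mean G value over the hand ROI, corrected by the background-ROI brightness I(t)."""
    raw, illum = [], []
    for frame in frames_bgr:
        g = frame[:, :, 1].astype(float)                          # G channel (BGR ordering)
        hand, background = hand_and_background_rois(frame)
        raw.append(g[hand].mean() if hand.any() else g.mean())    # fall back to the whole frame if no contour
        illum.append(g[background].mean())                        # I(t): mean green value of the background ROI
    raw, illum = np.asarray(raw), np.asarray(illum)
    return raw - (illum - illum.mean())                           # one plausible additive illumination correction
```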
9. The intelligent detection system based on rPPG technology according to claim 8, characterized in that the dual-channel feature fusion data prediction module comprises a dual-channel feature fusion unit and a training dual-channel feature fusion unit:

the dual-channel feature fusion unit is used, based on the analysis results of the rPPG signal segmentation and waveform selection module, to feed the rPPG signal into the FNN multi-layer-perceptron branch for feature extraction and to combine the extracted features through the multi-layer structure;

the training dual-channel feature fusion unit is used, based on the analysis results of the rPPG signal segmentation and waveform selection module, to feed the rPPG signal into the CNN convolutional-neural-network module for multi-view feature extraction.

10. The intelligent detection system based on rPPG technology according to claim 9, characterized in that the rPPG signal feature extraction and data prediction module comprises a feature fusion unit and a data prediction unit:

the feature fusion unit is used to fuse the analysis results of the dual-channel feature fusion unit and the training dual-channel feature fusion unit;

the data prediction unit is used to perform data prediction based on the analysis results of the feature fusion unit.
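The feature fusion and data prediction units of claims 9 and 10 correspond, in the earlier model sketch, to running screened single-cycle waveforms through the trained dual-channel network. A hypothetical usage example (the fixed resampling length and the averaging of per-cycle predictions are assumptions) might be:

```python
import numpy as np
import torch

def predict_feedback(model, cycles, n_samples=128):
    """Resample each screened single-cycle waveform to a fixed length and average the predictions."""
    resampled = [np.interp(np.linspace(0, 1, n_samples),
                           np.linspace(0, 1, len(c)), c) for c in cycles]
    batch = torch.tensor(np.asarray(resampled), dtype=torch.float32)
    model.eval()
    with torch.no_grad():
        return model(batch).mean().item()   # feedback result for the monitored patient
```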

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311161515.5A 2023-09-11 2023-09-11 An intelligent detection system and method based on rPPG technology

Publications (2)

Publication Number Publication Date
CN116889388A (en) 2023-10-17
CN116889388B (en) 2023-11-17

Family

ID=88315256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311161515.5A An intelligent detection system and method based on rPPG technology 2023-09-11 2023-09-11 Active CN116889388B (en)

Country Status (1)

Country Link
CN (1) CN116889388B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5746698A (en) * 1995-09-28 1998-05-05 Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno Method and device for determining brachial arterial pressure wave on the basis of nonivasively measured finger blood pressure wave
US9307928B1 (en) * 2010-03-30 2016-04-12 Masimo Corporation Plethysmographic respiration processor
US20140148664A1 (en) * 2012-02-13 2014-05-29 Marina Borisovna Girina Device and method for assessing regional blood circulation
CN103932686A (en) * 2014-04-22 2014-07-23 北京印刷学院 Method and device for extracting pulse condition signal
US20180042486A1 (en) * 2015-03-30 2018-02-15 Tohoku University Biological information measuring apparatus and biological information measuring method
US20200064444A1 (en) * 2015-07-17 2020-02-27 Origin Wireless, Inc. Method, apparatus, and system for human identification based on human radio biometric information
CN109036552A (en) * 2018-07-19 2018-12-18 上海中医药大学 Tcm diagnosis terminal and its storage medium
CN113556972A (en) * 2019-02-13 2021-10-26 Viavi科技有限公司 Baseline correction and heartbeat curve extraction
WO2021184620A1 (en) * 2020-03-19 2021-09-23 南京昊眼晶睛智能科技有限公司 Camera-based non-contact heart rate and body temperature measurement method
CN111839489A (en) * 2020-05-26 2020-10-30 合肥工业大学 Non-contact physiological and psychological health detection system
CN111714144A (en) * 2020-07-24 2020-09-29 长春理工大学 Mental stress analysis method based on video non-contact measurement
CN114366090A (en) * 2022-01-13 2022-04-19 湖南龙罡智能科技有限公司 Blood component detection method integrating multiple measurement mechanisms
WO2023141404A2 (en) * 2022-01-20 2023-07-27 Jeffrey Thomas Loh Photoplethysmography-based blood pressure monitoring device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU Weimin, SUN Yunjie, LIN Zhe, et al.: "A New LCL-Filter With In-Series Parallel Resonant Circuit for Single-Phase Grid-Tied Inverter", IEEE Transactions on Industrial Electronics, vol. 61, no. 9, XP011543653, DOI: 10.1109/TIE.2013.2293703 *
LI Binglin et al.: "Heart rate measurement based on image photoplethysmography", Journal of Changchun University of Science and Technology (Natural Science Edition), vol. 45, no. 3

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118348331A (en) * 2024-03-06 2024-07-16 深圳东昇射频技术有限公司 Test marking method and device

Also Published As

Publication number Publication date
CN116889388B (en) 2023-11-17

Similar Documents

Publication Publication Date Title
Fan et al. Robust blood pressure estimation using an RGB camera
Tasli et al. Remote PPG based vital sign measurement using adaptive facial regions
EP3229676B1 (en) Method and apparatus for physiological monitoring
CN111728602A (en) Non-contact blood pressure measurement device based on PPG
CN111243739A (en) Anti-interference physiological parameter telemetering method and system
Feng et al. Motion artifacts suppression for remote imaging photoplethysmography
CN110276271A (en) Non-contact Heart Rate Estimation Method Fusion IPPG and Depth Information Anti-Noise Interference
CN107506716A (en) A kind of contactless real-time method for measuring heart rate based on video image
CN106793962A (en) Method and apparatus for continuously estimating human blood-pressure using video image
CN112890792A (en) Cloud computing cardiovascular health monitoring system and method based on network camera
CN111297347B (en) Method and apparatus for generating photoplethysmography signals
CN114246570B (en) Near-infrared heart rate detection method by fusing peak signal-to-noise ratio and Peerson correlation coefficient
CN108937905A (en) A kind of contactless heart rate detection method based on signal fitting
CN112294282A (en) Self-calibration method of emotion detection device based on RPPG
KR20220123376A (en) Methods and systems for determining cardiovascular parameters
CN116889388B (en) An intelligent detection system and method based on rPPG technology
CN116109818A (en) Traditional Chinese medicine pulse condition distinguishing system, method and device based on facial video
CN112200099A (en) A video-based dynamic heart rate detection method
CN116049674A (en) Method and system for generating invasive blood pressure waveform estimation based on countermeasure network
Wiede et al. Signal fusion based on intensity and motion variations for remote heart rate determination
Ben Salah et al. Contactless heart rate estimation from facial video using skin detection and multi-resolution analysis
CN112022131B (en) Long-term continuous non-contact heart rate measurement method and system
Malini Non-Contact Heart Rate Monitoring System using Deep Learning Techniques
CN115245318A (en) Automatic identification method of effective IPPG signal based on deep learning
CN114469036A (en) Remote heart rate monitoring method and system based on video images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant