
CN118644819B - Video monitoring management method and system for face care - Google Patents


Info

Publication number
CN118644819B
Authority
CN
China
Prior art keywords
image
pyramid
face
matching
patient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411109946.1A
Other languages
Chinese (zh)
Other versions
CN118644819A (en)
Inventor
吴倩
张红
吕秀霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fishing Technology Dalian Co ltd
Original Assignee
Fishing Technology Dalian Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fishing Technology Dalian Co ltd filed Critical Fishing Technology Dalian Co ltd
Priority to CN202411109946.1A
Publication of CN118644819A
Application granted
Publication of CN118644819B
Legal status: Active
Anticipated expiration


Classifications

    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 10/454: Biologically inspired filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/54: Extraction of image or video features relating to texture
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 40/168: Human faces; feature extraction; face representation
    • G06V 40/171: Human faces; local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/176: Facial expression recognition; dynamic expression
    • G16H 50/30: ICT specially adapted for medical diagnosis, simulation or data mining; for calculating health indices; for individual health risk assessment
    • G16H 80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Public Health (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Primary Health Care (AREA)
  • Software Systems (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present application relates to the field of face matching, and specifically to a video monitoring management method and system for face care. The method comprises: acquiring a patient face image and template images; calculating a deviation factor between the patient face image and each template image and screening candidate template images; obtaining the pyramid images of each layer for the patient face image and the candidate template images; generating pyramid rotated template images from the rotation angle range and adaptive rotation step of each pyramid template image; determining the matching image of each pyramid search image from the correlation between the texture feature matrices of corresponding matching blocks in the pyramid search image and the same-layer pyramid rotated template images; calculating the matching degree between the pyramid search images and the candidate images; and obtaining the final matching image of the patient face image from the matching degree. Accurate matching and recognition of the patient's facial information is thereby achieved, improving the accuracy of patient-condition warnings.

Description

A video monitoring management method and system for face care

Technical Field

The present application relates to the field of face matching, and in particular to a video surveillance management method and system for face care.

Background Art

At present, intelligent healthcare has entered daily life, and human-computer interaction and intelligent robots have become prominent technologies for medical assistance. As the population ages, the medical care industry has developed rapidly. On the path toward intelligent medical care, it is increasingly important for hospitals to provide patients with a safe, caring, thoughtful and humane ward environment while maintaining a high standard of clinical care.

In wards for critically ill patients, medical staff are usually required to watch over patients continuously to prevent situations in which a patient suffers an abnormal reaction or a serious complication while no one is present. However, the number of medical staff is limited, and relying on real-time human supervision is prone to oversight and wastes manpower and resources; in severe cases, a patient may miss the best window for treatment. Some existing techniques judge a patient's condition by detecting dangerous limb movements, but the basis of that judgment is not accurate enough and is unsuitable for critically ill patients or patients who cannot move their limbs, so its scope of application is limited.

In view of the above, the present application proposes a video surveillance management method and system for facial care. A video camera captures facial video of the patient, multiple consecutive frames of facial images are obtained, the patient's face is then matched, and corresponding early-warning prompts are issued to medical staff. The method has a wide scope of application, enabling medical staff to take targeted care and treatment measures.

Summary of the Invention

In order to solve the above technical problems, in a first aspect, the present application provides a video monitoring management method for face care, the method comprising:

acquiring a patient face image and the template images in a face care database, and obtaining a deviation factor from the difference between the centroid coordinates of the patient face image and the centroid coordinates of each template image; taking the template images whose deviation factor is below a preset threshold as candidate template images; and obtaining the pyramid images of each layer for the patient face image and the candidate template images respectively;

denoting each layer of the pyramid images of the patient face image in turn as a pyramid search image, and each layer of the pyramid images of a candidate template image in turn as a pyramid template image; and obtaining the rotation angle range of the pyramid template image from the difference in face direction between the pyramid search image and the pyramid template image;

filtering the patient face image with multi-scale Gabor filters to obtain multiple patient face feature maps; obtaining the complexity of the patient face image from the grayscale distribution of each patient face feature map, and calculating the adaptive rotation step of the pyramid template image;

obtaining the pyramid rotated template images of the pyramid template image from the rotation angle range and the adaptive rotation step combined with an image rotation transformation algorithm;

denoting each matching block in the pyramid search image in turn as a pyramid search matching block, and each matching block in a pyramid rotated template image in turn as a pyramid rotated template matching block;

obtaining the texture feature matrix of the pyramid search matching block from the eigenvalues and eigenvectors of the Hessian matrices of all pixels in the pyramid search matching block, and correspondingly obtaining the texture feature matrix of each pyramid rotated template matching block; and obtaining the matching image of the pyramid search image from the correlation between the texture feature matrices of corresponding matching blocks in the pyramid search image and the same-layer pyramid rotated template images;

analysing the mean of the matching similarities between the pyramid search images of all layers and their corresponding matching images to obtain the matching degree between the patient face image and the candidate template image, obtaining the final matching image of the patient face image, and issuing an early-warning prompt according to the matching result.

In some embodiments, the deviation factor is calculated as $P_k = 1 - e^{-\left(\left|\Delta x_k\right| + \left|\Delta y_k\right|\right)}$, where $P_k$ is the deviation factor between the patient face image and template image $k$, $e^{(\cdot)}$ is the exponential function with the natural constant $e$ as its base, $\left|\cdot\right|$ denotes the absolute value, and $\Delta x_k$ and $\Delta y_k$ are the differences between the x-coordinates and the y-coordinates of the centroids of the patient face image and template image $k$, respectively.

In some embodiments, the rotation angle range of the pyramid template image is obtained as follows:

obtaining the angle between the line connecting the centre points of the two eye matching blocks in the pyramid search image and the x-axis of the image coordinate system, denoted the face direction of the pyramid search image, and correspondingly obtaining the face direction of the pyramid template image;

calculating the rotation angle range of the pyramid template image from the difference between the face direction of the pyramid search image and that of the pyramid template image, wherein the rotation angle range of the pyramid template image is positively correlated with the difference between the face directions of the pyramid search image and the pyramid template image.

In some embodiments, the complexity of the patient face image is obtained as follows:

obtaining the gray-level co-occurrence matrix corresponding to each patient face feature map, and calculating the complexity of the patient face feature map from the feature quantities of the gray-level co-occurrence matrix, wherein the complexity of a patient face feature map is positively correlated with the entropy of the corresponding gray-level co-occurrence matrix and negatively correlated with its energy and inverse variance;

taking the mean complexity of all patient face feature maps as the complexity of the patient face image.

In some embodiments, the adaptive rotation step of the pyramid template image is calculated as $d = \left\lfloor d_0 \cdot \dfrac{\mu}{C} \right\rfloor$, where $d$ is the adaptive rotation step of the pyramid template image, $d_0$ is the initial rotation step, $\lfloor\cdot\rfloor$ is the floor operation, $\mu$ is the rotation-step regulation factor, and $C$ is the complexity of the patient face image.

In some embodiments, obtaining the texture feature matrix of the pyramid search matching block from the eigenvalues and eigenvectors of the Hessian matrices of all pixels in the pyramid search matching block comprises the following specific steps:

calculating the Hessian matrix of each pixel in the pyramid search matching block to obtain the eigenvalues and eigenvectors of each pixel's Hessian matrix; taking the eigenvalue with the larger absolute value as the texture variation indicator of that pixel, and the direction of the eigenvector corresponding to that eigenvalue as the texture orientation indicator of that pixel; and forming the texture feature pair of each pixel from its texture variation indicator and texture orientation indicator;

constructing the texture feature matrix of the pyramid search matching block from the texture features of all pixels in the pyramid search matching block, wherein the texture feature pair of each pixel in the pyramid search matching block forms a row of the texture feature matrix.

In some embodiments, the matching image of the pyramid search image is obtained as follows:

denoting each pyramid rotated template image in the same layer as the pyramid search image in turn as a same-layer pyramid rotated template image; calculating the correlation coefficient between the texture feature matrix of each matching block in the pyramid search image and the texture feature matrix of the corresponding matching block in the same-layer pyramid rotated template image; and taking the sum of all the correlation coefficients as the matching similarity between the pyramid search image and the same-layer pyramid rotated template image;

determining the matching image of the pyramid search image based on the matching similarity.

In some embodiments, the matching image of the pyramid search image is the same-layer pyramid rotated template image with the highest matching similarity.

In some embodiments, the final matching image of the patient face image is obtained by taking the candidate template image with the highest matching degree as the final matching image of the patient face image.

In a second aspect, an embodiment of the present application further provides a video surveillance management system for face care, the system comprising a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor, when executing the computer program, implements the steps of any one of the methods described above.

The present application has at least the following beneficial effects:

By matching the patient's facial information against the facial information in a face care database, the present application can monitor the patient's condition. It has a wide scope of application, is well suited to critically ill patients and patients who cannot move their limbs, and reduces the time and effort required of medical staff.

To reduce system computation cost and increase matching speed, the present application extracts individual matching blocks of the face, which avoids the heavy computation and slow matching that occur when the entire face region participates in matching, as well as the low matching accuracy caused by irrelevant regions interfering with the matching process. Pyramid images are constructed and multi-resolution matching is used to improve the matching precision of the patient face image. Considering that the face angle in the captured patient face image is arbitrary, and that matching the patient's facial information directly against the matching blocks of the captured image would reduce matching precision and increase matching difficulty, the present application rotates the candidate template images and adaptively sets their rotation angle range and rotation step according to the complexity of the patient face image, improving matching accuracy while reducing the system's computational load.

Brief Description of the Drawings

To more clearly illustrate the technical solutions and advantages of the embodiments of the present application or of the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from them without inventive effort.

FIG. 1 is a flow chart of a video monitoring management method for face care provided by the present application;

FIG. 2 is a schematic diagram of the specific process of video surveillance management for face care.

Detailed Description

To further explain the technical means adopted by the present application to achieve the intended purpose of the invention and their effects, the video surveillance management method and system for face care proposed by the present application, together with its specific implementation, structure, features and effects, is described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different occurrences of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Moreover, specific features, structures or characteristics of one or more embodiments may be combined in any suitable form.

Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.

A specific solution of the video surveillance management method and system for face care provided by the present application is described below with reference to the accompanying drawings.

An embodiment of the present application provides a video surveillance management method and system for face care.

Specifically, the video surveillance management method and system for face care of this embodiment provide the following video surveillance management method for face care. Referring to FIG. 1, the method comprises the following steps:

Step S001: acquire a patient face video, extract the patient face image from the patient face video, and obtain a face care database.

First, in this embodiment a video monitoring device captures video of the patient's face and obtains the patient face image so that the patient's facial information can be monitored in real time. The video monitoring device in this embodiment is a surveillance camera used for real-time capture of the patient's face; the camera model, shooting range and viewing angle may be chosen by the implementer. To enable analysis of the patient's facial information, this embodiment automatically rehearses the curvature changes of facial features such as the cheeks, brows and eyes, corners of the mouth and forehead, and then renders them with OpenGL to obtain basic face images rich in various kinds of basic information. The facial information includes, but is not limited to, frowning, eyes closed in pain, slack and drooping cheek muscles, and a furrowed brow with tilted eyebrows. All basic face images are stored in a face care database as reference data for patient face analysis, and every face image in the face care database is used as a template image; the number of template images in the database is set by the implementer. It should be noted that the construction of the face care database can be realized with existing techniques, and the specific process is not described in detail in this embodiment.

At this point, the patient face video is obtained and the patient face image is extracted as the image for real-time monitoring of the patient's facial information, and the face care database is obtained for analysing and matching the patient's facial information.

Step S002: establish a patient face image matching model and obtain the final matching image that matches the patient's facial information, thereby analysing the patient's facial information.

For the patient face image, this embodiment processes and analyses the image and screens the matching blocks of the patient face image. To improve matching accuracy, each template image is rotated during matching so that the face angle in the template image is consistent with that in the patient face image, which guarantees matching precision while increasing the matching speed of the matching blocks. A face image processing and matching model is established as follows.

To reduce the amount of computation during matching, this embodiment performs a preliminary screening of the template images in the face care database and adaptively obtains the candidate template images used for matching the patient's facial information, which lowers the computational cost of matching and identifying the patient's facial information and increases the matching speed.

First, the centroid coordinates of the captured patient face image are calculated and the centroid coordinates of each template image are obtained. The deviation factor between the patient face image and each template image is then calculated from the difference between their centroid coordinates: $P_k = 1 - e^{-\left(\left|\Delta x_k\right| + \left|\Delta y_k\right|\right)}$, where $P_k$ is the deviation factor between the patient face image and template image $k$, $e^{(\cdot)}$ is the exponential function with the natural constant $e$ as its base, $\left|\cdot\right|$ denotes the absolute value, and $\Delta x_k$ and $\Delta y_k$ are the differences between the x-coordinates and the y-coordinates of the centroids of the patient face image and template image $k$, respectively.

It can be understood that the larger the deviation factor between the patient face image and a template image, the greater the difference between the facial information in the patient face image and that in the template image.

Next, a deviation factor threshold is set. Template images whose deviation factor with respect to the patient face image exceeds the threshold are discarded, and template images whose deviation factor is below the threshold are retained as candidate template images for matching the patient face image. It should be noted that the deviation factor threshold may be set by the implementer; in this embodiment it is set to 0.5.
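For illustration only, a minimal Python sketch of the centroid-based screening described above. The intensity-weighted centroid and the reconstructed deviation-factor expression are assumptions of this sketch, and the function names are illustrative rather than part of the original disclosure:

```python
import numpy as np

def intensity_centroid(img):
    """Centroid (x, y) of a grayscale image, weighted by pixel intensity."""
    img = img.astype(np.float64)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

def deviation_factor(face_img, template_img):
    """Deviation factor P_k = 1 - exp(-(|dx| + |dy|)) between face and template."""
    fx, fy = intensity_centroid(face_img)
    tx, ty = intensity_centroid(template_img)
    return 1.0 - np.exp(-(abs(fx - tx) + abs(fy - ty)))

def screen_candidates(face_img, templates, threshold=0.5):
    """Keep the templates whose deviation factor is below the preset threshold."""
    return [t for t in templates if deviation_factor(face_img, t) < threshold]
```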

Then, this embodiment matches the patient face image against each candidate template image to obtain the final matching image corresponding to the patient face image. Considering that the human eye first grasps the rough outline of an object before examining its details, and that matching the patient's facial information across multiple scales of the patient face image can improve matching precision, this embodiment uses an image pyramid algorithm to obtain the pyramid images of each layer for the patient face image and for the candidate template images. The images are down-sampled at multiple scales to obtain patient face pyramid images and candidate template images at different resolutions. The number of pyramid layers may be set by the implementer; in this embodiment it is set to 3. There are many down-sampling methods for constructing a pyramid image, such as Gaussian sampling, nearest-neighbour sampling and mean sampling; this embodiment uses mean sampling. The image pyramid algorithm and the specific pyramid construction process are well-known existing techniques.
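A minimal sketch of the three-layer pyramid built with mean sampling described above, assuming a grayscale input; layer 0 is the original resolution:

```python
import numpy as np

def mean_downsample(img):
    """Halve the resolution by averaging each non-overlapping 2x2 block."""
    h, w = img.shape[:2]
    img = img[: h - h % 2, : w - w % 2].astype(np.float64)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def build_pyramid(img, levels=3):
    """Return the mean-sampled pyramid images [level 0, level 1, level 2]."""
    pyramid = [img.astype(np.float64)]
    for _ in range(levels - 1):
        pyramid.append(mean_downsample(pyramid[-1]))
    return pyramid
```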

Each layer of the pyramid images of the patient face image is denoted in turn as a pyramid search image, and each layer of the pyramid images of a candidate template image is denoted in turn as a pyramid template image. Facial information is then matched between the pyramid search images and the pyramid template images. If every region of the image participated in matching the patient's facial information, the system would require heavy computation, matching would take longer, and the matching process would be easily affected by irrelevant regions. This embodiment therefore defines key facial regions, which specifically include the forehead, the eyebrows, the eyes, the nose, the mouth and the two cheeks; these nine key facial regions are used as matching blocks for analysing the patient's facial expression. Taking a pyramid search image as an example, this embodiment obtains the bounding box of each matching block in the pyramid search image through an object detection network. The training of the object detection network and the process of obtaining the bounding boxes of the matching blocks are well-known existing techniques and are not described here. The input of the object detection network is the pyramid search image and the output is the bounding box information of each matching block in the pyramid search image; the mouth bounding box, for example, is denoted $(x_m, y_m, w_m, h_m)$, where $(x_m, y_m)$ are the coordinates of the centre point of the mouth bounding box and $w_m$ and $h_m$ are its width and height, respectively.

By repeating the above method, the bounding box information of each matching block in every pyramid search image and every pyramid template image can be obtained through the object detection network.

At this point, the matching blocks in every pyramid search image and every pyramid template image are obtained by the above method.
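A sketch of cropping the nine matching blocks from an image once the detector has produced (centre x, centre y, width, height) boxes; the detector itself is outside the scope of this sketch and the region names are illustrative:

```python
# Illustrative names for the nine key facial regions used as matching blocks.
REGIONS = ["forehead", "left_eyebrow", "right_eyebrow", "left_eye",
           "right_eye", "nose", "mouth", "left_cheek", "right_cheek"]

def crop_block(img, box):
    """Crop one matching block; box = (cx, cy, w, h) as output by the detector."""
    cx, cy, w, h = box
    x0, y0 = max(int(round(cx - w / 2)), 0), max(int(round(cy - h / 2)), 0)
    x1, y1 = int(round(cx + w / 2)), int(round(cy + h / 2))
    return img[y0:y1, x0:x1]

def extract_matching_blocks(img, boxes):
    """boxes: dict mapping region name -> (cx, cy, w, h)."""
    return {name: crop_block(img, boxes[name]) for name in REGIONS if name in boxes}
```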

Further, because the angle of the patient's face in the captured patient face image is arbitrary, matching the patient's facial information directly against the matching blocks of each pyramid template image would reduce matching precision and increase matching difficulty. In traditional matching, the template image is usually rotated with a fixed rotation step to obtain rotated template images at different angles, but the step is often chosen arbitrarily, which affects matching precision; moreover, the rotation angle range is usually [0°, 360°], which is large and produces too many rotated template images, increasing the amount of computation during matching and reducing matching efficiency. This embodiment therefore performs adaptive rotation on the pyramid template images to obtain pyramid rotated template images at different angles, so that the rotation is kept as consistent as possible with the patient's face angle during matching, improving both matching speed and matching accuracy. The rotation angle range and rotation step of the pyramid template image are set adaptively, and the specific process is as follows.

The rotation angle range of the pyramid template image is obtained from the difference in face direction between the pyramid search image and the pyramid template image. First, the angle between the line connecting the centre points of the two eye matching blocks in the pyramid search image and the x-axis of the image coordinate system is obtained and denoted the face direction of the pyramid search image; likewise, the angle between the line connecting the centre points of the two eye matching blocks in the pyramid template image and the x-axis of the image coordinate system is obtained and denoted the face direction of the pyramid template image. The rotation angle range of the pyramid template image is then calculated from the difference between the two face directions, and it is positively correlated with this difference.

Preferably, as one embodiment of the present application, the rotation angle range may be expressed as $R = \left|\alpha_S - \alpha_T\right| + \beta$, where $R$ is the rotation angle range of the pyramid template image, $\alpha_S$ is the face direction of the pyramid search image S, $\alpha_T$ is the face direction of the pyramid template image T, and $\beta$ is an angle bias term greater than zero used to enlarge the rotation angle appropriately, preventing a rotation angle range so small that a template image consistent with the face angle in the patient face image cannot be obtained accurately. The value of $\beta$ is set by the implementer.

It should be noted that the rotation angle range is the same for the pyramid template images of all layers of the same candidate template image.

Repeating the above method gives the rotation angle range of each candidate template image, that is, the rotation angle range of every pyramid template image corresponding to each candidate template image.
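A sketch of the face-direction and rotation-angle-range computation described above; the default bias value is an illustrative placeholder, since the disclosure leaves its value to the implementer:

```python
import math

def face_direction(left_eye_center, right_eye_center):
    """Angle (degrees) between the eye-centre line and the image x-axis."""
    (x1, y1), (x2, y2) = left_eye_center, right_eye_center
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

def rotation_angle_range(search_eyes, template_eyes, bias_deg=5.0):
    """R = |alpha_S - alpha_T| + bias, with bias > 0 (bias_deg is illustrative)."""
    alpha_s = face_direction(*search_eyes)
    alpha_t = face_direction(*template_eyes)
    return abs(alpha_s - alpha_t) + bias_deg
```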

Further, regarding the rotation step of the pyramid template image: if the step is too small, too many rotated template images are produced and the overall matching speed decreases; if the step is too large, it may be impossible to obtain a template image whose face angle is consistent with that of the patient face pyramid image, reducing matching precision and causing mismatches. This embodiment therefore analyses the complexity of the patient face image: the simpler the patient face image, the simpler the matching process and the higher the matching accuracy, so the rotation step of the pyramid template image can be increased appropriately; conversely, to guarantee matching precision, the rotation step should be reduced. The adaptive rotation step of the pyramid template image is calculated as follows.

For the patient face image, this embodiment analyses its complexity. First, the patient face image is filtered with multi-scale Gabor filters. This embodiment uses multi-scale Gabor filters with 4 orientations and 6 scales, giving 24 Gabor filters; each Gabor filter is convolved with the patient face image to obtain a patient face feature map, so 24 patient face feature maps are obtained. The patient face feature maps characterise features such as the texture details of the patient's face in the patient face image. The scales and orientations of the multi-scale Gabor filters may be set by the implementer; the specific filtering process of multi-scale Gabor filters is a well-known existing technique and is not described in detail here.

The complexity of each patient face feature map is then computed. When the facial information in a patient face feature map is more complex, the texture in that feature map is less uniform and the grayscale distribution is more complex. Therefore, the gray-level co-occurrence matrix corresponding to each patient face feature map is obtained, its feature quantities are computed, and the complexity of the patient face feature map is calculated from them, where the complexity of a patient face feature map is positively correlated with the entropy of the corresponding gray-level co-occurrence matrix and negatively correlated with its energy and inverse variance.

Preferably, as one embodiment of the present application, the complexity of patient face feature map i may be expressed as $C_i = \dfrac{E_i}{A_i + V_i + \epsilon}$, where $C_i$ is the complexity of patient face feature map $i$; $E_i$, $A_i$ and $V_i$ are the entropy, energy and inverse variance of the gray-level co-occurrence matrix corresponding to patient face feature map $i$; and $\epsilon$ is a parameter that prevents the denominator from being zero, which may be set by the implementer. The mean complexity of all patient face feature maps is taken as the complexity $C$ of the patient face image; the higher the complexity of the patient face image, the more complex the facial information it contains.
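A sketch of the image-complexity computation described above, combining an OpenCV Gabor filter bank with a scikit-image gray-level co-occurrence matrix. The kernel parameters are illustrative, and GLCM homogeneity is used here as a stand-in for the inverse-variance feature:

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def gabor_feature_maps(img, n_orientations=4, n_scales=6):
    """Filter the face image with a 4-orientation, 6-scale Gabor bank (24 maps)."""
    maps = []
    for s in range(n_scales):
        lambd = 4.0 * (s + 1)                      # illustrative wavelengths
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            kernel = cv2.getGaborKernel((21, 21), 4.0, theta, lambd, 0.5, 0)
            maps.append(cv2.filter2D(img.astype(np.float64), cv2.CV_64F, kernel))
    return maps

def feature_map_complexity(fmap, eps=1e-6):
    """Complexity of one feature map: entropy / (energy + inverse variance + eps)."""
    q = cv2.normalize(fmap, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    energy = graycoprops(glcm, "ASM")[0, 0]
    inv_var = graycoprops(glcm, "homogeneity")[0, 0]   # stand-in for inverse variance
    return entropy / (energy + inv_var + eps)

def face_image_complexity(img):
    """Mean complexity over all Gabor feature maps."""
    return float(np.mean([feature_map_complexity(m) for m in gabor_feature_maps(img)]))
```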

From the computed complexity of the patient face image, the rotation step of the pyramid template image is set adaptively. The adaptive rotation step of the pyramid template image is $d = \left\lfloor d_0 \cdot \dfrac{\mu}{C} \right\rfloor$, where $d$ is the adaptive rotation step of the pyramid template image, $\lfloor\cdot\rfloor$ is the floor operation, $d_0$ is the initial rotation step, which may be set by the implementer, $\mu$ is the rotation-step regulation factor used to regulate the range of the rotation step, which may also be set by the implementer, and $C$ is the complexity of the patient face image.

To prevent the adaptive rotation step of the pyramid template image from being so large that it exceeds the rotation angle range or that the rotated pyramid template image cannot be obtained accurately, this embodiment sets a maximum rotation step $d_{\max} = \dfrac{R}{p}$, where $d_{\max}$ is the maximum rotation step and $p$ is a limiting coefficient used to bound the maximum rotation step, which may be set by the implementer; in this embodiment p = 3.

It should be noted that the adaptive rotation step is the same for the pyramid template images of all layers of the same candidate template image.

Repeating the above method gives the adaptive rotation step of each candidate template image, that is, the adaptive rotation step of every pyramid template image corresponding to each candidate template image.

At this point, the rotation angle range and adaptive rotation step of each pyramid template image can be obtained by the above method.
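A sketch of the adaptive rotation step and its upper bound, following the reconstructed expressions above; the initial step, regulation factor and default range are illustrative, since the disclosed values are not reproduced here:

```python
import math

def adaptive_rotation_step(complexity, init_step_deg=2.0, regulation=1.0,
                           angle_range_deg=30.0, limit_coeff=3.0):
    """d = floor(d0 * mu / C), capped at the maximum step d_max = R / p."""
    step = math.floor(init_step_deg * regulation / max(complexity, 1e-6))
    max_step = angle_range_deg / limit_coeff      # maximum rotation step (p = 3)
    return max(1.0, min(float(step), max_step))   # keep at least one degree per step
```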

The pyramid rotated template images of a pyramid template image are obtained from the rotation angle range and the adaptive rotation step combined with an image rotation transformation algorithm: the pyramid template image is rotated according to the rotation angle range and the adaptive rotation step, yielding several pyramid rotated template images, which are recorded as the pyramid rotated template image set of that pyramid template image. It should be noted that, according to its rotation angle range and adaptive rotation step, each pyramid template image yields multiple pyramid rotated template images through the image rotation transformation algorithm.

Repeating the above method gives the pyramid rotated template image set of each pyramid template image.
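A sketch of generating the pyramid rotated template image set with OpenCV's rotation transform; sweeping the range symmetrically about zero is an assumption of this sketch:

```python
import cv2

def rotated_template_set(template, angle_range_deg, step_deg):
    """Rotate the pyramid template image over [-R, R] in increments of the step."""
    h, w = template.shape[:2]
    center = (w / 2.0, h / 2.0)
    rotated, angle = [], -angle_range_deg
    while angle <= angle_range_deg + 1e-9:
        m = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated.append(cv2.warpAffine(template, m, (w, h),
                                      flags=cv2.INTER_LINEAR,
                                      borderMode=cv2.BORDER_REPLICATE))
        angle += step_deg
    return rotated
```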

Finally, the matching image of each pyramid search image is selected according to the pyramid rotated template image set of each pyramid template image, so that the patient's facial information can be matched and recognised. Each matching block in the pyramid search image is denoted in turn as a pyramid search matching block, and each matching block in a pyramid rotated template image is denoted in turn as a pyramid rotated template matching block. For the pyramid search image, the texture feature matrix of each matching block is extracted in turn. Taking a pyramid search matching block as an example, the extraction of the texture feature matrix is described in detail as follows.

The Hessian matrix of each pixel in the pyramid search matching block is calculated to obtain the eigenvalues and eigenvectors of that pixel's Hessian matrix. The eigenvalue with the larger absolute value is taken as the texture variation indicator of the pixel, and the direction of the eigenvector corresponding to that eigenvalue is taken as the texture orientation indicator of the pixel. The texture variation indicator and texture orientation indicator of each pixel in the pyramid search matching block form that pixel's texture feature pair.

The texture feature matrix of the pyramid search matching block is then constructed from the texture features of all pixels in the block, with the texture feature pair of each pixel forming one row of the matrix: $F_j^S = \begin{bmatrix} \lambda_1 & \theta_1 \\ \lambda_2 & \theta_2 \\ \vdots & \vdots \\ \lambda_n & \theta_n \end{bmatrix}$, where $F_j^S$ is the texture feature matrix of pyramid search matching block $j$, $S$ denotes the pyramid search image, $(\lambda_1, \theta_1)$ is the texture feature pair of the first pixel in pyramid search matching block $j$, with $\lambda_1$ and $\theta_1$ being its texture variation indicator and texture orientation indicator respectively, $(\lambda_n, \theta_n)$ is the texture feature pair of the n-th pixel, $n$ is the number of pixels in pyramid search matching block $j$, and $\lambda_n$ and $\theta_n$ are the texture variation indicator and texture orientation indicator of the n-th pixel.

The texture feature matrix of a matching block mainly characterises the facial texture detail information inside the matching block. It should be noted that computing the Hessian matrix of a pixel and the eigenvalues and eigenvectors of a Hessian matrix are well-known existing techniques.

Repeating the above method gives the texture feature matrix of each matching block of the pyramid search image, and likewise the texture feature matrix of each matching block of every pyramid rotated template image in the pyramid rotated template image set.
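A sketch of the per-pixel Hessian texture features described above: second derivatives from Sobel filters, the eigenvalue of larger magnitude as the texture variation indicator, the angle of its eigenvector as the texture orientation indicator, stacked as an n x 2 matrix:

```python
import cv2
import numpy as np

def texture_feature_matrix(block):
    """n x 2 matrix with one (variation, orientation) row per pixel of the block."""
    block = block.astype(np.float64)
    # Second-order derivatives approximate the per-pixel Hessian [[Ixx, Ixy], [Ixy, Iyy]].
    ixx = cv2.Sobel(block, cv2.CV_64F, 2, 0, ksize=3)
    iyy = cv2.Sobel(block, cv2.CV_64F, 0, 2, ksize=3)
    ixy = cv2.Sobel(block, cv2.CV_64F, 1, 1, ksize=3)
    rows = []
    for xx, xy, yy in zip(ixx.ravel(), ixy.ravel(), iyy.ravel()):
        vals, vecs = np.linalg.eigh(np.array([[xx, xy], [xy, yy]]))
        k = int(np.argmax(np.abs(vals)))              # eigenvalue of larger magnitude
        rows.append((vals[k], np.arctan2(vecs[1, k], vecs[0, k])))
    return np.array(rows)
```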

This embodiment selects the matching image of the pyramid search image in order to extract the final matching image of the patient face image. The matching image of the pyramid search image is obtained from the Pearson coefficients between the texture feature matrices of corresponding matching blocks in the pyramid search image and in each same-layer pyramid rotated template image, specifically as follows.

Each pyramid rotated template image in the same layer as the pyramid search image is denoted in turn as a same-layer pyramid rotated template image. The correlation coefficient between the texture feature matrix of each matching block in the pyramid search image and the texture feature matrix of the corresponding matching block in the same-layer pyramid rotated template image is calculated, and the sum of all the correlation coefficients is taken as the matching similarity between the pyramid search image and that same-layer pyramid rotated template image.

Preferably, as one embodiment of the present application, the Pearson correlation coefficient is used as the correlation coefficient; its calculation is a well-known existing technique. In other embodiments, other existing correlation measures may be used to compute the correlation coefficient between the matrices, and the present application imposes no particular restriction.

Repeating the above method gives the matching similarity between the pyramid search image and every same-layer pyramid rotated template image; the same-layer pyramid rotated template image with the highest matching similarity is taken as the matching image of the pyramid search image.
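A sketch of the block-wise Pearson correlation and matching similarity described above; flattening the two feature matrices before correlating them, and resizing corresponding blocks to a common size beforehand, are assumptions of this sketch:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two equally sized feature matrices."""
    a, b = a.ravel() - a.ravel().mean(), b.ravel() - b.ravel().mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def matching_similarity(search_blocks, template_blocks):
    """Sum of correlations between corresponding matching-block feature matrices."""
    return sum(pearson(search_blocks[name], template_blocks[name])
               for name in search_blocks if name in template_blocks)

def best_rotated_template(search_blocks, rotated_block_sets):
    """Index and similarity of the same-layer rotated template with the highest score."""
    sims = [matching_similarity(search_blocks, blocks) for blocks in rotated_block_sets]
    best = int(np.argmax(sims))
    return best, sims[best]
```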

Repeating the above method gives the matching image of the pyramid search image of each layer, that is, the matching similarity between each layer's pyramid search image and its corresponding matching image.

The mean of the matching similarities between the pyramid search images of all layers and their corresponding matching images is computed and taken as the matching degree between the patient face image and the corresponding candidate template image.

Repeating the above method gives the matching degree between the patient face image and every candidate template image.

The candidate template image with the highest matching degree is taken as the final matching image of the patient face image, thereby recognising the patient's facial information.
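A short sketch tying the layers together: the matching degree of a candidate is the mean of its per-layer best similarities, and the candidate with the highest degree is the final matching image (`best_rotated_template` is the illustrative helper from the previous sketch):

```python
import numpy as np

def matching_degree(per_layer_search_blocks, per_layer_rotated_sets):
    """Mean of the best per-layer matching similarities for one candidate template."""
    sims = [best_rotated_template(search_blocks, rotated_sets)[1]
            for search_blocks, rotated_sets
            in zip(per_layer_search_blocks, per_layer_rotated_sets)]
    return float(np.mean(sims))

def final_match(per_layer_search_blocks, candidates):
    """candidates: one list of per-layer rotated-template block sets per candidate."""
    degrees = [matching_degree(per_layer_search_blocks, c) for c in candidates]
    return int(np.argmax(degrees)), degrees
```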

Step S003: based on the matching result, issue corresponding early-warning prompts to medical staff so that they can provide the patient with targeted nursing and treatment.

The patient's face is matched and recognised according to the final matching image corresponding to the patient face image, and an early-warning prompt is issued according to the matching result, so that medical staff can accurately understand the patient's condition, take targeted nursing measures and provide corresponding nursing and treatment. It should be noted that the specific warning content corresponding to the patient's facial information may be set by the implementer; for example, when the facial expression corresponding to the final matching image of the patient face image is frowning or an open mouth, the system issues a warning that the patient is uncomfortable, so that medical staff can accurately grasp the patient's condition.

Preferably, as one embodiment of the present application, FIG. 2 shows a schematic diagram of the specific process of video surveillance management for face care.

It is worth noting that the positive correlation described in the present application means that the variables change in the same direction, that is, the dependent variable increases (decreases) as the independent variable increases (decreases). The specific calculation relationship is decided by the implementer according to the actual application scenario and may be an additive, multiplicative or other positively correlated relationship; the present application imposes no particular restriction.

Based on the same inventive concept as the above method, an embodiment of the present application further provides a video surveillance management system for face care, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of any one of the above video surveillance management methods for face care.

In summary, by matching the patient's face information against the face information in the face-care database, the embodiments of the present application can monitor the patient's condition. The method has a wide range of applications, is particularly suitable for critically ill patients and patients unable to move their limbs, and reduces the time and effort required of medical staff.

To reduce the computational cost of the system and increase matching speed, the present application extracts the individual matching blocks of the face, which avoids the heavy computation and slow matching that arise when the entire face region participates in matching, as well as the low accuracy caused by irrelevant regions interfering with the matching process. A pyramid image is also constructed so that multi-resolution matching improves the matching accuracy of the patient face image. Considering that the face angle in the captured patient face image is arbitrary, and that matching the patient's face information directly through the matching blocks of the captured image would lower matching accuracy and increase matching difficulty, the present application rotates the candidate template images and adaptively sets the rotation angle range and rotation step size of each candidate template image according to the complexity of the patient face image, improving matching accuracy while reducing the computational load of the system.
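
To make the adaptive setting of the rotation step concrete, a minimal Python sketch is given below. It assumes a simple form in which the step shrinks (down to a floor of 1 degree) as image complexity grows, scaled by a regulation factor; the exact formula, parameter values, and names are assumptions for illustration and are not the formula disclosed by the present application.

```python
import math

def adaptive_rotation_step(initial_step_deg: float,
                           complexity: float,
                           regulation_factor: float = 1.0) -> int:
    """Illustrative adaptive rotation step: the more complex the patient face
    image, the finer (smaller) the rotation step, with a lower bound of 1 degree.

    This is only one plausible realization of "step size decreases as complexity
    increases"; the disclosed method may use a different form.
    """
    step = math.floor(initial_step_deg / (1.0 + regulation_factor * complexity))
    return max(1, step)

# Example: an initial 10-degree step refined for a relatively complex image.
print(adaptive_rotation_step(10, complexity=2.5, regulation_factor=0.8))  # -> 3
```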

It should be noted that the order of the above embodiments of the present application is for description only and does not indicate the relative merits of the embodiments. The foregoing has described specific embodiments of this specification. In addition, the processes depicted in the accompanying drawings do not necessarily require the specific or sequential order shown to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.

The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the other embodiments.

The above descriptions are merely preferred embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. The video monitoring management method for face care is characterized by comprising the following steps:
Obtaining a face image of a patient and each template image in a face nursing database, and obtaining a deviation factor according to the difference between the centroid coordinates of the face image of the patient and the centroid coordinates of the template image; taking template images with deviation factors lower than a preset threshold value as candidate template images; respectively acquiring pyramid images of each layer of face images of a patient and candidate template images;
Each layer of pyramid images of the face image of the patient are sequentially marked as pyramid search images, and each layer of pyramid images of the candidate template images are sequentially marked as pyramid template images; obtaining a rotation angle range of the pyramid template image according to the face direction difference of the pyramid search image and the pyramid template image;
Filtering the face images of the patients by adopting a multi-scale Gabor filter to obtain a plurality of face feature images of the patients; obtaining the complexity of the face image of each patient according to the gray distribution condition of the face feature image of each patient, and calculating the self-adaptive rotation step length of the pyramid template image;
obtaining pyramid rotation template images of the pyramid template images according to the rotation angle range and the self-adaptive rotation step length in combination with an image rotation transformation algorithm;
each matching block in the pyramid search image is sequentially marked as a pyramid search matching block, and each matching block in the pyramid rotation template image is sequentially marked as a pyramid rotation template matching block;
obtaining a texture feature matrix of the pyramid search matching block according to the feature values and feature vectors of all pixel point hessian matrixes in the pyramid search matching block, and correspondingly obtaining the texture feature matrix of the pyramid rotation template matching block; obtaining a matched image of the pyramid search image according to the correlation between the pyramid search image and the texture feature matrix of the corresponding matched block in each pyramid rotation template image of the same layer;
And analyzing the average value of the matching similarity between the pyramid search images of all layers and the corresponding matching images to obtain the matching degree of the face image of the patient and the candidate template image, obtaining the final matching image of the face image of the patient, and carrying out early warning prompt according to the matching result.
2. The video monitoring management method for face care according to claim 1, wherein the deviation factor between the patient face image and the template image k is calculated with an exponential function based on the natural constant e, applied to the absolute values of the difference in x coordinate and the difference in y coordinate between the centroid coordinates of the patient face image and those of the template image k.
3. The video monitoring management method for face care according to claim 1, wherein the acquiring process of the rotation angle range of the pyramid template image is as follows:
Acquiring the angle between the connecting line of the center points of the two eye matching blocks in the pyramid search image and the x axis of the image coordinate system, marking the angle as the face direction of the pyramid search image, and correspondingly, acquiring the face direction of the pyramid template image;
and calculating a rotation angle range of the pyramid template image through the difference between the face direction of the pyramid search image and the face direction of the pyramid template image, wherein the rotation angle range of the pyramid template image is positively associated with the difference between the face directions of the pyramid search image and the pyramid template image.
4. The video monitoring management method for face care according to claim 1, wherein the obtaining process of the complexity of the face image of the patient is:
Acquiring a gray level co-occurrence matrix corresponding to each patient face feature map, and calculating the complexity of the patient face feature map according to the feature quantities of the gray level co-occurrence matrix, wherein the complexity of the patient face feature map is positively associated with the entropy of the corresponding gray level co-occurrence matrix, and negatively associated with the energy and inverse variance of the corresponding gray level co-occurrence matrix;
and taking the complexity average value of all the face feature images of the patient as the complexity of the face images of the patient.
5. The video monitoring management method for face care according to claim 1, wherein the adaptive rotation step length of the pyramid template image is calculated from an initial rotation step size, a rotation step size regulation factor, and the complexity of the patient face image, with a rounding-down (floor) operation applied to the result.
6. The video monitoring management method for face care according to claim 1, wherein the obtaining the texture feature matrix of the pyramid search matching block according to the feature values and feature vectors of all pixel point hessian matrices in the pyramid search matching block comprises the following specific steps:
Calculating the Hessian matrix of each pixel point in the pyramid search matching block to obtain the eigenvalues and eigenvectors of the Hessian matrix of each pixel point; taking the eigenvalue with the larger absolute value as the texture change index of each pixel point; taking the direction of the eigenvector corresponding to that eigenvalue as the texture trend index of each pixel point; and forming a texture feature two-tuple for each pixel point from the texture change index and the texture trend index;
And constructing the texture feature matrix of the pyramid search matching block from the texture features of all pixel points in the pyramid search matching block, wherein the texture feature two-tuples of all pixel points in the pyramid search matching block are used as the rows of the texture feature matrix.
7. The video monitoring management method for face care according to claim 1, wherein the acquiring process of the matching image of the pyramid search image is as follows:
Sequentially marking each pyramid rotation template image on the same layer as the pyramid search image as a pyramid rotation template image on the same layer; calculating correlation coefficients between texture feature matrixes of all matching blocks in the pyramid search image and texture feature matrixes of corresponding matching blocks in the same-layer pyramid rotation template image, and taking the sum of all the correlation coefficients as matching similarity between the pyramid search image and the same-layer pyramid rotation template image;
and determining a matching image of the pyramid search image based on the matching similarity.
8. The video monitoring management method for face care according to claim 1, wherein the matching image of the pyramid search image is a same-layer pyramid rotation template image with highest matching similarity.
9. The video monitoring management method for face care according to claim 1, wherein the final matching image acquisition process of the face image of the patient is: and taking the candidate template image with the highest matching degree as a final matching image of the face image of the patient.
10. A video surveillance management system for face care comprising a memory, a processor and a computer program stored in the memory and running on the processor, characterized in that the processor implements the steps of the method according to any of claims 1-9 when executing the computer program.