
CN112001280A - Real-time online optimization face recognition system and method - Google Patents


Info

Publication number
CN112001280A
CN112001280A (application CN202010812277.XA)
Authority
CN
China
Prior art keywords
face
data
detection
features
similarity
Prior art date
Legal status
Granted
Application number
CN202010812277.XA
Other languages
Chinese (zh)
Other versions
CN112001280B (en)
Inventor
李百成
张翊
黎嘉朗
Current Assignee
Haojing Intelligent Technology Co.,Ltd.
Whale Cloud Technology Co Ltd
Original Assignee
Whale Cloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Whale Cloud Technology Co Ltd filed Critical Whale Cloud Technology Co Ltd
Priority to CN202010812277.XA priority Critical patent/CN112001280B/en
Publication of CN112001280A publication Critical patent/CN112001280A/en
Application granted granted Critical
Publication of CN112001280B publication Critical patent/CN112001280B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

A real-time, online-optimizable face recognition method and system. The method comprises the steps of: acquiring image data to be recognized and parsing the image data to obtain input data; performing face detection on the input data with a face detection network, wherein the application type of the input data is determined before each detection and the corresponding inference branch is selected according to that type, yielding the face region data in the input data; reviewing the face quality of the face region data according to at least face sharpness, face brightness, face angle, and face visibility; and performing face feature extraction and HNSW-based face matching on the face region data that passes the quality review, thereby realizing face recognition. The method optimizes the face detection inference route based on the scene type and can expand the face library online to improve recognition accuracy.

Figure 202010812277

Description

A real-time, online-optimizable face recognition system and method

Technical Field

The present invention relates to a real-time, online-optimizable face recognition system and method.

Background Art

Face recognition identifies a person from facial features. Because it is contactless and intuitive, it has been widely adopted across many real-life domains. However, the quality of the face images captured by a device varies greatly across application scenarios, which degrades both the efficiency and the accuracy of face recognition.

Summary of the Invention

In view of the above deficiencies of the prior art, one objective of the present invention is to provide a real-time, online-optimizable face recognition system and method.

An embodiment of the present invention discloses a real-time, online-optimizable face recognition method comprising the following steps: acquiring image data to be recognized and parsing the image data to obtain input data; performing face detection on the input data with a face detection network, wherein the application type of the input data is determined before each detection and the corresponding inference branch is selected according to that type, yielding the face region data in the input data; reviewing the face quality of the face region data according to at least face sharpness, face brightness, face angle, and face visibility; and performing face feature extraction and HNSW-based face matching on the face region data that passes the quality review, thereby realizing face recognition.

In a possible embodiment, the method further comprises adding face features that satisfy a preset enrollment condition to the face feature base library and, once a face feature is confirmed for enrollment, inserting it into the HNSW multi-layer graph data structure according to the probability rule for expanding face features.

In a possible embodiment, the preset enrollment condition is: let the similarity be similarity, the similarity threshold be t_sim, the expansion similarity threshold be t_ext with t_ext > t_sim, and the number of records already expanded for the current feature be cnt_ext; when similarity > t_ext and cnt_ext < 5, the face feature is enrolled and marked as pending confirmation.

In a possible embodiment, the probability rule for expanding face features is: suppose there are M graph layers ordered from shallow to deep as layer_1, layer_2, ..., layer_M, with corresponding insertion probabilities p_1, p_2, ..., p_M satisfying the following relations:

Figure BDA0002631389880000021

Figure BDA0002631389880000022

In a possible embodiment, the face detection comprises: determining the application type of the input data; detecting faces of different sizes on feature maps of different scales, dividing these scales into three levels, small, medium, and large, and mapping the levels to different inference branches of the face detection; and, based on the application type, selectively executing the corresponding inference branches of the face detection network, running the small and medium branches in scenes where faces are far away and the small and large branches in scenes where faces are close, to obtain first face region data.

In a possible embodiment, the method further comprises filtering the first face region data by confidence to obtain second face region data.

In a possible embodiment, the face region data is divided into multiple blocks and the gray values of each block are counted separately; the darkness and the brightness of the face region are computed from these gray values, and these two indicators are used to judge the brightness of the face region.

In a possible embodiment, face key points are extracted with a face key point extraction algorithm, where the key point information comprises the horizontal coordinate, the vertical coordinate, and the visibility of each key point; the face angles in the horizontal and vertical directions are obtained from the key point information; alternatively, the number of key points whose visibility meets a preset value is counted, and the face quality is reviewed based on that count.

In a possible embodiment, at least 21 face key points are extracted.

In a possible embodiment, the data structure for HNSW graph search is constructed from the face feature library, and the HNSW vector matching algorithm is used to find the face feature with the greatest similarity.

A real-time, online-optimizable face recognition system, comprising: an input module configured to acquire image data to be recognized and parse the image data to obtain input data; a face detection module configured to perform face detection on the input data with a face detection network, wherein the application type of the input data is determined before each detection and the corresponding inference branch is selected according to that type, yielding the face region data in the input data; a face quality review module configured to review the face quality of the face region data according to at least face sharpness, face brightness, face angle, and face visibility; and a face recognition module configured to perform face feature extraction and HNSW-based face matching on the face region data that passes the quality review, thereby realizing face recognition.

In a possible embodiment, the system further comprises a face library online expansion module configured to add face features that satisfy the preset enrollment condition to the face feature base library and, once a face feature is confirmed for enrollment, insert it into the HNSW multi-layer graph data structure according to the probability rule for expanding face features.

In a possible embodiment, the preset enrollment condition is: let the similarity be similarity, the similarity threshold be t_sim, the expansion similarity threshold be t_ext with t_ext > t_sim, and the number of records already expanded for the current feature be cnt_ext; when similarity > t_ext and cnt_ext < 5, the face feature is enrolled and marked as pending confirmation.

In a possible embodiment, the probability rule for expanding face features is: suppose there are M graph layers ordered from shallow to deep as layer_1, layer_2, ..., layer_M, with corresponding insertion probabilities p_1, p_2, ..., p_M satisfying the following relations:

Figure BDA0002631389880000041

Figure BDA0002631389880000042

In a possible embodiment, the face detection module is further configured to: determine the application type of the input data; detect faces of different sizes on feature maps of different scales, divide these scales into three levels, small, medium, and large, and map the levels to different inference branches of the anchor-box-based face detection; and, based on the application type, selectively execute the corresponding branches of the face detection network, running the small and medium branches in scenes where faces are far away and the small and large branches in scenes where faces are close, to obtain first face region data.

In a possible embodiment, the face detection module is further configured to filter the first face region data by confidence to obtain second face region data.

In a possible embodiment, the face quality review module is further configured to divide the face region data into multiple blocks and count the gray values of each block separately; the darkness and the brightness of the face region are computed from these gray values, and these two indicators are used to judge the brightness of the face region.

In a possible embodiment, the face quality review module is further configured to extract face key points with a face key point extraction algorithm, where the key point information comprises the horizontal coordinate, the vertical coordinate, and the visibility of each key point; obtain the face angles in the horizontal and vertical directions from the key point information; or, count the number of key points whose visibility meets a preset value and review the face quality based on that count.

In a possible embodiment, the face recognition module is further configured to construct the data structure for HNSW graph search from the face feature library and use the HNSW vector matching algorithm to find the face feature with the greatest similarity.

A computer storage medium storing a computer program which, when executed, implements the aforementioned method.

Compared with the prior art, the present invention has the following beneficial effects:

This scheme uses a dedicated face key point detection network to extract face key points, outputting not only the key point positions but also their visibility; computing the face angle from 21 key points makes the output angle more accurate. The method optimizes the face detection inference route based on the scene type and can expand the face library online to improve recognition accuracy.

Description of the Drawings

Figure 1 is a flow chart of a method according to an embodiment of the present invention;

Figure 2 is a schematic diagram of a system structure according to an embodiment of the present invention;

Figure 3 is a schematic diagram of a face detection structure according to an embodiment of the present invention;

Figure 4 is a schematic diagram of the inference branch selection structure of a face detection model according to an embodiment of the present invention;

Figure 5 is a schematic flowchart of a face brightness calculation according to an embodiment of the present invention.

Detailed Description

To facilitate understanding by those skilled in the art, the present invention is further described below with reference to the embodiments and the accompanying drawings; the content of the embodiments does not limit the present invention.

As shown in Figure 1, an embodiment of the present invention discloses a real-time, online-optimizable face recognition method. Through steps such as acquiring the image data to be recognized, face detection, face quality review, and deep-learning-based face feature recognition, the method optimizes the face detection inference route based on the scene type and can expand the face library online to improve recognition accuracy. The invention is applicable to multiple application scenarios such as access control, surveillance, and VIP identification. The method comprises the following steps:

S100: Acquire the image data to be recognized and parse it to obtain input data. In a concrete implementation, image data such as picture sequences or video stream input from a front-end application, or offline video, can be received.

S102: Perform face detection on the input data with a face detection network, wherein the application type of the input data is determined before each detection and the corresponding inference branch is selected according to that type, yielding the face region data in the input data.

The application type of the input data is determined before face detection. The determination method may include, but is not limited to: 1) the front-end application passes an application identifier directly; 2) the type is inferred from the picture or video resolution; 3) the system runs online for a period of time, counts the occurrences and the quality of faces of different sizes, and from these statistics determines which detection branches suit the application's input.

Faces of different sizes are detected on feature maps of different scales. These scales are divided into three levels, small, medium, and large, and the levels are mapped to different inference branches of the anchor-box-based face detection. Based on the application type, the corresponding inference branches of the face detection network are selectively executed: the small and medium branches are run in scenes where faces are far away, and the small and large branches in scenes where faces are close, yielding first face region data. This reduces the amount of computation and provides a coarse filter based on face size.
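The scene-to-branch mapping described above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the dictionary, the `select_branches` helper, and the fallback of running all three branches are assumptions.

```python
# Hypothetical sketch of scene-based inference branch selection.
# The three anchor levels (small/medium/large) come from the text.
BRANCHES_BY_SCENE = {
    "far":  ("small", "medium"),  # distant faces -> small/medium anchors
    "near": ("small", "large"),   # close faces   -> small/large anchors
}

def select_branches(scene_type):
    """Return the detection branches to execute for a scene type;
    run every branch when the scene type is unknown."""
    return BRANCHES_BY_SCENE.get(scene_type, ("small", "medium", "large"))
```

Running only a subset of branches skips the anchor heads that cannot match the expected face sizes, which is where the computation saving comes from.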

The first face region data is filtered by confidence to obtain second face region data.

S103: Review the face quality of the face region data according to at least face sharpness, face brightness, face angle, and face visibility. The face region data here may be the second face region data.

The brightness review may divide the face region data into multiple blocks and count the gray values of each block separately; the darkness and the brightness of the face region are then computed from these gray values, and these two indicators are used to judge the brightness of the face region.

The face region picture is converted to a grayscale image and divided into m×n regions of equal size. Let the region x_ij in row i and column j have size h×w; the gray values are counted according to the following rules:

Figure BDA0002631389880000071

The two brightness statistics of region x_ij are:

Figure BDA0002631389880000072

The brightness statistics of the regions are weighted and summed to obtain the two face brightness indicators score_bright_dark and score_bright_light:

Figure BDA0002631389880000073

score_bright_dark = sum(element_wise(W, summary_d))

score_bright_light = sum(element_wise(W, summary_l)).

The face visibility review may extract face key points with a face key point extraction algorithm, where the key point information comprises the horizontal coordinate, the vertical coordinate, and the visibility of each key point. The face angles in the horizontal and vertical directions are obtained from the key point information; alternatively, the number of key points whose visibility meets a preset value is counted, and the face quality is reviewed based on that count.

S104: Perform face feature extraction and HNSW-based face matching on the face region data that has passed the face quality review, thereby realizing face recognition.

The face feature extraction may align the face using the aforementioned face key points, feed the aligned face into a feature extraction network to extract the face features, and normalize the resulting features.

The face feature matching may construct the HNSW graph search data structure from the face feature database and use the HNSW vector matching algorithm to find the face feature with the greatest similarity.
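For illustration, the matching step can be approximated with an exact cosine-similarity search. This stand-in returns the answer that HNSW approximates, but without the multi-layer graph index (in practice a library such as hnswlib supplies that); the `match_face` function name and array shapes are assumptions.

```python
import numpy as np

def match_face(query, gallery):
    """Exact nearest-neighbour stand-in for HNSW matching: return the
    (index, similarity) of the gallery row with the highest cosine
    similarity to the query feature vector."""
    q = np.asarray(query, dtype=np.float64)
    g = np.asarray(gallery, dtype=np.float64)
    q = q / np.linalg.norm(q)                          # normalise query
    g = g / np.linalg.norm(g, axis=1, keepdims=True)   # normalise gallery
    sims = g @ q                                       # cosine similarities
    best = int(np.argmax(sims))
    return best, float(sims[best])
```

Because the text states that extracted features are L2-normalised, cosine similarity reduces to an inner product over the feature library.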

In one embodiment, the method of the present invention further comprises:

S105: Add face features that satisfy the preset enrollment condition to the face feature base library and, once a face feature is confirmed for enrollment, insert it into the HNSW multi-layer graph data structure according to the probability rule for expanding face features.

The preset enrollment condition is: let the similarity be similarity, the similarity threshold be t_sim, the expansion similarity threshold be t_ext with t_ext > t_sim, and the number of records already expanded for the current feature be cnt_ext; when similarity > t_ext and cnt_ext < 5, the face feature is enrolled and marked as pending confirmation.
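A minimal sketch of this enrollment condition follows. The threshold values below are illustrative placeholders; the text fixes only that t_ext > t_sim and that at most 5 expanded records are allowed per feature.

```python
def should_enroll(similarity, cnt_ext, t_sim=0.6, t_ext=0.8):
    """Return True when a face feature qualifies for enrollment as
    'pending confirmation': the similarity must exceed the expansion
    threshold t_ext (itself above t_sim) and the feature must have
    fewer than 5 expanded records. Threshold defaults are assumed."""
    assert t_ext > t_sim, "the text requires t_ext > t_sim"
    return similarity > t_ext and cnt_ext < 5
```

The cap on cnt_ext keeps any single identity from accumulating an unbounded number of auto-enrolled variants.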

The probability rule for expanding face features is: suppose there are M graph layers ordered from shallow to deep as layer_1, layer_2, ..., layer_M, with corresponding insertion probabilities p_1, p_2, ..., p_M satisfying the following relations:

Figure BDA0002631389880000091

Figure BDA0002631389880000092
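The exact probabilities p_1…p_M appear in the formula images above, which are not reproduced here. In standard HNSW the insertion level is drawn from an exponentially decaying distribution, level = floor(−ln(U)·mL); the sketch below assumes that convention as a stand-in for the patent's rule.

```python
import math
import random

def sample_insert_layer(m_l=1.0, max_layer=8, rng=random):
    """Sample how deep a new feature is inserted into the multi-layer
    graph. Standard HNSW draws level = floor(-ln(U) * mL) with U uniform
    in (0, 1], so deeper layers are reached with exponentially decaying
    probability. This is an assumption standing in for the patent's
    unreproduced p_1..p_M."""
    u = 1.0 - rng.random()            # uniform in (0, 1]
    level = int(-math.log(u) * m_l)
    return min(level, max_layer - 1)
```

Under this rule most inserted features live only in the shallowest layer, which keeps the upper layers sparse enough for fast coarse routing during search.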

The method further comprises data output: the detected and recognized face data is returned to the front-end application by request, message queue, or similar means.

Corresponding to the foregoing method, as shown in Figure 2, an embodiment of the present invention further discloses a real-time, online-optimizable face recognition system 10, comprising an input module 101, a face detection module 102, a face quality review module 103, a face recognition module 104, a face library online expansion module 105, and a data output module 106.

The input module 101 receives picture sequences, video stream input, offline video, and the like from the front-end application and parses them into a data structure that the subsequent pipeline can process.

The data output module 106 integrates the face detection and face recognition results from the face recognition module and returns them to the front-end application or a message queue.

The face detection module 102 receives the data from the input module 101, determines the application type of the input data, uses an anchor-box-based face detection network, and selects a specific inference branch based on the application type to perform face detection, obtaining the face regions in the picture.

The face quality review module 103 measures whether the face quality passes based on four aspects, the sharpness and brightness of the face region, the face angle, and face occlusion, and discards faces that fail the review.

The face recognition module 104 extracts features from faces whose quality passes and matches the resulting feature vector against the face library features, using the HNSW vector matching algorithm to find the face most similar to the query feature. Face features are represented as high-dimensional feature vectors; before the system runs, the multi-layer graph data structure used by HNSW must be constructed from the face library, or loaded from a serialized file.

The face library online expansion module 105 automatically decides whether to enroll the query face into the library, based on the recognition result, similarity information, and face quality evaluation value from the face recognition module 104.

The system corresponds to the foregoing method embodiments and is not described again here.

The following description uses a video stream as the image data; the specific method steps are as follows.

The video stream data of multi-channel surveillance video is input to the input module 101, which parses the stream into frame data. Recognition may be performed once every 5 frames: the frame image, frame time, and the unique ID of the video stream are passed to the face detection module, and the remaining frames are not processed for recognition.

The face detection module 102 uses an anchor-box-based face detection model. This implementation may use the RetinaFace algorithm with six anchor box sizes, small: 16x16 and 32x32, medium: 64x64 and 128x128, large: 256x256 and 512x512, where the anchor box size indicates the smallest detectable face size. See Figure 3 for the RetinaFace detection structure.

The face detection results are filtered based on the output confidence, and the remaining detections are passed to the face quality review module 103. Detections with confidence below 0.9 are filtered out; that is, only detections with score_det ≥ 0.9 are kept as final face detection results.

In this implementation, the inference branch of face detection can be selected by automatic statistical discrimination, divided into an initial operating stage and a stage in which inference branch selection is enabled.

In the initial stage, the distribution of face sizes must be collected, so the small, medium, and large detection branches are all executed for a period of time after startup. Typically, a peak-traffic period is chosen and the number of faces detected by each branch is counted separately.

Once the statistics reach a sufficient count, inference-branch selection is enabled: when the statistics show relatively many small faces, the small and medium branches are run; when they show relatively many large faces, the medium and large branches are run. See Figure 4 for the selection of inference branches.
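The statistics-driven branch selection can be sketched as a small Python function. The "majority" threshold of 0.5 is an illustrative assumption, since the text does not quantify "relatively many":

```python
def choose_branches(counts, ratio_threshold=0.5):
    """Pick detection branches from per-branch face counts gathered during
    the startup statistics period.

    counts: dict with keys 'small', 'medium', 'large' mapping to the number
    of faces each branch detected.  ratio_threshold is a hypothetical
    parameter, not taken from the patent.
    """
    total = sum(counts.values())
    if total == 0:
        # Not enough data yet: keep running every branch.
        return ["small", "medium", "large"]
    if counts["small"] / total >= ratio_threshold:
        return ["small", "medium"]    # mostly small (far-away) faces
    if counts["large"] / total >= ratio_threshold:
        return ["medium", "large"]    # mostly large (close-up) faces
    return ["small", "medium", "large"]
```

With mostly small faces the far-scene branch pair is chosen; with mostly large faces the near-scene pair is chosen.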

Detected faces are sent to face quality review module 103. Quality review consists of four checks: sharpness, brightness, face angle, and face occlusion; only when all four conditions are met does a face image proceed to recognition.

Face sharpness is quantified with an edge-detection operator: the Laplacian operator extracts face edge information, and the variance of its response is used as the sharpness score score_sharp. When score_sharp > 360, the face is considered to meet the sharpness requirement.
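A pure-Python sketch of the Laplacian-variance sharpness score. In practice one would use OpenCV's `cv2.Laplacian(gray, cv2.CV_64F).var()`; this dependency-free version uses the simple 4-neighbour Laplacian kernel as an illustrative choice:

```python
def laplacian_variance(gray):
    """Sharpness score: variance of the 4-neighbour Laplacian response.

    gray: 2-D list of grayscale values.  Border pixels are skipped for
    simplicity.  Flat images yield 0; strong edges raise the variance.
    """
    h, w = len(gray), len(gray[0])
    responses = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (gray[i - 1][j] + gray[i + 1][j]
                   + gray[i][j - 1] + gray[i][j + 1]
                   - 4 * gray[i][j])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

A blurry face spreads its edges and lowers this variance, which is why a threshold such as score_sharp > 360 can separate sharp from blurred crops.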

Face brightness is measured with two indicators, score_bright_dark and score_bright_light, quantifying the darkness and the brightness of the face respectively. Before computation the face image is converted to grayscale and divided into a 3×3 grid of equal-sized regions. Let the region x_ij in row i, column j have size h×w; each region is processed by counting its pixels with gray value below 50 and above 200:

cnt_dark(x_ij) = Σ_{p ∈ x_ij} 1[p < 50],    cnt_light(x_ij) = Σ_{p ∈ x_ij} 1[p > 200]

The two brightness statistics of region x_ij are:

dark_ij = cnt_dark(x_ij) / (h×w),    light_ij = cnt_light(x_ij) / (h×w)

That is, the proportion of pixels with gray value below 50 serves as the region's darkness, and the proportion with gray value above 200 as its brightness. After obtaining the darkness and brightness of every region, they are assembled into matrices:

summary_d = | dark_11   dark_12   dark_13 |
            | dark_21   dark_22   dark_23 |
            | dark_31   dark_32   dark_33 |

summary_l = | light_11   light_12   light_13 |
            | light_21   light_22   light_23 |
            | light_31   light_32   light_33 |

A 3×3 two-dimensional Gaussian kernel is used for a weighted sum to obtain score_bright_dark and score_bright_light:

GaussKernel_3×3 = (1/16) × | 1  2  1 |
                           | 2  4  2 |
                           | 1  2  1 |

score_bright_dark = sum(element_wise(GaussKernel_3×3, summary_d))

score_bright_light = sum(element_wise(GaussKernel_3×3, summary_l))

When score_bright_dark < 0.4 and score_bright_light < 0.5 are both satisfied, the face is considered to meet the brightness requirement. See Figure 5 for the brightness computation flow.
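A sketch of the full brightness computation, assuming image dimensions divisible by 3 and the standard normalised 3×3 Gaussian kernel (the source does not reproduce the exact kernel weights, so those are an assumption):

```python
def brightness_scores(gray, kernel=None):
    """Compute (score_bright_dark, score_bright_light) over a 3x3 region grid.

    gray: 2-D list of grayscale values whose dimensions are divisible by 3
    (a simplifying assumption).  Each region's dark ratio (pixels < 50) and
    light ratio (pixels > 200) is weighted by a Gaussian kernel and summed.
    """
    if kernel is None:
        # Standard normalised 3x3 Gaussian approximation (weights sum to 1).
        kernel = [[v / 16 for v in row]
                  for row in [[1, 2, 1], [2, 4, 2], [1, 2, 1]]]
    h, w = len(gray) // 3, len(gray[0]) // 3
    dark = light = 0.0
    for i in range(3):
        for j in range(3):
            region = [gray[r][c]
                      for r in range(i * h, (i + 1) * h)
                      for c in range(j * w, (j + 1) * w)]
            n = len(region)
            dark += kernel[i][j] * sum(1 for p in region if p < 50) / n
            light += kernel[i][j] * sum(1 for p in region if p > 200) / n
    return dark, light
```

Center-weighting by the Gaussian kernel makes the verdict depend most on the middle of the face, where under- or over-exposure hurts recognition the most.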

Judging face angle and face occlusion first requires face keypoint information. A keypoint extraction algorithm extracts 21 face keypoints; each keypoint carries (x, y, score_vis): its horizontal coordinate, vertical coordinate, and visibility, where score_vis ∈ [0, 1] and 1 means the keypoint is fully visible. The 21 keypoints cover the left eye, right eye, nose, mouth corners, and other parts.

From the 21 keypoints and their adjacency a topological graph of keypoints can be built, and from the positional information the face angles angle_h and angle_v in the horizontal and vertical directions can be estimated.

When |angle_h| < 30 and |angle_v| < 45, the face is considered to meet the angle requirement.

Face occlusion is judged from keypoint visibility: count the keypoints with score_vis < 0.3 as cnt_unvis; when cnt_unvis < 9, the face is considered not significantly occluded.
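The angle and occlusion criteria reduce to simple threshold tests, sketched here with the constants given in the text:

```python
def passes_angle(angle_h, angle_v):
    """Angle check: |angle_h| < 30 and |angle_v| < 45 (degrees)."""
    return abs(angle_h) < 30 and abs(angle_v) < 45

def passes_occlusion(keypoints):
    """Occlusion check: fewer than 9 of the 21 keypoints have visibility
    below 0.3.  keypoints: list of (x, y, score_vis) tuples."""
    cnt_unvis = sum(1 for _x, _y, vis in keypoints if vis < 0.3)
    return cnt_unvis < 9
```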

Faces meeting all four quality conditions proceed to the recognition flow; for faces that fail quality review, output module 106 outputs the face position and the reason for the failure.

Faces that pass quality review are sent to face recognition module 104 and aligned using the keypoint information. Face recognition has two steps: face feature extraction and face feature matching.

Face features are extracted with a deep learning model using SE-ResNet50 as the feature extraction network. During training, faces are finely classified by identity ID; at extraction time, the features of the layer before the classification layer are taken as the face features, with feature dimension 256.
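Embeddings taken from the penultimate layer are commonly L2-normalised before matching so that cosine similarity reduces to a dot product. The patent does not state this step, so the following Python sketch is an assumption about a typical pipeline:

```python
import math

def l2_normalize(feature):
    """Scale a feature vector (e.g. the 256-d embedding) to unit length.

    After normalisation, the cosine of two embeddings is simply their dot
    product, which simplifies the matching stage.
    """
    norm = math.sqrt(sum(x * x for x in feature))
    return [x / norm for x in feature]
```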

Face feature matching uses the HNSW vector matching algorithm to find the face record with the greatest similarity; the similarity measure is based on the cosine of the feature vectors, expressed here as a cosine distance:

similar_{a,b} = 1 − (a · b) / (‖a‖ ‖b‖)

where a and b are face feature vectors; the closer similar_{a,b} is to 0, the more similar the face features.
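A minimal Python sketch of this distance. The 1 − cosine form is inferred from the statement that values closer to 0 mean more similar features (the original formula image is not reproduced in the source):

```python
import math

def cosine_distance(a, b):
    """similar_{a,b} = 1 - cos(a, b); 0 for identical directions, 1 for
    orthogonal vectors.  a, b: equal-length numeric sequences."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)
```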

Before the HNSW vector matching algorithm can be used, the multi-layer graph structure used by HNSW queries must be built from the face feature library. This embodiment may use a three-layer graph structure, ordered from shallow to deep as layer_1, layer_2, layer_3.

Because the face library in the present invention contains expanded face features added during operation, which differ from normally enrolled features, the rules for building the three-layer graph structure differ.

When a face feature hits the face library, online expansion of the library can proceed if the following conditions hold: 1. similar_{a,b} < 0.1; 2. cnt_ext < 5, where cnt_ext is the number of records already expanded from the current feature. When both hold, the query feature is written into the face library and marked as pending, without yet updating HNSW's multi-layer graph structure. The identity is then confirmed by the front-end user; once the recognition is confirmed correct, the feature is marked as confirmed and inserted into the multi-layer graph according to the following probability rule:
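The online-expansion gate described above can be sketched as a small predicate; the parameter names t_ext and max_ext are illustrative, with defaults taken from the text:

```python
def can_expand(similarity, cnt_ext, t_ext=0.1, max_ext=5):
    """Decide whether a matched query feature may be written to the face
    library as a pending expanded record.

    similarity: cosine distance of the query to its matched record
    (closer to 0 means more similar).
    cnt_ext: number of records already expanded from the matched feature.
    """
    return similarity < t_ext and cnt_ext < max_ext
```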

There are 3 graph layers in total, ordered from shallow to deep as layer_1, layer_2, layer_3, with corresponding insertion probabilities p_1, p_2, p_3 satisfying:

p_1 < p_2 < p_3,    p_1 + p_2 + p_3 = 1

Setting p_1 = 0.1, p_2 = 0.3, p_3 = 0.6, features are inserted into the multi-layer graph structure according to these three probabilities.
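Layer selection by these probabilities can be sketched as cumulative sampling; the rng parameter is an illustrative hook that makes the sketch testable:

```python
import random

def sample_layer(p=(0.1, 0.3, 0.6), rng=random):
    """Pick the insertion layer (1-based, shallow to deep) according to the
    probabilities p1 = 0.1, p2 = 0.3, p3 = 0.6 from the text."""
    u = rng.random()
    cumulative = 0.0
    for layer, prob in enumerate(p, start=1):
        cumulative += prob
        if u < cumulative:
            return layer
    return len(p)  # guard against floating-point rounding at the boundary
```

With these weights most confirmed features land only in the deepest layer, keeping the shallow entry layers sparse for fast search.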

Data output module 106 returns the detection and recognition results to the front-end application via requests, a message queue, or similar means.

In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed.

Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network nodes. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments of the present invention.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated units may be implemented in hardware or as software functional units.

The present invention has many specific applications, and the above are only its preferred embodiments. It should be noted that those skilled in the art can make several improvements without departing from the principles of the present invention, and such improvements should also be regarded as falling within the protection scope of the present invention.

Claims (20)

1. A real-time online optimization-based face recognition method is characterized by comprising the following steps:
acquiring image data to be identified, and analyzing the image data to obtain input data;
carrying out face detection on the input data based on a face detection network, wherein the application type of the input data is judged before each detection, and a corresponding inference branch is selected based on the application type to carry out face detection so as to obtain face region data in the input data;
checking the face quality of the face region data at least according to face definition, face brightness, face angle and face visibility;
and performing face feature extraction and HNSW-based face matching on the face region data which passes face quality audit so as to realize face recognition.
2. The method of claim 1, further comprising adding the face features meeting the preset warehousing conditions to a face feature base, and after obtaining the face features and confirming warehousing, adding the face features to a multi-layer graph data structure of the HNSW according to a probability rule for expanding the face features.
3. The method of claim 2, wherein the preset warehousing conditions comprise: letting the similarity be similarity, the similarity threshold be t_sim, and the expanded similarity threshold be t_ext with t_ext > t_sim, and letting the number of records already expanded from the current feature be cnt_ext; when similarity > t_ext and cnt_ext < 5, the face feature is stored in the library and marked as pending.
4. The method of claim 2 or 3, wherein the probability rule for expanding the face features is: let there be M graph layers in total, ordered from shallow to deep as layer_1, layer_2, …, layer_M, with corresponding insertion probabilities p_1, p_2, …, p_M satisfying:

p_1 < p_2 < … < p_M,    p_1 + p_2 + … + p_M = 1
5. The method of claim 1, wherein the face detection comprises:
judging the application type of the input data;
detecting faces of different sizes on feature maps of different scales, the scales being divided into three levels of small, medium, and large, each level corresponding to a different inference branch of the face detection; and selectively executing the corresponding inference branches of the face detection network based on the application type, running the small and medium branches in scenes where faces are far away, and running the medium and large branches in scenes where faces are close, to obtain first face region data.
6. The method of claim 5, further comprising: and screening the first face region data based on the confidence coefficient to obtain second face region data.
7. The method of claim 1, wherein the face region data is divided into a plurality of blocks and gray values are counted for each block; the darkness and the brightness of the face region are calculated from the gray values, and the brightness of the face region is judged according to these two indicators.
8. The method of claim 1, wherein the face key points are extracted by a face key point extraction algorithm, and the key point information includes key point horizontal direction coordinates, key point vertical direction coordinates, and key point visibility; acquiring human face angles in the horizontal direction and the vertical direction based on the key point information; or counting the number of the visibility of the key points meeting the preset value, and auditing the face quality through the number of the visibility of the key points.
9. The method of claim 8, wherein at least 21 face key points are extracted.
10. The method of claim 1, wherein a data structure for HNSW graph search is constructed based on a face feature library; and searching the face features with the maximum similarity by adopting an HNSW vector matching algorithm.
11. A real-time, online-optimizable face recognition system, comprising:
the input module is used for acquiring image data to be identified and analyzing the image data to obtain input data;
the face detection module is used for carrying out face detection on the input data based on a face detection network, wherein the application type of the input data is judged before each detection, and a corresponding inference branch is selected based on the application type to carry out face detection so as to obtain face region data in the input data;
the face quality auditing module is used for auditing the face quality of the face area data at least according to face definition, face brightness, face angle and face visibility;
and the face recognition module is used for extracting the face features of the face region data which passes the face quality audit and carrying out face matching based on HNSW so as to realize face recognition.
12. The system of claim 11, further comprising a face library online expansion module, configured to add face features meeting preset warehousing conditions to the face feature base library, and after obtaining the face features and confirming warehousing, add the face features to the multi-layer graph data structure of HNSW according to probability rules of the expanded face features.
13. The system of claim 12, wherein the preset warehousing conditions comprise: letting the similarity be similarity, the similarity threshold be t_sim, and the expanded similarity threshold be t_ext with t_ext > t_sim, and letting the number of records already expanded from the current feature be cnt_ext; when similarity > t_ext and cnt_ext < 5, the face feature is stored in the library and marked as pending.
14. The system of claim 12 or 13, wherein the probability rule for expanding the face features is: let there be M graph layers in total, ordered from shallow to deep as layer_1, layer_2, …, layer_M, with corresponding insertion probabilities p_1, p_2, …, p_M satisfying:

p_1 < p_2 < … < p_M,    p_1 + p_2 + … + p_M = 1
15. The system of claim 11, wherein the face detection module is further configured to: judge the application type of the input data;
detect faces of different sizes on feature maps of different scales, the scales being divided into three levels of small, medium, and large, each level corresponding to a different inference branch of the face detection in an anchor-box manner;
and, based on the application type, selectively execute the corresponding branches of the face detection network, running the small and medium branches in scenes where faces are far away, and running the medium and large branches in scenes where faces are close, to obtain first face region data.
16. The system of claim 15, wherein the face detection module is further configured to: and screening the first face region data based on the confidence coefficient to obtain second face region data.
17. The system of claim 11, wherein the face quality audit module is further configured to: dividing the face region data into a plurality of blocks, and respectively counting gray values; and calculating the gray-dark degree and the brightness of the face area according to the gray value, and judging the brightness of the face area according to the two indexes.
18. The system of claim 11, wherein the face quality audit module is further to: extracting face key points by a face key point extraction algorithm, wherein key point information comprises key point horizontal direction coordinates, key point vertical direction coordinates and key point visibility; acquiring human face angles in the horizontal direction and the vertical direction based on the key point information; or counting the number of the visibility of the key points meeting the preset value, and auditing the face quality through the number of the visibility of the key points.
19. The system of claim 11, wherein the face recognition module is further configured to: constructing a data structure for HNSW graph search based on the face feature library; and searching the face features with the maximum similarity by adopting an HNSW vector matching algorithm.
20. A computer storage medium storing a computer program which, when executed, implements the method of any one of claims 1 to 10.
CN202010812277.XA 2020-08-13 2020-08-13 Real-time and online optimized face recognition system and method Active CN112001280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010812277.XA CN112001280B (en) 2020-08-13 2020-08-13 Real-time and online optimized face recognition system and method


Publications (2)

Publication Number Publication Date
CN112001280A true CN112001280A (en) 2020-11-27
CN112001280B CN112001280B (en) 2024-07-09

Family

ID=73463116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010812277.XA Active CN112001280B (en) 2020-08-13 2020-08-13 Real-time and online optimized face recognition system and method

Country Status (1)

Country Link
CN (1) CN112001280B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743308A (en) * 2021-09-06 2021-12-03 汇纳科技股份有限公司 Face recognition method, device, storage medium and system based on feature quality
WO2023040480A1 (en) * 2021-09-15 2023-03-23 上海商汤智能科技有限公司 Image detection method and apparatus, electronic device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103605993A (en) * 2013-12-04 2014-02-26 康江科技(北京)有限责任公司 Image-to-video face identification method based on distinguish analysis oriented to scenes
CN107491767A (en) * 2017-08-31 2017-12-19 广州云从信息科技有限公司 End to end without constraint face critical point detection method
CN108090406A (en) * 2016-11-23 2018-05-29 浙江宇视科技有限公司 Face identification method and system
CN108960209A (en) * 2018-08-09 2018-12-07 腾讯科技(深圳)有限公司 Personal identification method, device and computer readable storage medium
CN110751043A (en) * 2019-09-19 2020-02-04 平安科技(深圳)有限公司 Face recognition method and device based on face visibility and storage medium
CN110826519A (en) * 2019-11-14 2020-02-21 深圳市华付信息技术有限公司 Face occlusion detection method and device, computer equipment and storage medium
CN110866500A (en) * 2019-11-19 2020-03-06 上海眼控科技股份有限公司 Face detection alignment system, method, device, platform, mobile terminal and storage medium
CN111241345A (en) * 2020-02-18 2020-06-05 腾讯科技(深圳)有限公司 A video retrieval method, device, electronic device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAN XIA ET AL.: ""Face Recognition and Application of Film and Television Actors Based on Dlib"", 《IEEE》 *


Also Published As

Publication number Publication date
CN112001280B (en) 2024-07-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20250124

Address after: 210000 6th floor, block B, 50 Andemen street, Yuhuatai District, Nanjing City, Jiangsu Province

Patentee after: WHALE CLOUD TECHNOLOGY Co.,Ltd.

Country or region after: China

Patentee after: Haojing Intelligent Technology Co.,Ltd.

Address before: 210000 6th floor, block B, 50 Andemen street, Yuhuatai District, Nanjing City, Jiangsu Province

Patentee before: WHALE CLOUD TECHNOLOGY Co.,Ltd.

Country or region before: China
