
CN110096958B - Method and device for recognizing front face image and computing equipment - Google Patents


Info

Publication number
CN110096958B
CN110096958B (application CN201910239957.4A)
Authority
CN
China
Prior art keywords
face
image
detected
coefficient
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910239957.4A
Other languages
Chinese (zh)
Other versions
CN110096958A (en)
Inventor
马啸
王宏
汪显方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN201910239957.4A priority Critical patent/CN110096958B/en
Publication of CN110096958A publication Critical patent/CN110096958A/en
Application granted granted Critical
Publication of CN110096958B publication Critical patent/CN110096958B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the invention relate to the technical field of face recognition, and in particular disclose a method, an apparatus, and a computing device for recognizing a frontal face image. The method comprises the following steps: acquiring an image to be detected; extracting pose parameters of the face in the image to be detected; judging, based on the pose parameters, whether the face in the image to be detected meets a preset frontal-face condition; if so, determining that the image to be detected is a frontal face image; if not, determining that the image to be detected is a non-frontal face image. With this scheme, whether an image to be detected is a frontal face image can be judged from the pose parameters of the face it contains, thereby realizing recognition of frontal face images.

Description

Method and device for recognizing front face image and computing equipment
Technical Field
The embodiments of the invention relate to the technical field of face recognition, and in particular to a method, an apparatus, and a computing device for recognizing a frontal face image.
Background
In the video monitoring system, after the image processing device obtains the image to be detected, facial features can be extracted from the image to be detected, and identity recognition and verification can be performed through the facial features.
During identity recognition and verification, when the face in the image to be detected is tilted or turned, face matching is prone to deviation, causing errors in identity recognition and verification and low recognition accuracy. To improve the accuracy of face recognition, frontal face images must be screened out from the images to be detected and used for identity recognition and verification.
However, the inventors found, in the course of implementing the embodiments of the present invention, that effective methods for recognizing frontal face images are currently lacking.
Disclosure of Invention
In view of the foregoing, embodiments of the present invention are directed to a method, apparatus, and computing device for recognizing a frontal image that overcome or at least partially solve the foregoing problems.
To solve the above technical problems, one technical solution adopted by the embodiments of the invention is as follows: a method of recognizing a frontal face image is provided, comprising: acquiring an image to be detected; extracting pose parameters of the face in the image to be detected; judging, based on the pose parameters, whether the face in the image to be detected meets a preset frontal-face condition; if so, determining that the image to be detected is a frontal face image; if not, determining that the image to be detected is a non-frontal face image.
Optionally, the pose parameters include a face deviation angle, a face-turning coefficient, and a face-lifting coefficient; extracting the pose parameters of the face in the image to be detected includes: identifying feature information of the face parts in the image to be detected; and calculating the face deviation angle, the face-turning coefficient, and the face-lifting coefficient of the face according to the feature information of the face parts.
Optionally, calculating the face deviation angle of the face in the image to be detected according to the feature information of the face parts includes: constructing the image central axis of the image to be detected; constructing the facial central axis of the face in the image to be detected according to the feature information of the face parts; and calculating the angle between the facial central axis and the image central axis, taking this angle as the face deviation angle.
Optionally, determining the face-turning coefficient of the face according to the feature information of the face parts includes: constructing the facial central axis of the face in the image to be detected according to the feature information of the face parts; dividing the face in the image to be detected into a left face region and a right face region based on the facial central axis; and determining the face-turning coefficient by combining the left face region and the right face region.
Optionally, a left width of the left face region and a right width of the right face region are identified, and the face-turning coefficient is calculated from the left width and the right width.
Optionally, determining the face-turning coefficient by combining the left face region and the right face region includes: acquiring the left width of a given face part in the left face region and the right width of the same face part in the right face region; and calculating the face-turning coefficient from the left width and the right width.
Optionally, the face-turning coefficient is calculated from the left width and the right width as:

C_p = (E_r - E_l) / (E_l + E_r)

where C_p is the face-turning coefficient, E_l is the left width, and E_r is the right width.
Optionally, calculating the face-lifting coefficient of the face according to the feature information of the face parts includes: determining a first distance between a first part and a second part; determining a second distance between the second part and a third part, wherein the first, second, and third parts all belong to the face parts, the first part is located above the second part, and the second part is located above the third part; and calculating the face-lifting coefficient from the first distance and the second distance.
Optionally, the face-lifting coefficient is calculated from the first distance and the second distance as:

C_r = H_1 / H_2

where C_r is the face-lifting coefficient, H_1 is the first distance, and H_2 is the second distance.
Optionally, the first part is eyes, the second part is a nose, and the third part is a mouth or a lower jaw.
Optionally, the feature information of the face parts includes position information and shape information of the face parts.
Optionally, judging, based on the pose parameters, whether the face in the image to be detected meets the preset frontal-face condition includes: judging whether the face deviation angle is within a preset face deviation angle range, whether the face-turning coefficient is within a preset face-turning coefficient range, and whether the face-lifting coefficient is within a preset face-lifting coefficient range; if all three lie within their preset ranges, determining that the face in the image to be detected meets the preset frontal-face condition, and otherwise determining that it does not.
To solve the above technical problems, another technical solution adopted by the embodiments of the invention is as follows: an apparatus for recognizing a frontal face image is provided, including: an acquisition module, for acquiring an image to be detected; an extraction module, for extracting the pose parameters of the face in the image to be detected; a judgment module, for judging, based on the pose parameters, whether the face in the image to be detected meets the preset frontal-face condition; a first determination module, for determining that the image to be detected is a frontal face image when the face meets the preset frontal-face condition; and a second determination module, for determining that the image to be detected is a non-frontal face image when the face does not meet the preset frontal-face condition.
To solve the above technical problems, a further technical solution adopted by the embodiments of the invention is as follows: a computing device is provided, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus; the memory stores at least one executable instruction, which causes the processor to perform the operations corresponding to the above method of recognizing a frontal face image.
To solve the above technical problems, yet another technical solution adopted by the embodiments of the invention is as follows: a computer storage medium is provided, in which at least one executable instruction is stored, the executable instruction causing a processor to perform the operations corresponding to the above method of recognizing a frontal face image.
The beneficial effects of the embodiments of the invention are as follows: in contrast to the prior art, the embodiments of the invention extract the pose parameters of the face in the image to be detected and judge whether the face meets the preset frontal-face condition; if so, the image to be detected is determined to be a frontal face image.
The foregoing description is only an overview of the technical solutions of the present invention. So that the technical means of the present invention can be understood more clearly and implemented according to the contents of the specification, and so that the above and other objects, features, and advantages of the present invention can be more readily understood, specific embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flowchart of an embodiment of a method of recognizing a frontal face image according to the present invention;
FIG. 2 is a flowchart of pose parameter extraction in an embodiment of a method of recognizing a frontal face image according to the present invention;
FIG. 3 is a flowchart of face deviation angle calculation in an embodiment of a method of recognizing a frontal face image according to the present invention;
FIG. 4 is a schematic illustration of the facial central axis in an embodiment of a method of recognizing a frontal face image according to the present invention;
FIG. 5 is a flowchart of face-turning coefficient calculation in an embodiment of a method of recognizing a frontal face image according to the present invention;
FIG. 6 is a flowchart of face-turning coefficient calculation in another embodiment of a method of recognizing a frontal face image according to the present invention;
FIG. 7 is a flowchart of face-lifting coefficient calculation in an embodiment of a method of recognizing a frontal face image according to the present invention;
FIG. 8 is a flowchart of the preset frontal-face condition judgment in an embodiment of a method of recognizing a frontal face image according to the present invention;
FIG. 9 is a functional block diagram of an embodiment of an apparatus for recognizing a frontal face image according to the present invention;
FIG. 10 is a schematic diagram of an embodiment of a computing device of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a flowchart of an embodiment of a method for recognizing a front face image according to the present invention, as shown in fig. 1, the method includes the following steps:
step S1: and acquiring an image to be detected.
The image to be detected refers to an image containing a face, and the face may be a face of a human body or a face of other animals, for example: cats, dogs, etc.
Step S2: and extracting the gesture parameters of the face in the image to be detected.
The pose parameters are parameters describing the pose of the face. The face pose includes a tilted pose, in which the face leans to the left or right; a turned pose, in which the face turns to the left or right; and a raised or lowered pose, in which the face tilts up or down. In some embodiments, the pose parameter corresponding to the tilted pose is the face deviation angle, the parameter corresponding to the turned pose is the face-turning coefficient, and the parameter corresponding to the raised pose is the face-lifting coefficient; the pose parameters thus include the face deviation angle, the face-turning coefficient, and the face-lifting coefficient.
Specifically, as shown in fig. 2, extracting pose parameters of a face in an image to be detected includes the following steps:
step S21: feature information of a face part in the image to be inspected is identified.
The face parts are the parts that make up the face, for example: eyes, ears, nose, cheeks, and lower jaw. The feature information of the face parts includes the position information and shape information of each face part.
In some embodiments, the face parts may be recognized by a preset neural recognition network. Specifically, after receiving the image to be detected, the network detects and locates predefined facial key points in the image through a facial key-point localization model, and determines the face parts from these key points. The network is trained in advance on a large number of images to guarantee recognition accuracy.
In other embodiments, before the face parts are recognized by the neural recognition network, the image to be detected may be denoised to reduce noise interference and improve recognition accuracy; the denoising processing includes light compensation, grey-level transformation, histogram equalization, filtering, and the like.
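The grey-level transformation step mentioned above can be illustrated with histogram equalization, a standard contrast-normalisation pass. The sketch below is illustrative only and is not taken from the patent; it operates on a flat list of 8-bit grey values.

```python
def equalize_histogram(pixels, levels=256):
    """Histogram equalization: remap grey values so their cumulative
    distribution is roughly uniform. `pixels` is a flat list of ints
    in [0, levels)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the grey values.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-empty bin
    n = len(pixels)
    if n == cdf_min:  # constant image: nothing to equalize
        return [0] * n
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# Four distinct grey values are spread out to span the full range.
print(equalize_histogram([0, 64, 128, 192]))  # [0, 85, 170, 255]
```

In a real pipeline this would run on the image's luminance channel before the key-point model sees it.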
Step S22: and calculating the face deviation angle, the face rotation coefficient and the face lifting coefficient of the face according to the characteristic information of the face part.
Specifically, as shown in fig. 3, calculating the face deviation angle of the face based on the feature information of the face part includes the steps of:
step S201: and constructing an image central axis of the image to be detected.
The image central axis refers to an axis which divides the image to be inspected left and right into a left image and a right image with the same area.
Step S202: and constructing the central axis of the face in the image to be detected according to the characteristic information of the face part.
The central axis of the face is an axis that symmetrically divides the face part, as shown by the L line in fig. 4.
Step S203: and calculating an included angle between the central axis of the face and the central axis of the image, and taking the included angle as a face deviation angle.
When the face is a positive face, the face does not deflect left and right, the central axis of the face coincides with the central axis of the image, and the included angle between the central axis of the face and the central axis of the image is zero degrees, but rather, whether left and right head deflection occurs can be judged through the included angle between the central axis of the face and the central axis of the image.
Of course, in some embodiments, the direction of deflection may also be determined from the direction of the facial central axis relative to the image central axis: when the facial central axis is deflected to the left relative to the image central axis, the face is determined to be deflected to the left; when it is deflected to the right, the face is determined to be deflected to the right. In real-time face recognition scenarios, such as customs face checks or face payment, an adjustment prompt can be issued according to the deflection direction and the face deviation angle, helping the user quickly correct the face pose.
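Steps S201-S203 can be sketched as follows: the face deviation angle is the signed angle between the facial central axis, given by two landmark points, and the vertical image central axis. The landmark choice (eye mid-point and chin tip) and the sign convention are illustrative assumptions, not specified by the patent.

```python
import math

def face_deviation_angle(axis_top, axis_bottom):
    """Signed angle, in degrees, between the facial central axis (the
    line through `axis_top`, e.g. the mid-point between the eyes, and
    `axis_bottom`, e.g. the chin tip) and the vertical image central
    axis. 0 means the two axes are parallel, i.e. no left/right tilt."""
    dx = axis_bottom[0] - axis_top[0]
    dy = axis_bottom[1] - axis_top[1]
    # atan2(dx, dy) measures the deviation from the vertical direction;
    # the sign encodes the tilt direction in image coordinates.
    return math.degrees(math.atan2(dx, dy))

print(face_deviation_angle((100, 40), (100, 160)))  # 0.0 -> upright face
print(face_deviation_angle((100, 40), (120, 160)))  # ~9.46 -> tilted head
```

The preset frontal-face check would then compare this angle against the preset face deviation angle range.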
As shown in fig. 5, calculating the face-turning coefficient of the face based on the feature information of the face parts includes the following steps:
step S210: constructing a facial central axis of a face in the image to be detected according to the characteristic information of the face part;
step S211: based on the central axis of the face, the face part in the image to be detected is divided into a left face area and a right face area.
Step S212: and combining the left face area and the right face area to determine the face turning coefficient.
When the image is a frontal face image, the left and right face regions are symmetric about the facial central axis; when it is not, the two regions are asymmetric and change by different amounts. The face-turning coefficient can therefore be determined by comparing the detected changes in the left and right face regions. For example, determine the left width of the left face region and the right width of the right face region, and calculate the turning coefficient from them using the formula

C_p = (E_r - E_l) / (E_l + E_r)

where C_p is the face-turning coefficient, E_l is the left width, and E_r is the right width. Alternatively, determine the left area of the left face region and the right area of the right face region, and calculate the turning coefficient from the two areas.
When the face is turned, the face in the image is not frontal, and the same face part appears differently in the left and right face regions; the turning coefficient can therefore be calculated from how the same face part appears in the two regions. Specifically, as shown in fig. 6, determining the turning coefficient includes the following steps:
step S2121: the left width of the same face part in the left face region and the right width in the right face region are acquired.
The face parts include the eyes, nose, and mouth. When the face part is the eyes, the left width E_l and the right width E_r denote the width of the left eye in the left face region and the width of the right eye in the right face region, respectively, as shown in fig. 4. The width of the left eye is the distance between the point of the left eye farthest from the facial central axis and the point of the left eye closest to it; likewise, the width of the right eye is the distance between the point of the right eye farthest from the facial central axis and the point of the right eye closest to it. When the face part is the nose, the left and right widths denote the width of the nose in the left and right face regions, respectively. When the face part is the mouth, the left and right widths denote the width of the mouth corners in the left and right face regions.
Step S2122: face coefficients are calculated from the left width and the right width.
In this step, the calculation formula for calculating the face coefficient is:
Figure BDA0002009363030000072
wherein C is p For face-transfer coefficients, E l For left width E r Right width.
From the above formula: when the image to be detected is a frontal face image, the left width of a face part in the left face region equals its right width in the right face region, and the face-turning coefficient is 0. When the face turns left or right, the widths of the same face part in the two regions change, and the coefficient computed by the formula is non-zero. For example, when the face part is the eyes and the face turns left, both the left width and the right width decrease, but the left width remains larger than the right width, so the turning coefficient is negative and non-zero; when the face turns right, the left width is smaller than the right width, and the coefficient is positive and non-zero. Thus, once the turning coefficient is calculated, whether the face is turned can be determined by examining it, and the direction of the turn can be determined from its sign.
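The width comparison above can be sketched as follows. The original formula is rendered as an image in the source; the expression C_p = (E_r - E_l) / (E_l + E_r) used here is a reconstruction chosen to match the sign behaviour the text describes: zero for a frontal face, negative when the face turns left (left half wider), positive when it turns right.

```python
def turning_coefficient(left_width, right_width):
    """Face-turning coefficient C_p = (E_r - E_l) / (E_l + E_r).
    0 for a frontal face; the sign gives the turn direction under the
    convention described in the text (negative = turned left)."""
    return (right_width - left_width) / (left_width + right_width)

print(turning_coefficient(30.0, 30.0))  # 0.0 -> frontal
print(turning_coefficient(32.0, 24.0))  # < 0 -> turned left
print(turning_coefficient(24.0, 32.0))  # > 0 -> turned right
```

Because the denominator normalises by the total width, the coefficient is scale-invariant: it does not depend on how large the face is in the image.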
As shown in fig. 7, calculating a face lifting coefficient of a face in an image to be detected according to feature information of the face part includes the following steps:
step S2131: a first distance between the first location and the second location is determined.
The first and second parts are both located on the face and belong to the face parts. For example, the first part is the eyes and the second part is the nose; the first distance is then the distance, measured along the facial central axis, from the line connecting the left and right eyes to the nose tip, shown as H_1 in fig. 4.
Step S2132: determine a second distance between the second part and the third part.
The third part is also located on the face and belongs to the face parts; the first part is above the second, and the second above the third. When the first part is the eyes and the second part is the nose, the third part may be the mouth or the lower jaw. The second distance is the distance along the facial central axis from the nose tip to the lowest point of the lower jaw, shown as H_2 in fig. 4.
Step S2133: and calculating a face lifting coefficient according to the first distance and the second distance.
In this step, the face-lifting coefficient is calculated as:

C_r = H_1 / H_2

where C_r is the face-lifting coefficient, H_1 is the first distance, and H_2 is the second distance.
For the same face, when the image to be detected is a frontal face image, the face-lifting coefficient takes a fixed value. When the head is raised or lowered, both the first and second distances change. For example, when the head is raised, both distances decrease, but the first distance decreases by a larger proportion than the second, so the face-lifting coefficient becomes smaller than its frontal-face value; when the head is lowered, the first distance decreases by a smaller proportion than the second, so the coefficient becomes larger than its frontal-face value. Thus, once the face-lifting coefficient is calculated, whether the head is raised or lowered can be determined from it.
In other embodiments, the first, second, and third parts may be three detection points on the face whose connecting line is parallel to the facial central axis, for example: the first part is a detection point on the forehead, and the second and third parts are detection points on the cheek.
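Steps S2131-S2133 can be sketched as below. The eye-to-nose and nose-to-chin distances are one possible choice of landmarks; as the text notes, the frontal-face value of the ratio depends on the face and on the chosen parts, so in practice it is compared against a calibrated range rather than a single constant.

```python
def lifting_coefficient(first_distance, second_distance):
    """Face-lifting coefficient C_r = H1 / H2, where H1 is the distance
    from the first part (e.g. the eye line) to the second (e.g. the
    nose tip) and H2 from the second part to the third (e.g. the chin),
    both measured along the facial central axis."""
    return first_distance / second_distance

baseline = lifting_coefficient(35.0, 70.0)  # frontal-face value for this face
raised = lifting_coefficient(24.0, 60.0)    # head raised: H1 shrank proportionally more
print(baseline, raised, raised < baseline)
```

A raised head yields a coefficient below the frontal baseline and a lowered head one above it, matching the behaviour described in the text.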
Step S3: and (5) judging whether the face in the image to be detected meets the preset face condition or not by combining the gesture parameters, if so, executing the step S4, and if not, executing the step S5.
Specifically, as shown in fig. 8, judging whether the face in the image to be detected meets the preset frontal-face condition includes the following steps:
Step S31: judge whether the face deviation angle is within the preset face deviation angle range, whether the face-turning coefficient is within the preset face-turning coefficient range, and whether the face-lifting coefficient is within the preset face-lifting coefficient range; if all are, execute step S32, otherwise execute step S33.
When the face deviation angle, the face-turning coefficient, and the face-lifting coefficient all lie within their corresponding ranges, the face in the image to be detected is determined to meet the preset frontal-face condition; if any one of them lies outside its corresponding range, the face is determined not to meet it.
Step S32: and determining that the face in the image to be detected meets the preset face condition.
And determining that the face in the image to be detected meets the preset face condition, and determining that the image to be detected is a face image.
Step S33: and determining that the face in the image to be detected does not meet the preset face condition.
And when the preset face condition is not met, the image to be detected is not the face image.
And S4, determining the image to be detected as a front face image.
Step S5: and determining the image to be detected as a non-positive face image.
It is worth noting that when the facial organs selected as the first, second, and third parts differ, or the selected detection points differ, the preset frontal-face lifting coefficient range changes correspondingly.
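Putting the three checks of step S31 together, the frontal-face decision reduces to range tests on the three pose parameters. The threshold values below are illustrative placeholders; the patent specifies only that each parameter must fall within its preset range, not what the ranges are.

```python
def is_frontal_face(deviation_angle, turning_coeff, lifting_coeff,
                    angle_range=(-5.0, 5.0),
                    turning_range=(-0.1, 0.1),
                    lifting_range=(0.4, 0.6)):
    """Return True only when every pose parameter lies inside its
    preset range (steps S31/S32); any violation fails the check (S33).
    The default ranges are placeholders, not values from the patent."""
    return (angle_range[0] <= deviation_angle <= angle_range[1]
            and turning_range[0] <= turning_coeff <= turning_range[1]
            and lifting_range[0] <= lifting_coeff <= lifting_range[1])

print(is_frontal_face(1.2, 0.03, 0.52))   # True  -> frontal face image
print(is_frontal_face(12.0, 0.03, 0.52))  # False -> deviation angle too large
```

A single out-of-range parameter is enough to reject the image, which matches the all-or-nothing condition described above.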
In the embodiments of the invention, the pose parameters of the face in the image to be detected are identified, and whether the face meets the preset frontal-face condition is judged from these parameters; if so, the image to be detected is determined to be a frontal face image. Whether an image to be detected is a frontal face image is thus judged from the pose parameters.
Fig. 9 is a functional block diagram of an apparatus for recognizing a frontal face image according to the present invention. As shown in fig. 9, the apparatus comprises: an acquisition module 801, an extraction module 802, a judgment module 803, a first determination module 804, and a second determination module 805. The acquisition module 801 is configured to acquire an image to be detected. The extraction module 802 is configured to extract the pose parameters of the face in the image to be detected. The judgment module 803 is configured to judge, based on the pose parameters, whether the face in the image to be detected meets the preset frontal-face condition. The first determination module 804 is configured to determine that the image to be detected is a frontal face image when the face meets the preset frontal-face condition. The second determination module 805 is configured to determine that the image to be detected is a non-frontal face image when the face does not meet the preset frontal-face condition.
The pose parameters include the face deviation angle, the face-turning coefficient, and the face-lifting coefficient. The extraction module 802 includes a recognition unit 8021 and a calculation unit 8022. The recognition unit 8021 is configured to recognize feature information of the face parts in the image to be detected, the feature information including position information and shape information of the face parts. The calculation unit 8022 is configured to calculate the face deviation angle, the face-turning coefficient, and the face-lifting coefficient according to the feature information of the face parts.
In some embodiments, the calculation unit 8022 is configured to calculate the face deviation angle of the face in the image to be detected according to the feature information of the face parts, including: constructing the image central axis of the image to be detected; constructing the facial central axis of the face in the image to be detected according to the feature information of the face parts; and calculating the angle between the facial central axis and the image central axis, taking this angle as the face deviation angle.
In some embodiments, the calculating unit 8022 calculates the face turning coefficient of the face according to the feature information of the face part by: constructing a facial central axis of the face in the image to be detected according to the feature information of the face part; dividing the face part in the image to be detected into a left face region and a right face region based on the facial central axis; and determining the face turning coefficient by combining the left face region and the right face region.
Specifically, determining the face turning coefficient by combining the left face region and the right face region further includes: acquiring the left width of a given face part in the left face region and the right width of the same face part in the right face region; and calculating the face turning coefficient from the left width and the right width.
The face turning coefficient is calculated from the left width and the right width as:
C_p = E_l / E_r
where C_p is the face turning coefficient, E_l is the left width, and E_r is the right width.
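The ratio above can be sketched as a one-line helper; the widths would come from the landmark-based left/right split described earlier, and the function name and guard are assumptions for illustration:

```python
def face_turn_coefficient(left_width, right_width):
    """C_p = E_l / E_r: ratio of the width of the same facial part
    measured in the left and right face regions. A frontal face
    yields a value close to 1.0; turning the head skews the ratio."""
    if right_width <= 0:
        raise ValueError("right width must be positive")
    return left_width / right_width
```

For example, equal half-widths give exactly 1.0, while a half-turned face might give 0.5.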
The calculating unit 8022 calculates the face lifting coefficient of the face according to the feature information of the face part by: determining a first distance between a first part and a second part; determining a second distance between the second part and a third part, wherein the first part, the second part, and the third part are all located on the face and belong to the face part, the first part is located above the second part, and the second part is located above the third part; and calculating the face lifting coefficient from the first distance and the second distance.
The face lifting coefficient is calculated from the first distance and the second distance as:
C_r = H_1 / H_2
where C_r is the face lifting coefficient, H_1 is the first distance, and H_2 is the second distance.
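A minimal sketch of the face lifting coefficient, assuming the eyes, nose, and mouth y-coordinates play the roles of the first, second, and third parts (as the next paragraph suggests); using only y-coordinates is a simplifying assumption:

```python
def face_lift_coefficient(eye_y, nose_y, mouth_y):
    """C_r = H_1 / H_2, where H_1 is the eye-to-nose distance and H_2
    the nose-to-mouth distance (y grows downward in image coordinates).
    Raising or lowering the head changes this ratio."""
    h1 = nose_y - eye_y    # first distance
    h2 = mouth_y - nose_y  # second distance
    if h2 <= 0:
        raise ValueError("second distance must be positive")
    return h1 / h2
```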
Here, the first part is the eyes, the second part is the nose, and the third part is the mouth or the lower jaw.
The judging module 803 includes a judging unit 8031 and a determining unit 8032. The judging unit 8031 is configured to judge whether the face deviation angle is within a preset face deviation angle range, whether the face turning coefficient is within a preset face turning coefficient range, and whether the face lifting coefficient is within a preset face lifting coefficient range. The determining unit 8032 is configured to determine that the face in the image to be detected meets the preset front face condition when the face deviation angle, the face turning coefficient, and the face lifting coefficient all fall within their respective preset ranges.
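The range checks performed by the judging module can be sketched as follows; the concrete thresholds are illustrative assumptions, not values specified by the patent:

```python
def is_front_face(deviation_angle, turn_coef, lift_coef,
                  max_deviation=10.0,
                  turn_range=(0.8, 1.25),
                  lift_range=(0.8, 1.25)):
    """Front-face decision: the image passes only if all three pose
    parameters fall within their preset ranges."""
    return (abs(deviation_angle) <= max_deviation
            and turn_range[0] <= turn_coef <= turn_range[1]
            and lift_range[0] <= lift_coef <= lift_range[1])
```

A face that fails any one of the three checks is classified as a non-front face image.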
In the embodiment of the invention, the extraction module 802 extracts the pose parameters of the face in the image to be detected, the judging module 803 judges from the pose parameters whether the face meets the preset front face condition, and if so, the first determining module 804 determines that the image to be detected is a front face image. The apparatus thus decides whether the image to be detected is a front face image according to the pose parameters of the face it contains.
Embodiments of the present application provide a non-volatile computer storage medium having stored thereon at least one executable instruction capable of performing the method for recognizing a front face image according to any of the above-described method embodiments.
FIG. 10 is a schematic diagram of a computing device according to an embodiment of the present invention; the embodiment does not limit the specific implementation of the computing device.
As shown in fig. 10, the computing device may include: a processor 902, a communication interface 904 (Communications Interface), a memory 906, and a communication bus 908.
Wherein:
processor 902, communication interface 904, and memory 906 communicate with each other via a communication bus 908.
A communication interface 904 for communicating with network elements of other devices, such as clients or other servers.
The processor 902 is configured to execute the program 910, and may specifically perform relevant steps in one of the foregoing method embodiments for recognizing a front face image.
In particular, the program 910 may include program code including computer-operating instructions.
The processor 902 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 906 is configured to store a program 910. The memory 906 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk memory.
The program 910 may be specifically configured to cause the processor 902 to perform operations corresponding to steps S1 to S5 in fig. 1, steps S21 to S22 in fig. 2, steps S201 to S203 in fig. 3, steps S210 to S212 in fig. 5, steps S2121 to S2122 in fig. 6, steps S2131 to S2133 in fig. 7, and steps S31 to S33 in fig. 8.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general-purpose systems may also be used with the teachings herein, and the structure required to construct such systems is apparent from the above description. In addition, the present invention is not directed to any particular programming language. It should be appreciated that the teachings of the present invention as described herein may be implemented in a variety of programming languages, and the foregoing description of particular languages is provided to disclose preferred embodiments of the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus for recognizing a frontal image according to an embodiment of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order. These words may be interpreted as names.

Claims (13)

1. A method of recognizing a front face image, comprising:
acquiring an image to be detected;
extracting pose parameters of the face in the image to be detected; the pose parameters are parameters of a face pose, wherein the face pose comprises a face deviation pose in which the face is tilted to the left or the right, a face turning pose in which the face is turned to the left or the right, and a face lifting pose in which the face is turned upwards or downwards; the pose parameter corresponding to the face deviation pose is a face deviation angle, the pose parameter corresponding to the face turning pose is a face turning coefficient, and the pose parameter corresponding to the face lifting pose is a face lifting coefficient; the extracting the pose parameters of the face in the image to be detected comprises: identifying feature information of a face part in the image to be detected; and calculating the face deviation angle, the face turning coefficient, and the face lifting coefficient of the face according to the feature information of the face part;
judging, in combination with the pose parameters, whether the face in the image to be detected meets a preset front face condition;
wherein the judging, in combination with the pose parameters, whether the face in the image to be detected meets the preset front face condition comprises: judging whether the face deviation angle is within a preset face deviation angle range, whether the face turning coefficient is within a preset face turning coefficient range, and whether the face lifting coefficient is within a preset face lifting coefficient range; if the face deviation angle is within the preset face deviation angle range, the face turning coefficient is within the preset face turning coefficient range, and the face lifting coefficient is within the preset face lifting coefficient range, determining that the face in the image to be detected meets the preset front face condition; otherwise, determining that the face in the image to be detected does not meet the preset front face condition;
if yes, determining that the image to be detected is a front face image;
if not, determining that the image to be detected is a non-front face image.
2. The method according to claim 1, wherein calculating the face deviation angle of the face in the image to be detected according to the feature information of the face part comprises:
constructing an image central axis of the image to be detected;
constructing a facial central axis of the face in the image to be detected according to the characteristic information of the face part;
and calculating an included angle between the central axis of the face and the central axis of the image, and taking the included angle as the face deviation angle.
3. The method according to claim 1, wherein calculating the face turning coefficient of the face according to the feature information of the face part comprises:
constructing a facial central axis of the face in the image to be detected according to the characteristic information of the face part;
based on the central axis of the face, dividing the face part in the image to be detected into a left face area and a right face area;
and combining the left face area and the right face area to determine the face turning coefficient.
4. The method of claim 3, wherein the combining the left face region and the right face region to determine the face turning coefficient comprises:
identifying a left width of the left face region and a right width of the right face region;
and calculating the face turning coefficient according to the left width and the right width.
5. The method of claim 3, wherein the combining the left face region and the right face region to determine the face turning coefficient further comprises:
acquiring the left width of the same face part in the left face area and the right width of the same face part in the right face area;
and calculating the face turning coefficient according to the left width and the right width.
6. The method according to claim 4 or 5, wherein the face turning coefficient is calculated from the left width and the right width as:
C_p = E_l / E_r
wherein C_p is the face turning coefficient, E_l is the left width, and E_r is the right width.
7. The method according to claim 1, wherein calculating the face lifting coefficient of the face according to the feature information of the face part comprises:
determining a first distance between the first part and the second part;
determining a second distance between a second part and a third part, wherein the first part, the second part and the third part belong to the face part, the first part is positioned above the second part, and the second part is positioned above the third part;
and calculating the face lifting coefficient according to the first distance and the second distance.
8. The method of claim 7, wherein the face lifting coefficient is calculated from the first distance and the second distance as:
C_r = H_1 / H_2
wherein C_r is the face lifting coefficient, H_1 is the first distance, and H_2 is the second distance.
9. The method of claim 7, wherein the first part is the eyes, the second part is the nose, and the third part is the mouth or the lower jaw.
10. The method according to any one of claims 1 to 9, wherein the feature information of the face part includes position information and shape information of the face part.
11. An apparatus for recognizing a front face image, comprising:
an acquisition module, configured to acquire an image to be detected;
an extraction module, configured to extract pose parameters of the face in the image to be detected; the pose parameters are parameters of a face pose, wherein the face pose comprises a face deviation pose in which the face is tilted to the left or the right, a face turning pose in which the face is turned to the left or the right, and a face lifting pose in which the face is turned upwards or downwards; the pose parameter corresponding to the face deviation pose is a face deviation angle, the pose parameter corresponding to the face turning pose is a face turning coefficient, and the pose parameter corresponding to the face lifting pose is a face lifting coefficient; the extracting the pose parameters of the face in the image to be detected comprises: identifying feature information of a face part in the image to be detected; and calculating the face deviation angle, the face turning coefficient, and the face lifting coefficient of the face according to the feature information of the face part;
a judging module, configured to judge, in combination with the pose parameters, whether the face in the image to be detected meets a preset front face condition;
wherein the judging, in combination with the pose parameters, whether the face in the image to be detected meets the preset front face condition comprises: judging whether the face deviation angle is within a preset face deviation angle range, whether the face turning coefficient is within a preset face turning coefficient range, and whether the face lifting coefficient is within a preset face lifting coefficient range; if the face deviation angle is within the preset face deviation angle range, the face turning coefficient is within the preset face turning coefficient range, and the face lifting coefficient is within the preset face lifting coefficient range, determining that the face in the image to be detected meets the preset front face condition; otherwise, determining that the face in the image to be detected does not meet the preset front face condition;
a first determining module, configured to determine that the image to be detected is a front face image when the face in the image to be detected meets the preset front face condition;
a second determining module, configured to determine that the image to be detected is a non-front face image when the face in the image to be detected does not meet the preset front face condition.
12. A computing device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform operations corresponding to the method of recognizing a front face image according to any one of claims 1-10.
13. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the method of recognizing a front face image according to any one of claims 1 to 10.
CN201910239957.4A 2019-03-27 2019-03-27 Method and device for recognizing front face image and computing equipment Active CN110096958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910239957.4A CN110096958B (en) 2019-03-27 2019-03-27 Method and device for recognizing front face image and computing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910239957.4A CN110096958B (en) 2019-03-27 2019-03-27 Method and device for recognizing front face image and computing equipment

Publications (2)

Publication Number Publication Date
CN110096958A CN110096958A (en) 2019-08-06
CN110096958B true CN110096958B (en) 2023-05-12

Family

ID=67444010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910239957.4A Active CN110096958B (en) 2019-03-27 2019-03-27 Method and device for recognizing front face image and computing equipment

Country Status (1)

Country Link
CN (1) CN110096958B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801066B (en) * 2021-04-12 2022-05-17 北京圣点云信息技术有限公司 Identity recognition method and device based on multi-posture facial veins
CN113901930A (en) * 2021-10-13 2022-01-07 杭州萤石软件有限公司 A face detection method, electronic device and medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408943A (en) * 2007-10-09 2009-04-15 三星电子株式会社 Method for generating a training set for human face detection
JP4577410B2 (en) * 2008-06-18 2010-11-10 ソニー株式会社 Image processing apparatus, image processing method, and program
CN105046245B (en) * 2015-08-28 2018-08-03 深圳英飞拓科技股份有限公司 Video human face method of determination and evaluation
CN105528584B (en) * 2015-12-23 2019-04-12 浙江宇视科技有限公司 A kind of detection method and device of face image
CN106203400A (en) * 2016-07-29 2016-12-07 广州国信达计算机网络通讯有限公司 A kind of face identification method and device
CN106791365A (en) * 2016-11-25 2017-05-31 努比亚技术有限公司 Facial image preview processing method and processing device
CN108921148A (en) * 2018-09-07 2018-11-30 北京相貌空间科技有限公司 Determine the method and device of positive face tilt angle

Also Published As

Publication number Publication date
CN110096958A (en) 2019-08-06

Similar Documents

Publication Publication Date Title
US20200210702A1 (en) Apparatus and method for image processing to calculate likelihood of image of target object detected from input image
US9842247B2 (en) Eye location method and device
CN109389135B (en) Image screening method and device
CN105608448B (en) A method and device for extracting LBP features based on facial key points
WO2015165365A1 (en) Facial recognition method and system
CN112384127B (en) Eyelid ptosis detection method and system
CN108230293A (en) Determine method and apparatus, electronic equipment and the computer storage media of quality of human face image
CN108229301B (en) Eyelid line detection method and device and electronic equipment
CN103810491B (en) Head posture estimation interest point detection method fusing depth and gray scale image characteristic points
CN106709450A (en) Recognition method and system for fingerprint images
CN111598038B (en) Facial feature point detection method, device, equipment and storage medium
CN108229268A (en) Expression recognition and convolutional neural network model training method and device and electronic equipment
CN110598647B (en) Head posture recognition method based on image recognition
CN109993021A (en) Face detection method, device and electronic device
CN111652082A (en) Face liveness detection method and device
CN112396050B (en) Image processing method, device and storage medium
WO2019205633A1 (en) Eye state detection method and detection apparatus, electronic device, and computer readable storage medium
JP6956986B1 (en) Judgment method, judgment device, and judgment program
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
CN110096958B (en) Method and device for recognizing front face image and computing equipment
CN110119720B (en) A Real-time Blink Detection and Human Pupil Center Location Method
WO2017061106A1 (en) Information processing device, image processing system, image processing method, and program recording medium
WO2021026281A1 (en) Adaptive hand tracking and gesture recognition using face-shoulder feature coordinate transforms
CN108875556A (en) Method, apparatus, system and the computer storage medium veritified for the testimony of a witness
CN113971841A (en) A living body detection method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200408

Address after: 1706, Fangda building, No. 011, Keji South 12th Road, high tech Zone, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen shuliantianxia Intelligent Technology Co.,Ltd.

Address before: 518000, building 10, building ten, building D, Shenzhen Institute of Aerospace Science and technology, 6 hi tech Southern District, Nanshan District, Shenzhen, Guangdong 1003, China

Applicant before: SHENZHEN H & T HOME ONLINE NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant