CN111626161A - Face recognition method and device, terminal and readable storage medium - Google Patents
Face recognition method and device, terminal and readable storage medium
- Publication number
- CN111626161A (application number CN202010415103.XA)
- Authority
- CN
- China
- Prior art keywords
- face
- face image
- image
- matching degree
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Collating Specific Patterns (AREA)
Abstract
The application discloses a face recognition method, which comprises the steps of obtaining a first face image; matching the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database, wherein the face feature database is established based on a local image library, and the second face image contains more face feature points than the first face image; and when the first matching degree of the second face image and the registered face image is greater than the first preset matching degree, determining that the face recognition is successful. The application also discloses a face recognition device, a terminal and a nonvolatile computer readable storage medium. According to the face recognition method, the second face image in the face feature database is matched with the registered face image, so that the influence of environmental factors on face recognition can be avoided, and the accuracy of face recognition can be improved.
Description
Technical Field
The present application relates to the field of biometric identification technologies, and in particular, to a face recognition method and apparatus, a terminal, and a non-volatile computer-readable storage medium.
Background
Face recognition is one of the earlier-developed biometric recognition technologies and is widely applied in scenes such as smartphone unlocking, mobile payment, attendance systems and access control. During face recognition, the acquired face image is usually matched directly with the registered face image, but in this case the accuracy of face recognition is strongly affected by the environment in which the user is located: when that environment is not ideal, the acquired face image may be unclear and its face feature points are easily lost, which reduces the accuracy of face recognition.
Disclosure of Invention
The embodiment of the application provides a face recognition method and device, a terminal and a nonvolatile computer readable storage medium.
The face recognition method comprises the steps of obtaining a first face image; matching the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database, wherein the face feature database is established based on a local image library, and the second face image contains more face feature points than the first face image; and when the first matching degree of the second face image and the registered face image is greater than a first preset matching degree, determining that the face recognition is successful.
The face recognition device comprises a first acquisition module, a matching module and a determination module, wherein the first acquisition module is used for acquiring a first face image; the matching module is used for matching the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database, the face feature database is established based on a local image library, and the second face image contains more face feature points than the first face image; the determining module is used for determining that the face recognition is successful when the first matching degree of the second face image and the registered face image is greater than a first preset matching degree.
The terminal comprises a processor configured to obtain a first face image; match the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database, wherein the face feature database is established based on a local image library, and the second face image contains more face feature points than the first face image; and, when the first matching degree of the second face image and the registered face image is greater than a first preset matching degree, determine that the face recognition is successful.
A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the above-described face recognition method of embodiments of the present application. The face recognition method comprises the steps of obtaining a first face image; matching the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database, wherein the face feature database is established based on a local image library, and the second face image contains more face feature points than the first face image; and when the first matching degree of the second face image and the registered face image is greater than a first preset matching degree, determining that the face recognition is successful.
In the face recognition method, the face recognition apparatus, the terminal, and the non-volatile computer-readable storage medium according to the embodiments of the present application, the acquired first face image is matched with a preset face feature database to obtain a second face image in the database that matches the first face image, and face recognition is determined to be successful when the first matching degree between the second face image and the registered face image is greater than the first predetermined matching degree. Because the face feature database is built based on the local gallery, the second face image is in effect obtained from the local gallery and contains more face feature points than the first face image. Using the second face image for matching with the registered face image therefore effectively avoids the influence of external environmental factors on face recognition and improves its accuracy.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a face recognition method according to some embodiments of the present application;
FIG. 2 is a block diagram of a face recognition apparatus according to some embodiments of the present application;
FIG. 3 is a schematic block diagram of a terminal according to some embodiments of the present application;
FIG. 4 is a schematic block diagram of a terminal according to some embodiments of the present application;
FIG. 5 is a schematic flow chart diagram of a face recognition method according to some embodiments of the present application;
FIG. 6 is a block diagram of a face recognition apparatus according to some embodiments of the present application;
FIG. 7 is a schematic flow chart diagram of a face recognition method according to some embodiments of the present application;
FIG. 8 is a block diagram of a face recognition apparatus according to some embodiments of the present application;
FIG. 9 is a schematic plan view of a person classification database according to some embodiments of the present application;
FIG. 10 is a schematic flow chart diagram of a face recognition method according to some embodiments of the present application;
FIG. 11 is a schematic flow chart diagram of a face recognition method according to some embodiments of the present application;
FIG. 12 is a block diagram of a face recognition apparatus according to some embodiments of the present application;
FIG. 13 is a schematic plan view of a person picture library according to some embodiments of the present application;
FIG. 14 is a schematic plan view of a feature picture of certain embodiments of the present application;
FIG. 15 is a schematic flow chart diagram of a face recognition method according to some embodiments of the present application;
FIG. 16 is a block diagram of a matching module of certain embodiments of the present application; and
FIG. 17 is a schematic diagram of a connection between a computer-readable storage medium and a processor according to some embodiments of the present application.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout.
In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
In this application, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may mean that the two features are in direct contact or that they contact each other indirectly through an intervening medium. Moreover, a first feature being "on," "above," or "over" a second feature may mean that the first feature is directly or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature. A first feature being "under," "below," or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
Referring to fig. 1 to 3, a face recognition method according to an embodiment of the present application includes the following steps:
011: acquiring a first face image;
012: matching the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database; and
013: and when the first matching degree of the second face image and the registered face image is greater than the first preset matching degree, determining that the face recognition is successful.
The face feature database is established based on the local image library, and the second face image comprises more face feature points than the first face image.
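For illustration only, the minimal Python sketch below walks through steps 011 to 013. The helper matching_degree(), the set-of-labels representation of face feature points, and the function names are assumptions made here for readability and are not part of the disclosure.

```python
def matching_degree(points_a, points_b):
    # Illustrative set-overlap measure of how well two collections of face
    # feature points agree; the disclosure does not prescribe a formula.
    pa = set(points_a)
    return len(pa & set(points_b)) / len(pa) if pa else 0.0


def recognize(first_face_points, face_feature_database, registered_face_points,
              first_predetermined_degree=0.95):
    """Sketch of steps 011-013. face_feature_database maps each person to the
    face feature points of a candidate second face image from the local gallery."""
    best_points, best_degree = None, 0.0
    # Step 012: find the second face image that best matches the first face image.
    for second_face_points in face_feature_database.values():
        degree = matching_degree(first_face_points, second_face_points)
        if degree > best_degree:
            best_points, best_degree = second_face_points, degree
    if best_points is None:
        return False
    # Step 013: succeed when the second face image matches the registered face
    # image at or above the first predetermined matching degree (95% example).
    return matching_degree(best_points, registered_face_points) >= first_predetermined_degree
```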
In some embodiments, the face recognition apparatus 10 includes a first obtaining module 11, a matching module 12 and a determining module 13, and the first obtaining module 11, the matching module 12 and the determining module 13 can be respectively used to implement step 011, step 012 and step 013. Namely, the first obtaining module 11 is configured to obtain a first face image; the matching module 12 is configured to match the first face image with a preset face feature database to obtain a second face image in the face feature database, where the second face image is matched with the first face image; the determining module 13 is configured to determine that the face recognition is successful when a first matching degree between the second face image and the registered face image is greater than a first predetermined matching degree.
In some embodiments, the terminal 100 further includes a processor 20, the processor 20 being configured to obtain a first face image; matching the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database; and when the first matching degree of the second face image and the registered face image is greater than the first preset matching degree, determining that the face recognition is successful. That is, step 011, step 012, and step 013 can be implemented by processor 20.
In the face recognition method, the face recognition apparatus 10, the terminal 100 and the non-volatile computer-readable storage medium according to the embodiments of the present application, the acquired first face image is matched with the preset face feature database to obtain a second face image in the database that matches the first face image, and face recognition is determined to be successful when the first matching degree between the second face image and the registered face image is greater than the first predetermined matching degree. Because the face feature database is built based on the local gallery, the second face image is in effect obtained from the local gallery and contains more face feature points than the first face image. Using the second face image for matching with the registered face image therefore effectively avoids the influence of external environmental factors on face recognition and improves its accuracy.
Specifically, the terminal 100 includes a housing 30 and a processor 20. The processor 20 is mounted within the housing 30. More specifically, the terminal 100 may be a mobile phone, a tablet computer, a display, a notebook computer, a teller machine, a gate, an entrance guard, a cash register, a vending machine, or the like. In the embodiment of the present application, the terminal 100 is a mobile phone as an example, and it is understood that the specific form of the terminal 100 is not limited to the mobile phone. The housing 30 may also be used to mount functional modules of the terminal 100, such as an imaging device (i.e., the camera 40), a power supply device, and a communication device, so that the housing 30 provides protection for the functional modules against dust, falling, water, and the like.
Referring to fig. 3 and 4, in step 011, a first face image is acquired. Specifically, the first face image including a face may be obtained through the camera 40: when the user needs to unlock the terminal 100 by face recognition, the camera 40 automatically acquires the first face image of the user. The first face image may be a picture of the user's face taken by the camera 40 or a video of the user's face captured by the camera 40; the specific form of the first face image is not limited here. The camera 40 includes a front camera 41 and a rear camera 42, and either the front camera 41 or the rear camera 42 of the terminal 100 may acquire the first face image.
In step 012, the first face image is matched with the preset face feature database to obtain a second face image in the face feature database that matches the first face image. Specifically, the terminal 100 has a local gallery, and the preset face feature database is established based on this local gallery; that is, the local gallery includes a plurality of pictures, the face feature points of these pictures are extracted, and the face feature database is then established from them. The face feature database may contain a plurality of face images. The first face image obtained in step 011 is matched with these face images, and a face image with a high matching degree with the first face image may be selected from the face feature database as the second face image. The second face image is therefore also obtained based on the local gallery, contains more face feature points than the first face image, and is less susceptible to the influence of the external environment than the first face image.
Further, the local gallery includes multiple pictures, which may include pictures taken by the front camera 41 and pictures taken by the rear camera 42. In most terminals 100 such as mobile phones, the rear camera 42 has more pixels than the front camera 41, so pictures taken by the rear camera 42 are clearer than those taken by the front camera 41. In one embodiment, the pictures of persons in the local gallery are all taken by the rear camera 42, so the pictures in the local gallery are clearer; that is, the second face image is also a picture taken by the rear camera 42, and the face recognition method of this embodiment is therefore more accurate than directly performing face recognition with the first face image taken by the front camera 41.
In step 013, when the first matching degree between the second face image and the registered face image is greater than the first predetermined matching degree, it is determined that face recognition is successful. Specifically, the registered face image is registered in the terminal 100 in advance. After the second face image is acquired, it is matched with the registered face image and the first matching degree between them is compared with the first predetermined matching degree. If the first matching degree is greater than the first predetermined matching degree, face recognition is determined to be successful; if the first matching degree is equal to the first predetermined matching degree, face recognition is also determined to be successful; and if the first matching degree is less than the first predetermined matching degree, face recognition is determined to have failed. The first predetermined matching degree may be 85%, 88%, 90%, 91%, 92%, 93%, 94%, 95%, 96%, 97%, 98%, or the like, and the greater the first predetermined matching degree, the higher the accuracy of face recognition. In one example, the first predetermined matching degree is 95%.
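The disclosure does not prescribe how a matching degree is computed. One common choice, assumed here purely for illustration and not stated in the patent, is cosine similarity between face feature vectors expressed as a percentage:

```python
import math


def cosine_matching_degree(vec_a, vec_b):
    # Cosine similarity of two face feature vectors, expressed as a percentage.
    # Using cosine similarity is an illustrative assumption; the disclosure only
    # requires some matching degree that can be compared against a threshold.
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm = math.sqrt(sum(a * a for a in vec_a)) * math.sqrt(sum(b * b for b in vec_b))
    return 100.0 * dot / norm if norm else 0.0


# Recognition succeeds when the degree reaches the first predetermined matching
# degree (95% in the example given above).
if cosine_matching_degree([0.10, 0.90, 0.40], [0.12, 0.88, 0.41]) >= 95.0:
    print("face recognition successful")
```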
Compared with directly matching the first face image with the registered face image, the face recognition method first matches the first face image with the preset face feature database to obtain the second face image, and then matches the second face image with the registered face image, so that the image compared with the registered face image comes from the local gallery and is less affected by the environment in which the first face image was acquired.
Referring to fig. 5 and 6, in some embodiments, the face recognition method further includes the following steps:
014: acquiring a third face image;
015: calculating a second matching degree of the third face image and the face feature database;
016: when the second matching degree is greater than a second preset matching degree, generating a registered face image according to the face characteristic points matched with the third face image in the face characteristic database; and
017: and when the second matching degree is smaller than the second preset matching degree, generating a registered face image according to the face characteristic points in the third face image.
In some embodiments, the face recognition apparatus 10 further includes a second obtaining module 14, a calculating module 15, a first generating module 16, and a second generating module 17. The second obtaining module 14 is configured to obtain a third face image, and the calculating module 15 is configured to calculate a second matching degree between the third face image and the face feature database. The first generating module 16 is configured to generate the registered face image according to the face feature points in the face feature database that match the third face image when the second matching degree is greater than the second predetermined matching degree. The second generating module 17 is configured to generate the registered face image according to the face feature points in the third face image when the second matching degree is less than the second predetermined matching degree. That is, the second obtaining module 14 may be configured to implement step 014, the calculating module 15 may be configured to implement step 015, the first generating module 16 may be configured to implement step 016, and the second generating module 17 may be configured to implement step 017.
In some embodiments, the processor 20 may be further configured to acquire a third face image; calculating a second matching degree of the third face image and the face feature database; when the second matching degree is greater than a second preset matching degree, generating a registered face image according to the face characteristic points matched with the third face image in the face characteristic database; and when the second matching degree is smaller than the second preset matching degree, generating a registered face image according to the face characteristic points in the third face image. That is, the processor 20 may also be configured to implement step 014, step 015, step 016 and step 017.
Specifically, the third face image is acquired by the camera 40, either by the front camera 41 or by the rear camera 42. The third face image may be a picture or a video including a face, taken by the camera 40 in the user's current environment. The face feature points in the third face image are then extracted by means of an algorithm or the like, and the second matching degree between these face feature points and the face feature database is calculated.
When the second matching degree is greater than the second predetermined matching degree, the face feature points in the face feature database that match the third face image are determined; in other words, the face feature points that the face feature database has in common with the third face image are determined, and the registered face image is then generated from these common face feature points, so that the registered face image is more accurate. Alternatively, when the second matching degree is greater than the second predetermined matching degree, the face image data in the face feature database that matches the third face image may be determined, and the registered face image may then be generated from that matching face image data. Because the local gallery includes pictures taken by the front camera 41 and pictures taken by the rear camera 42, the pictures in the local gallery are clearer than the third face image, and their face feature points are more accurate and more numerous than those of the third face image, so the generated registered face image is more accurate and the accuracy of face recognition can be improved.
When the second matching degree is less than or equal to the second predetermined matching degree, the registered face image is generated from the face feature points in the third face image; in this case the registered face image only contains the face feature points of the third face image.
The second predetermined matching degree is a preset value; it may be equal to, greater than, or smaller than the first predetermined matching degree, which is not limited herein. The second predetermined matching degree may be 60%, 65%, 70%, 75%, 80%, 85%, 90%, 95%, or the like. In one example, the second predetermined matching degree is 70%: since the face feature database is established based on a local gallery containing a plurality of pictures of the user, when the second matching degree is greater than 70% it can essentially be determined that the person currently operating the terminal is the user himself or herself.
Compared with directly using the third face image to generate the registered face image, the present method calculates the second matching degree between the third face image and the face feature database; when the second matching degree is greater than the second predetermined matching degree, the registered face image is generated from the face feature points in the face feature database that match the third face image, and when the second matching degree is smaller than the second predetermined matching degree, the third face image itself is used to generate the registered face image. The registered face image obtained in this way is more accurate.
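A sketch of this registration flow (steps 014 to 017) is given below. The function name generate_registered_face_image, the set-intersection treatment of "matched feature points", and the 70% threshold are illustrative assumptions rather than requirements of the disclosure.

```python
def matching_degree(points_a, points_b):
    # Illustrative set-overlap measure, as in the earlier sketch.
    pa = set(points_a)
    return len(pa & set(points_b)) / len(pa) if pa else 0.0


def generate_registered_face_image(third_face_points, face_feature_database,
                                   second_predetermined_degree=0.70):
    """Sketch of steps 014-017: decide where the registered face image's
    feature points come from, based on the second matching degree."""
    best_points, best_degree = None, 0.0
    for feature_face_points in face_feature_database.values():
        degree = matching_degree(third_face_points, feature_face_points)
        if degree > best_degree:
            best_points, best_degree = feature_face_points, degree
    if best_points is not None and best_degree > second_predetermined_degree:
        # Step 016: build the registered face image from the matched feature points.
        return set(third_face_points) & set(best_points)
    # Step 017: otherwise use the feature points of the third face image itself.
    return set(third_face_points)
```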
In some embodiments, when the second matching degree is greater than the second predetermined matching degree, the terminal may prompt on its display interface whether the user agrees to generate the registered face image from the feature points in the face feature database that match the third face image. If the user does not agree, the registered face image is generated from the feature points of the third face image; if the user agrees, the registered face image is generated from the feature points in the face feature database that match the third face image.
Referring to fig. 7 to 9, in some embodiments, the face recognition method further includes the following steps:
018: identifying pictures which accord with preset conditions in the local image library; and
019: classifying the pictures to form a person classification database K10, where the person classification database K10 includes at least one person picture library K11, each corresponding to one person.
In some embodiments, the face recognition device 10 further comprises a recognition module 18 and a classification module 19. The recognition module 18 can be configured to recognize pictures in the local gallery that meet preset conditions, and the classification module 19 can be configured to classify the pictures to form a person classification database K10, the person classification database K10 including at least one person picture library K11, each corresponding to one person. That is, the recognition module 18 may be used to implement step 018, and the classification module 19 may be used to implement step 019.
In some embodiments, the processor 20 may be further configured to identify pictures in the local gallery that meet preset conditions, and to classify the pictures to form a person classification database K10, the person classification database K10 including at least one person picture library K11, each corresponding to one person. That is, the processor 20 may also implement step 018 and step 019.
Specifically, the local gallery includes a plurality of pictures whose content may differ: some pictures mainly show flowers, plants and trees, some mainly show the sea, some mainly show tall buildings, and some mainly show people. First, pictures meeting preset conditions need to be identified from the local gallery. The preset conditions may be that a picture must include a person and that the person must have face data; the preset conditions may also include other requirements, for example that the picture was taken within a certain period of time, or that the picture is sufficiently clear.
After the pictures in the local gallery that meet the preset conditions are identified, they are classified to form the person classification database K10. Specifically, the persons appearing in these pictures may differ, that is, the face feature points of different persons (such as person 1, person 2, person 3, etc.) differ, so the pictures meeting the preset conditions need to be classified: for example, all pictures of person 1 are grouped into one category to generate a person 1 picture library, all pictures of person 2 are grouped to generate a person 2 picture library, and all pictures of person 3 are grouped to generate a person 3 picture library. All the person picture libraries together form the person classification database K10, and the person classification database K10 thus includes at least one person picture library K11, each corresponding to one person.
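One possible, purely illustrative way to hold the person classification database K10 in memory is a mapping from a person identifier to that person's picture library K11; the field names path, feature_points and shot_time below are assumptions for this sketch, not terms of the disclosure.

```python
# Hypothetical in-memory layout: each key is a person identifier, each value is
# that person's picture library K11, and feature points are opaque labels here.
person_classification_database = {
    "person_1": [  # person 1 picture library K11
        {"path": "IMG_0001.jpg", "feature_points": {"p1", "p2", "p3"}, "shot_time": 1588500000},
        {"path": "IMG_0042.jpg", "feature_points": {"p1", "p3", "p4"}, "shot_time": 1589000000},
    ],
    "person_2": [  # person 2 picture library K11
        {"path": "IMG_0100.jpg", "feature_points": {"p5", "p6"}, "shot_time": 1589100000},
    ],
}
```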
Further, referring to fig. 10, in some embodiments, step 018 comprises the steps of:
0181: acquiring a picture in a local gallery;
0182: identifying person information included in the picture; and
0183: and when only one person exists in the picture, identifying whether the picture comprises the face of the person.
In some embodiments, the recognition module 18 is further configured to obtain a picture in the local gallery; identify person information included in the picture; and, when only one person exists in the picture, identify whether the picture includes the face of that person. That is, the recognition module 18 is also used to implement step 0181, step 0182 and step 0183.
In some embodiments, the processor 20 is further configured to obtain a picture in the local gallery; identify person information included in the picture; and, when only one person exists in the picture, identify whether the picture includes the face of that person. That is, the processor 20 is also configured to implement step 0181, step 0182 and step 0183.
Specifically, a picture is selected from the local gallery, and the person information in the picture is identified. If the picture contains no person information, the picture is discarded and the next picture is extracted for identification. If a picture includes person information, it is determined whether the picture includes only one person; if so, it is identified whether the picture includes a face. If the picture includes a face, step 019 is performed on the picture; if it does not, the picture is discarded. In this way, inaccurate face feature points caused by multiple persons appearing in a picture can be avoided.
Further, the pictures that include one person and that person's face are classified. Specifically, if the picture is the first such picture, a first person picture library is established. If it is not the first picture, the picture is matched with the person classification database already formed, that is, the matching degree between the picture and every existing person picture library is calculated. If the matching degree between the picture and one of the person picture libraries is greater than a preset matching degree, the picture is classified into that person picture library; if the matching degree between the picture and every person picture library is less than the preset matching degree, the picture matches none of them, a new person picture library is established, and the picture is stored in it. The preset matching degree may be 75%, 78%, 80%, 83%, 85%, 88%, 90%, or the like.
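The classification just described can be sketched as follows; classify_picture, the dictionary-based library layout from the earlier example, and the 80% threshold are illustrative assumptions, not the disclosure's own implementation.

```python
def matching_degree(points_a, points_b):
    # Illustrative set-overlap measure; the disclosure does not fix a formula.
    return len(points_a & points_b) / len(points_a) if points_a else 0.0


def classify_picture(picture, database, preset_degree=0.80):
    """Append the picture to the best-matching person picture library, or open a
    new library when no existing library reaches the preset matching degree
    (80% is one of the example values listed above)."""
    best_person, best_degree = None, 0.0
    for person, library in database.items():
        for stored in library:
            degree = matching_degree(picture["feature_points"], stored["feature_points"])
            if degree > best_degree:
                best_person, best_degree = person, degree
    if best_person is not None and best_degree >= preset_degree:
        database[best_person].append(picture)
    else:
        database[f"person_{len(database) + 1}"] = [picture]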
Referring to fig. 11 to 14, in some embodiments, the face recognition method further includes the following steps:
020: acquiring, from the person picture library K11, feature pictures K111 whose number of face feature points is greater than a preset number;
021: integrating the face feature points of the feature pictures K111 to form a feature face image corresponding to the person picture library K11; and
022: acquiring the feature face images of all the person picture libraries K11 in the person classification database to form the face feature database.
In some embodiments, the face recognition apparatus 10 further includes a third obtaining module 20, an integrating module 21, and a fourth obtaining module 22. The third obtaining module 20 may be configured to obtain, from the person picture library K11, feature pictures K111 whose number of face feature points is greater than a preset number; the integrating module 21 may be configured to integrate the face feature points of the feature pictures K111 to form a feature face image corresponding to the person picture library K11; and the fourth obtaining module 22 may acquire the feature face images of all the person picture libraries K11 in the person classification database to form the face feature database. That is, the third obtaining module 20 may be used to implement step 020, the integrating module 21 may be used to implement step 021, and the fourth obtaining module 22 may be used to implement step 022.
In some embodiments, the processor 20 may be further configured to obtain, from the person picture library K11, feature pictures K111 whose number of face feature points is greater than a preset number; integrate the face feature points of the feature pictures K111 to form a feature face image corresponding to the person picture library K11; and acquire the feature face images of all the person picture libraries K11 in the person classification database to form the face feature database. That is, the processor 20 can also be used to implement step 020, step 021 and step 022.
Specifically, the person classification database includes a plurality of person picture libraries, each corresponding to one person, and each person picture library may include a plurality of pictures, such as the pictures P1, P2, P3 to Pn in fig. 13, each containing a certain number of face feature points. The face feature points of each picture in the person picture library are obtained, the number of face feature points in each picture is counted, and the pictures whose number of face feature points is greater than a preset number are saved as the feature pictures K111, such as the pictures t1, t2 and t3 in fig. 14. The preset number may be fixed or may be changed. In one embodiment, the pictures are ranked from 1st to Nth by number of face feature points, and the first M pictures are taken as feature pictures K111; that is, the number of face feature points of the Mth picture serves as the preset number, where N and M are positive integers.
Furthermore, the number of feature pictures K111 may be one, two, three, four, or more, and the face feature points of each picture may differ. Because these pictures contain many face feature points, and in order to enrich the face feature data, the face feature points of the feature pictures K111 need to be integrated; that is, the face feature points of the multiple pictures among the feature pictures K111 are superimposed, and the feature face image corresponding to the feature pictures K111 is thereby obtained. Referring to fig. 14, if fig. 14 corresponds to the feature pictures K111 of the person 1 picture library, the feature pictures K111 of the person 1 picture library include the picture t1, the picture t2 and the picture t3, and the face feature points in the picture t1, the picture t2 and the picture t3 are superimposed to obtain the feature face image of the person 1 picture library.
Further, referring to fig. 9, the feature pictures K111 of all the person picture libraries K11 in the person classification database K10 are obtained, that is, the feature pictures of the person 1 picture library, the person 2 picture library, and so on up to the person N picture library are obtained. Each person picture library K11 corresponds to its own feature pictures K111, and the feature face image of the corresponding person picture library K11 is obtained by integrating the face feature points of those feature pictures K111; the resulting feature face images together form the face feature database. Because the pictures among the feature pictures K111 contain many face feature points, the feature face image formed by integrating them can represent the face data of the corresponding user, which reduces the amount of calculation while keeping the accuracy of face recognition high.
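A compact sketch of steps 020 to 022, under the same illustrative data layout as before, might look like the following; ranking by feature-point count and keeping the top three pictures mirror the examples above but are assumptions, not requirements.

```python
def build_face_feature_database(person_classification_database, top_m=3):
    """Sketch of steps 020-022: rank each person's pictures by their number of
    face feature points, keep the top ones as feature pictures K111, and
    superimpose (here: take the union of) their feature points to form the
    feature face image of that person picture library."""
    face_feature_database = {}
    for person, library in person_classification_database.items():
        ranked = sorted(library, key=lambda p: len(p["feature_points"]), reverse=True)
        feature_pictures = ranked[:top_m]
        feature_face_image = set()
        for picture in feature_pictures:
            feature_face_image |= picture["feature_points"]
        face_feature_database[person] = feature_face_image
    return face_feature_database
```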
Of course, before step 020 is performed, that is, before the feature pictures K111 whose number of face feature points is greater than the preset number are acquired from the person picture library K11, the pictures in the person picture library K11 may be filtered to avoid inaccurate face feature points from pictures taken years ago. For example, pictures in the person picture library K11 taken more than two years ago, more than five hundred days ago, or more than three hundred and sixty-five days ago may be filtered out, so that the pictures remaining in the person picture library K11 are more recent and closer to the person's current actual face, and the feature pictures and feature face images finally obtained are more accurate.
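Such a time-based pre-filter could be expressed, again only as an assumption-laden sketch using the same picture layout, as:

```python
import time


def drop_old_pictures(library, max_age_days=365):
    """Optional pre-filter described above: keep only pictures shot within the
    last max_age_days days (365 matches one of the example windows)."""
    cutoff = time.time() - max_age_days * 24 * 3600
    return [picture for picture in library if picture["shot_time"] >= cutoff]
```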
Referring to fig. 15 and 16, in some embodiments, step 012 includes the following steps:
0121: extracting face characteristic points of the first face image;
0122: matching the face characteristic points with a plurality of characteristic face images in a face characteristic database; and
0123: acquiring the second face image from the person picture library corresponding to the successfully matched feature face image.
In some embodiments, the matching module 12 includes an extracting unit 121, a matching unit 122, and an obtaining unit 123. The extracting unit 121 may be configured to extract the face feature points of the first face image; the matching unit 122 may be configured to match the face feature points with a plurality of feature face images in the face feature database; and the obtaining unit 123 may be configured to obtain the second face image from the person picture library corresponding to the successfully matched feature face image. That is, the extracting unit 121 may be used to implement step 0121, the matching unit 122 may be used to implement step 0122, and the obtaining unit 123 may be used to implement step 0123.
In some embodiments, the processor 20 is further configured to extract the face feature points of the first face image; match the face feature points with a plurality of feature face images in the face feature database; and acquire the second face image from the person picture library corresponding to the successfully matched feature face image. That is, the processor 20 is also used to implement step 0121, step 0122 and step 0123.
Specifically, the first face image is acquired in step 011, and the face feature points in the first face image need to be extracted. The extracted face feature points of the first face image are then matched with a plurality of feature face images in the face feature database; the face feature database comprises a plurality of feature face images, each corresponding to one person, and this step mainly selects, from the plurality of feature face images, the feature face image that matches the first face image. For example, if the face feature points of the first face image match the feature face image corresponding to person 2, it indicates that the camera 40 of the terminal 100 has acquired a face image of person 2. Of course, the face feature database may also include only one feature face image.
More specifically, a third matching degree between the extracted face feature points of the first face image and each feature face image in the face feature database is calculated. When the third matching degree between the face feature points of the first face image and one of the feature face images is greater than or equal to a third predetermined matching degree, the first face image and that feature face image are successfully matched; a feature face image whose third matching degree is less than the third predetermined matching degree is a feature face image for which matching failed. The third predetermined matching degree may be the same as or different from the second predetermined matching degree and/or the first predetermined matching degree, and may be 75%, 78%, 80%, 82%, 85%, 88%, 90%, or the like.
Further, the second face image is obtained from the person picture library corresponding to the successfully matched feature face image. Since the person picture library comprises a plurality of pictures, this amounts to selecting one picture from the local gallery as the second face image. Therefore, during face recognition, the picture matched with the registered face image is a picture from the local gallery, and because a picture in the local gallery is not affected by the external environment at the moment of recognition, the accuracy of face recognition is high.
Specifically, the person picture library includes the feature pictures, whose number of face feature points is greater than that of the other pictures, and the second face image is obtained from these feature pictures, that is, the second face image is one of the feature pictures. There is at least one feature picture; the picture among the feature pictures whose shooting time is closest to the current time may be selected as the second face image, or the picture with the most face feature points may be selected, or the second face image may be acquired from the feature pictures in another manner, which is not limited here. Selecting one of the feature pictures as the second face image makes face recognition more accurate.
In one embodiment, the number of feature pictures is three, and the second face image is the most recently shot of the feature pictures, so that the second face image is closest to the current user's real face and face recognition can be more accurate. Meanwhile, face recognition performed in this way is not easily affected by factors such as the current environment and lighting.
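Selecting the second face image from the feature pictures can then be as simple as the following sketch, which takes the most recently shot picture as in the example above; this is only one of the options the text allows.

```python
def select_second_face_image(feature_pictures):
    """Pick the second face image from the matched person's feature pictures.
    Taking the most recently shot picture follows the three-picture example
    above; selecting the picture with the most feature points is equally valid."""
    return max(feature_pictures, key=lambda picture: picture["shot_time"])
```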
Referring to fig. 1, in step 013, when the first matching degree between the second face image and the registered face image is greater than the first predetermined matching degree, it is determined that face recognition is successful. Specifically, the face feature points of the second face image are extracted and matched with the registered face image, and the first matching degree between the face feature points of the second face image and the registered face image is calculated. When the first matching degree is greater than the first predetermined matching degree, face recognition succeeds; when the first matching degree is less than the first predetermined matching degree, face recognition fails.
In some embodiments, when the face feature points of the first face image fail to match any feature face image in the face feature database, the first face image is matched directly with the registered face image: the face feature points of the first face image are matched with the registered face image, a fourth matching degree between them is calculated, and it is judged whether the fourth matching degree is greater than the first predetermined matching degree. If the fourth matching degree is greater than the first predetermined matching degree, face recognition succeeds; if it is less than the first predetermined matching degree, face recognition fails.
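This fallback can be sketched as follows; recognize_with_fallback and the threshold values are illustrative assumptions consistent with the example values mentioned earlier, not the disclosure's own implementation.

```python
def matching_degree(points_a, points_b):
    # Illustrative set-overlap measure, as in the earlier sketches.
    pa = set(points_a)
    return len(pa & set(points_b)) / len(pa) if pa else 0.0


def recognize_with_fallback(first_face_points, face_feature_database,
                            registered_face_points,
                            first_threshold=0.95, third_threshold=0.80):
    """If the first face image matches no feature face image, fall back to the
    "fourth matching degree": compare the first face image directly with the
    registered face image."""
    candidates = {person: points for person, points in face_feature_database.items()
                  if matching_degree(first_face_points, points) >= third_threshold}
    if not candidates:
        # Fallback path: fourth matching degree against the registered image.
        return matching_degree(first_face_points, registered_face_points) >= first_threshold
    # Normal path: pick the best match and compare it with the registered image.
    best = max(candidates, key=lambda p: matching_degree(first_face_points, candidates[p]))
    return matching_degree(candidates[best], registered_face_points) >= first_threshold
```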
Referring to fig. 2 and 17, one or more non-transitory computer-readable storage media 300 containing computer-executable instructions 302 according to embodiments of the present application, when the computer-executable instructions 302 are executed by one or more processors 20, cause the processors 20 to perform the face recognition method of any of the embodiments described above.
For example, referring to fig. 1 and 3 in conjunction, the computer-executable instructions 302, when executed by the one or more processors 20, cause the processors 20 to perform the steps of:
011: acquiring a first face image;
012: matching the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database; and
013: and when the first matching degree of the second face image and the registered face image is greater than the first preset matching degree, determining that the face recognition is successful.
As another example, referring to fig. 3 and 5 in conjunction, when the computer-executable instructions 302 are executed by one or more processors 20, the processors 20 may further perform the steps of:
014: acquiring a third face image;
015: calculating a second matching degree of the third face image and the face feature database;
016: when the second matching degree is greater than a second preset matching degree, generating a registered face image according to the face characteristic points matched with the third face image in the face characteristic database; and
017: and when the second matching degree is smaller than the second preset matching degree, generating a registered face image according to the face characteristic points in the third face image.
In the description herein, reference to the description of the terms "certain embodiments," "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means at least two, e.g., two or three, unless specifically limited otherwise.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations of the above embodiments may be made by those of ordinary skill in the art within the scope of the present application, which is defined by the claims and their equivalents.
Claims (12)
1. A face recognition method is characterized by comprising the following steps:
acquiring a first face image;
matching the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database, wherein the face feature database is established based on a local image library, and the second face image contains more face feature points than the first face image; and
and when the first matching degree of the second face image and the registered face image is greater than a first preset matching degree, determining that the face recognition is successful.
2. The face recognition method of claim 1, further comprising:
acquiring a third face image;
calculating a second matching degree of the third face image and the face feature database;
when the second matching degree is greater than a second preset matching degree, generating the registered face image according to the face characteristic points matched with the third face image in the face characteristic database; and
and when the second matching degree is smaller than the second preset matching degree, generating the registered face image according to the face characteristic points in the third face image.
3. The face recognition method of claim 1, further comprising:
identifying pictures which accord with preset conditions in the local image library; and
and classifying the pictures to form a person classification database, wherein the person classification database comprises at least one person picture library, each corresponding to one person.
4. The face recognition method of claim 3, further comprising:
acquiring, from the person picture library, feature pictures whose number of face feature points is larger than a preset number;
integrating the face feature points of the feature pictures to form a feature face image corresponding to the person picture library; and
acquiring the feature face images of all the person picture libraries in the person classification database to form the face feature database.
5. The method according to claim 4, wherein the matching the first face image with a preset face feature database to obtain a second face image in the face feature database, which matches the first face image, comprises:
extracting face feature points of the first face image;
matching the face feature points with a plurality of feature face images in the face feature database; and
acquiring the second face image from the person picture library corresponding to the successfully matched feature face image.
6. A face recognition apparatus, characterized in that the face recognition apparatus comprises:
the first acquisition module is used for acquiring a first face image;
the matching module is used for matching the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database, the face feature database is established based on a local image library, and the number of face feature points contained in the second face image is more than that contained in the first face image; and
and the determining module is used for determining that the face recognition is successful when the first matching degree of the second face image and the registered face image is greater than a first preset matching degree.
7. A terminal, characterized in that the terminal comprises a processor configured to:
acquiring a first face image;
matching the first face image with a preset face feature database to obtain a second face image matched with the first face image in the face feature database, wherein the face feature database is established based on a local image library, and the second face image contains more face feature points than the first face image; and
and when the first matching degree of the second face image and the registered face image is greater than a first preset matching degree, determining that the face recognition is successful.
8. The terminal of claim 7, wherein the processor is further configured to:
acquiring a third face image;
calculating a second matching degree of the third face image and the face feature database;
when the second matching degree is greater than a second preset matching degree, generating the registered face image according to the face characteristic points matched with the third face image in the face characteristic database; and
and when the second matching degree is smaller than the second preset matching degree, generating the registered face image according to the face characteristic points in the third face image.
9. The terminal of claim 7, wherein the processor is further configured to:
identifying pictures which accord with preset conditions in the local image library; and
and classifying the pictures to form a person classification database, wherein the person classification database comprises at least one person picture library, each corresponding to one person.
10. The terminal of claim 9, wherein the processor is further configured to:
acquiring, from the person picture library, feature pictures whose number of face feature points is larger than a preset number;
integrating the face feature points of the feature pictures to form a feature face image corresponding to the person picture library; and
acquiring the feature face images of all the person picture libraries in the person classification database to form the face feature database.
11. The terminal of claim 10, wherein the processor is further configured to:
extracting face feature points of the first face image;
matching the face feature points with a plurality of feature face images in the face feature database; and
acquiring the second face image from the person picture library corresponding to the successfully matched feature face image.
12. A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the face recognition method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010415103.XA CN111626161A (en) | 2020-05-15 | 2020-05-15 | Face recognition method and device, terminal and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010415103.XA CN111626161A (en) | 2020-05-15 | 2020-05-15 | Face recognition method and device, terminal and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111626161A true CN111626161A (en) | 2020-09-04 |
Family
ID=72259805
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010415103.XA Pending CN111626161A (en) | 2020-05-15 | 2020-05-15 | Face recognition method and device, terminal and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111626161A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112101200A (en) * | 2020-09-15 | 2020-12-18 | 北京中合万象科技有限公司 | A face anti-recognition method, system, computer equipment and readable storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104598138A (en) * | 2014-12-24 | 2015-05-06 | 三星电子(中国)研发中心 | Method and device for controlling electronic map |
CN105183156A (en) * | 2015-08-31 | 2015-12-23 | 小米科技有限责任公司 | Screen control method and apparatus |
CN105407201A (en) * | 2015-11-24 | 2016-03-16 | 小米科技有限责任公司 | Contact person information matching method and device, as well as terminal |
CN106067013A (en) * | 2016-06-30 | 2016-11-02 | 美的集团股份有限公司 | Embedded system face identification method and device |
CN107609508A (en) * | 2017-09-08 | 2018-01-19 | 深圳市金立通信设备有限公司 | A kind of face identification method, terminal and computer-readable recording medium |
CN109977765A (en) * | 2019-02-13 | 2019-07-05 | 平安科技(深圳)有限公司 | Facial image recognition method, device and computer equipment |
CN110795584A (en) * | 2019-09-19 | 2020-02-14 | 深圳云天励飞技术有限公司 | User identifier generation method and device and terminal equipment |
- 2020-05-15: CN application CN202010415103.XA filed (published as CN111626161A), status active, Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11074436B1 (en) | Method and apparatus for face recognition | |
CN110443016B (en) | Information leakage prevention method, electronic device and storage medium | |
CN106203297B (en) | A kind of personal identification method and device | |
CN107093066B (en) | Service implementation method and device | |
CN111914775B (en) | Living body detection method, living body detection device, electronic equipment and storage medium | |
US10558851B2 (en) | Image processing apparatus and method of generating face image | |
US20100329568A1 (en) | Networked Face Recognition System | |
CN112199530B (en) | Multi-dimensional face library picture automatic updating method, system, equipment and medium | |
CN104798103B (en) | Face recognition device, face recognition method, program for the same, and information apparatus | |
US11989975B2 (en) | Iris authentication device, iris authentication method, and recording medium | |
CN110991231B (en) | Living body detection method and device, server and face recognition equipment | |
CN112364827A (en) | Face recognition method and device, computer equipment and storage medium | |
EP3139308A1 (en) | People search system and people search method | |
CN106886774A (en) | The method and apparatus for recognizing ID card information | |
JP6969663B2 (en) | Devices and methods for identifying the user's imaging device | |
CN107346419B (en) | Iris recognition method, electronic device, and computer-readable storage medium | |
CN108805005A (en) | Auth method and device, electronic equipment, computer program and storage medium | |
CN110162462A (en) | Test method, system and the computer equipment of face identification system based on scene | |
CN109635625B (en) | Intelligent identity verification method, equipment, storage medium and device | |
CN110717428A (en) | Identity recognition method, device, system, medium and equipment fusing multiple features | |
CN111626161A (en) | Face recognition method and device, terminal and readable storage medium | |
CN112818874B (en) | Image processing method, device, equipment and storage medium | |
CN111767845B (en) | Certificate identification method and device | |
CN113469135A (en) | Method and device for determining object identity information, storage medium and electronic device | |
CN110929583A (en) | High-detection-precision face recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20200904 |