CN110991431A - Face recognition method, device, equipment and storage medium - Google Patents
- Publication number
- CN110991431A CN110991431A CN202010140394.6A CN202010140394A CN110991431A CN 110991431 A CN110991431 A CN 110991431A CN 202010140394 A CN202010140394 A CN 202010140394A CN 110991431 A CN110991431 A CN 110991431A
- Authority
- CN
- China
- Prior art keywords
- user
- face image
- cloud server
- face
- terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Library & Information Science (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Databases & Information Systems (AREA)
- Collating Specific Patterns (AREA)
Abstract
Embodiments of this specification relate to a face recognition method, apparatus, device, and storage medium. One of the methods comprises: acquiring a face image to be recognized; retrieving the user features corresponding to the face image both locally on the terminal and in a cloud server; and identifying the user corresponding to the face image according to the terminal-local retrieval result and the cloud-server retrieval result.
Description
Technical Field
Embodiments of the present disclosure relate to the field of computers, and in particular, to a face recognition method, a face recognition apparatus, a face recognition device, and a computer-readable storage medium.
Background
With the rapid development of the computer internet, more and more services need to identify users, both to provide personalized services and to ensure the security of user service information.
At present, to balance the convenience and security of identity verification, the means of identifying a user keep evolving and new means keep emerging: identifying the user by account number and password, by fingerprint, by iris, or by face. After the user's identity is verified in one of these ways, corresponding services, such as payment services, are provided to the user.
Further, existing face recognition schemes mainly collect a face image of the user and then recognize the corresponding user either only locally on the terminal or only in a cloud server.
However, when the user corresponding to a face image is recognized only locally on the terminal, recognition depends on user information stored on the terminal in advance; the face image to be recognized may not be stored there, so only a small set of users can be served. Conversely, when the user is recognized only in a cloud server, recognition is heavily affected by the network: face recognition takes a long time when the network fluctuates, the signal is weak, or the connection drops.
Disclosure of Invention
Embodiments of this specification provide a new technical solution for face recognition.
According to a first aspect of the present description, there is provided an embodiment of a face recognition method, including:
acquiring a face image to be recognized;
simultaneously retrieving the user characteristics corresponding to the face image both locally on the terminal and in a cloud server;
and identifying the user corresponding to the face image according to the user characteristic retrieval results in the local terminal and the cloud server.
Optionally, acquiring a face image to be recognized includes:
collecting an image of a user;
determining a quality score corresponding to the image through a face detection algorithm;
and taking the image with the quality score exceeding a preset threshold value as a face image to be recognized.
Optionally, identifying a user corresponding to the face image according to a user feature retrieval result in the local terminal and the cloud server includes:
when the user characteristics corresponding to the face image are retrieved first in the local terminal, identifying the user corresponding to the face image according to the user characteristics retrieved in the local terminal;
and when the user characteristics corresponding to the face image are retrieved first in the cloud server, identifying the user corresponding to the face image according to the user characteristics retrieved by the cloud server.
Optionally, before acquiring the face image to be recognized, the method further includes:
collecting environmental parameters of face recognition to be executed;
and when the environmental parameters meet preset conditions, determining to execute simultaneous recognition based on the local terminal and the cloud server.
Optionally, when the environmental parameter does not satisfy a preset condition, the method further includes:
determining to perform local identification based on the terminal according to the environment parameter; or
Determining to perform identification based on the cloud server; or
Stopping performing identification based on the local terminal, and stopping performing identification based on the cloud server.
Optionally, the environmental parameters include: at least one of a system security state, an algorithm version state, a terminal local storage state, and a network operational state.
Optionally, the method further comprises:
returning a service page containing user service data to the identified user;
acquiring the operation data of the user aiming at the service page;
and executing corresponding services according to the operation data.
According to a second aspect of the present specification, there is also provided a face recognition apparatus comprising:
the acquisition module is used for acquiring a face image to be recognized;
the retrieval module is used for simultaneously retrieving the user characteristics corresponding to the face image in the local terminal and the cloud server respectively;
and the identification module is used for identifying the user corresponding to the face image according to the user characteristic retrieval results in the local terminal and the cloud server.
According to a third aspect of the present specification, there is also provided an embodiment of a face recognition device, including the face recognition apparatus according to the second aspect of the present specification; or the device includes:
a memory for storing executable commands;
a processor for executing the face recognition method according to the first aspect of the present specification under the control of the executable command.
According to a fourth aspect of the present description, there is also provided an embodiment of a computer-readable storage medium, which stores executable instructions that, when executed by a processor, perform the face recognition method according to the first aspect of the present description.
In one embodiment, the user features corresponding to the face image are retrieved locally on the terminal and in the cloud server at the same time. Compared with retrieving only on the local terminal, this effectively covers face recognition for all users; compared with retrieving only in the cloud server, it reduces the time face recognition consumes.
Other features of the present description and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and together with the description, serve to explain the principles of the specification.
FIG. 1a is a schematic diagram of a scene that may be used to implement a face recognition method of an embodiment;
FIG. 1b is a block diagram of a hardware configuration of a face recognition device that can be used to implement the face recognition method of one embodiment;
FIG. 2 is a flow chart of a face recognition method according to a first embodiment;
FIG. 3 is a functional block diagram of a face recognition apparatus according to one embodiment;
FIG. 4 is a functional block diagram of a face recognition device according to one embodiment.
Detailed Description
Various exemplary embodiments of the present specification will now be described in detail with reference to the accompanying drawings.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Hardware configuration >
Referring to fig. 1a, the face recognition device 1000 displays to the user a page A containing the face recognition interface of a business system. When the user aims the face at the camera, the face recognition device 1000 acquires a face image to be recognized. The device 1000 retrieves the user features corresponding to the face image locally and, at the same time, sends a retrieval request to the cloud server 1900; the cloud server 1900 retrieves the user features corresponding to the face image and returns the result to the device 1000. The device 1000 then identifies the user corresponding to the face image according to the retrieval results from the local store and the cloud server 1900, and returns and displays a service page B containing user service data to the identified user. In this way, face recognition of all users can be effectively covered while the time consumed by face recognition is reduced.
Fig. 1b is a block diagram of a hardware configuration of a face recognition device to which a face recognition method according to an embodiment of the present specification can be applied.
The face recognition device 1000 may be a virtual machine or a physical machine. The face recognition apparatus 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, a speaker 1700, a microphone 1800, and the like. The processor 1100 may be a central processing unit CPU, a microprocessor MCU, or the like. The memory 1200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 1300 includes, for example, a USB interface, a headphone interface, and the like. Communication device 1400 is capable of wired or wireless communication, for example. The display device 1500 is, for example, a liquid crystal display panel, a touch panel, or the like. The input device 1600 may include, for example, a touch screen, a keyboard, and the like. A user can input/output voice information through the speaker 1700 and the microphone 1800.
As applied to this embodiment, the memory 1200 is used to store computer program instructions that control the processor 1100 to execute a face recognition method according to any embodiment of the present specification. A skilled person can design the instructions according to the disclosed solution; how instructions control the operation of the processor 1100 is well known in the art and is not described in detail here.
Although a plurality of devices are shown for the face recognition device 1000 in fig. 1b, embodiments of the present specification may involve only some of them; for example, the face recognition device 1000 may involve only the memory 1200 and the processor 1100.
Method example >
The present embodiment provides a face recognition method, as shown in fig. 2, the method includes the following steps:
s201: and acquiring a face image to be recognized.
In practical applications, in order to ensure convenience and security of user identity identification, a face is generally used to identify a user identity, and after the user identity is identified, a corresponding service, such as a payment service, is provided for the user.
Further, in the process of identifying the user identity by using the face, the identity of the user needs to be identified according to the face image, and therefore, in the embodiment of the present specification, the face image to be identified needs to be acquired.
It should be noted that, since a face image to be recognized needs to be acquired, in this embodiment of the present specification, an image of a user needs to be acquired through a camera.
During identification of the user's identity through the face, to ensure the accuracy of face recognition, the face images used must meet specific requirements, and not every collected image can be used; for example, whether the face in the image is complete, whether the eyes are open, whether the face angle meets the requirements, and whether the lighting meets the requirements.
It should be further noted that, in this embodiment of the specification, the collected images are comprehensively evaluated against the preset requirements, and an image that meets them is selected as the face image to be recognized: a quality score for each image is determined by a face detection algorithm, and an image whose quality score exceeds a preset threshold is used as the face image to be recognized.
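As a minimal sketch, the quality gate described above might look as follows. The scoring heuristic is purely illustrative (a stand-in for a real face detection algorithm), and the per-check sub-scores and the threshold value 0.8 are assumptions, not values given in the specification.

```python
def face_quality_score(frame):
    # Illustrative stand-in for a face detection algorithm: a frame is
    # modeled as per-check sub-scores (face completeness, eyes open,
    # face angle, lighting), each in [0, 1]; the quality score is their mean.
    return sum(frame) / len(frame)

def select_face_image(frames, threshold=0.8):
    # Return the first captured frame whose quality score exceeds the
    # preset threshold; None means no frame qualified for recognition.
    for frame in frames:
        if face_quality_score(frame) > threshold:
            return frame
    return None

# Example: the first frame fails (a low eyes-open sub-score drags its
# mean below the threshold); the second frame passes the gate.
frames = [(0.9, 0.4, 0.8, 0.7), (0.95, 0.9, 0.85, 0.9)]
selected = select_face_image(frames)
```

A real implementation would replace `face_quality_score` with the output of an actual face detection model; only the thresholding pattern is the point here.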
S202: and simultaneously retrieving the user characteristics corresponding to the face image in the local terminal and the cloud server respectively.
S203: and identifying the user corresponding to the face image according to the user characteristic retrieval results in the local terminal and the cloud server.
Further, in embodiments of this specification, after the face image to be recognized is obtained, the user features corresponding to it may be retrieved locally on the terminal and in the cloud server at the same time, and the user corresponding to the face image recognized according to the retrieval results from both.
It should be noted that the user features are generated from face images collected in advance, and the correspondence between users and user features is stored in advance both locally on the terminal and in the cloud server. Retrieving the user features corresponding to a face image, whether locally or in the cloud, can be done by generating features to be compared from the face image to be recognized and comparing them with the stored user features, thereby determining the user features corresponding to the face image.
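The comparison step just described can be sketched as a nearest-neighbor search over the stored features. The cosine-similarity metric and the 0.9 threshold are illustrative assumptions; the specification does not name a particular distance measure or feature dimensionality.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_user(query, stored, threshold=0.9):
    # Compare the feature generated from the image to be recognized
    # against every stored user feature; return the best match above
    # the threshold, or None if no stored user is close enough.
    best_user, best_sim = None, threshold
    for user, feature in stored.items():
        sim = cosine_similarity(query, feature)
        if sim > best_sim:
            best_user, best_sim = user, sim
    return best_user

# Toy 2-dimensional store; real face embeddings have hundreds of dimensions.
stored = {"alice": (1.0, 0.0), "bob": (0.0, 1.0)}
```

The same matching routine can back either the terminal-local store or the cloud-server store; they differ only in how many entries `stored` holds.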
Further, although under normal circumstances the user features corresponding to the face image are retrieved locally and in the cloud server at the same time, the cloud server stores far more data than the terminal and must first receive the retrieval instruction from the terminal over the network. In embodiments of this specification, local retrieval is therefore faster than cloud retrieval; that is, the terminal may obtain a retrieval result earlier than the cloud server.
Because local retrieval is faster, two situations may arise: in the first, the user features corresponding to the face image are retrieved first in the local terminal; in the second, they are not found locally, but are retrieved in the cloud server.
For the first case, since the user features have already been retrieved locally, there is no need to wait for the cloud server's result, and the user corresponding to the face image can be identified directly from the locally retrieved features. For the second case, since the user was not found locally, the cloud server's retrieval result must be awaited, and the user is identified according to the user features retrieved by the cloud server.
In summary, according to the user feature retrieval results in the terminal local and the cloud server, the specific steps for identifying the user corresponding to the face image may be as follows:
when the user characteristics corresponding to the face image are retrieved first in the local terminal, the user is identified according to the characteristics retrieved locally; when they are retrieved first in the cloud server, that is, when the user is not found locally, the user is identified according to the characteristics retrieved by the cloud server.
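A minimal sketch of this local-first race, assuming one thread per retrieval path; the in-memory dictionaries stand in for the terminal's local store and the cloud server, and the sleep durations merely model the cloud path being slower.

```python
import concurrent.futures
import time

LOCAL_STORE = {"img_a": "alice"}                   # small on-device store
CLOUD_STORE = {"img_a": "alice", "img_b": "bob"}   # larger cloud store

def retrieve_local(image):
    time.sleep(0.01)              # local lookup: fast
    return LOCAL_STORE.get(image)

def retrieve_cloud(image):
    time.sleep(0.05)              # cloud lookup: network round-trip, slower
    return CLOUD_STORE.get(image)

def recognize(image):
    # Launch both retrievals at the same time. If the local store has a
    # hit, use it without consuming the cloud result; otherwise fall back
    # to whatever the cloud returns (which may also be None).
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        local = pool.submit(retrieve_local, image)
        cloud = pool.submit(retrieve_cloud, image)
        user = local.result()
        if user is not None:
            return user           # local hit: cloud result is ignored
        return cloud.result()
```

This is only a concurrency pattern, not the patented implementation; a production system would also handle timeouts and cancellation of the losing path.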
By the method, the user characteristics corresponding to the face images are retrieved in the local terminal and the cloud server simultaneously, so that face recognition of all users can be effectively covered compared with the case that the user characteristics corresponding to the face images are retrieved only in the local terminal, and the time consumption of face recognition can be reduced compared with the case that the user characteristics corresponding to the face images are retrieved only in the cloud server.
In practical applications, the scene environment may make it impossible to retrieve the user features locally and in the cloud server at the same time. To avoid unnecessary waste of resources in such cases, the simultaneous retrieval should be abandoned before step S201 is executed.
It should be noted that, if the system is at risk, the user's information may be stolen; step S201 should therefore be executed only when the system is in a secure state. In this embodiment, the environmental parameter may be the system security state: when the acquired state is secure, simultaneous recognition based on the local terminal and the cloud server is performed; when it is insecure, both terminal-local recognition and cloud-server recognition are stopped.
Because an out-of-date algorithm version may make face recognition inaccurate or inefficient, step S201 should be executed only when the algorithm version is the latest. The environmental parameter may therefore be the algorithm version state: when the acquired version is the latest, simultaneous recognition based on the local terminal and the cloud server is performed; otherwise, both are stopped.
If the terminal has not previously obtained and stored user data from the cloud server, there is nothing to retrieve locally. The environmental parameter may therefore be the terminal's local storage state: when user data is stored locally, simultaneous recognition is performed; when it is not, terminal-local recognition is skipped and cloud-server recognition is performed directly.
Similarly, when the network is weak, cloud retrieval becomes slow or unreachable. The environmental parameter may therefore be the network operation state: when the network is strong, simultaneous recognition based on the local terminal and the cloud server is performed; when it is weak, cloud-server recognition is skipped and terminal-local recognition is performed directly.
In summary, in this embodiment the environmental parameter may be one or more of the system security state, the algorithm version state, the terminal local storage state, and the network operation state; these parameters may be combined in any way, or used individually, to determine whether simultaneous recognition based on the local terminal and the cloud server can be performed.
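The decision logic above can be condensed into a single dispatch function. The mode names and the priority order among the four parameters are illustrative assumptions; the specification only states which parameter triggers which fallback, not how conflicts are ordered.

```python
from dataclasses import dataclass

@dataclass
class EnvParams:
    system_secure: bool      # system security state
    algorithm_latest: bool   # algorithm version state
    local_data_stored: bool  # terminal local storage state
    network_strong: bool     # network operation state

def choose_mode(env):
    # Map the environmental parameters to a recognition mode.
    if not env.system_secure or not env.algorithm_latest:
        return "stop"              # stop both retrieval paths
    if not env.local_data_stored:
        return "cloud_only"        # nothing to search on the terminal
    if not env.network_strong:
        return "local_only"        # cloud retrieval would be slow or unreachable
    return "local_and_cloud"       # retrieve in both places simultaneously
```

Evaluating this gate before step S201 implements the "dynamic regulation" of the local and cloud links that the embodiment describes.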
In addition, the four environmental parameters above are only examples; the embodiments of this specification are not limited to them. Any other parameter that indicates whether simultaneous recognition on the local terminal and the cloud server is possible may also serve as a criterion for the decision.
In this way, when face recognition is performed, the use of the terminal-local and cloud-server links can be dynamically regulated according to the actual environmental parameters.
Further, in practical applications, after the identity of the user in the currently acquired face image is recognized, a service page containing the user's service data can be displayed to the user, and the user can operate on that page according to actual needs. Specifically: the service page containing the user's service data is returned to the identified user, the user's operation data for the service page is obtained, and the corresponding service is executed according to that data.
For example, suppose a payment service requires identity verification by face recognition before payment can proceed. User A buys goods in a store and needs to pay with the payment service. The store collects user A's face image through a payment device, which retrieves the corresponding user features both locally and in the cloud server at the same time. The payment device finds the features locally first and identifies the user in the face image as user A from the locally retrieved features. It then returns and displays to user A a page containing the payment content; A confirms the content and clicks to confirm payment; the device obtains A's payment confirmation data and completes the payment accordingly.
Apparatus embodiment
Fig. 3 provides a face recognition apparatus 30 for the present embodiment, where the apparatus 30 includes:
an obtaining module 301, configured to obtain a face image to be recognized;
a retrieval module 302, configured to retrieve user characteristics corresponding to the face image in a local terminal and a cloud server at the same time, respectively;
and the identification module 303 is configured to identify a user corresponding to the face image according to a user feature retrieval result in the local terminal and the cloud server.
In one embodiment, the obtaining module 301 is specifically configured to collect an image of a user; determining a quality score corresponding to the image through a face detection algorithm; and taking the image with the quality score exceeding a preset threshold value as a face image to be recognized.
In an embodiment, the retrieving module 302 is specifically configured to, when the user feature corresponding to the facial image is retrieved first in the local terminal, identify a user corresponding to the facial image according to the user feature corresponding to the facial image retrieved by the local terminal; and when the user characteristics corresponding to the face image are retrieved in the cloud server firstly, identifying the user corresponding to the face image according to the user characteristics corresponding to the face image retrieved by the cloud server.
In one embodiment, the apparatus 30 further comprises:
a determining module 304, configured to acquire an environmental parameter to be subjected to face recognition before the obtaining module 301 obtains a face image to be recognized; and when the environmental parameters meet preset conditions, determining to execute simultaneous recognition based on the local terminal and the cloud server.
In one embodiment, the determining module 304 is further configured to determine to perform local identification based on the terminal according to the environment parameter when the environment parameter does not satisfy a preset condition; or determining to perform the identification based on the cloud server; or stopping executing the identification based on the local part of the terminal and stopping executing the identification based on the cloud server.
In one embodiment, the environmental parameters include: at least one of a system security state, an algorithm version state, a terminal local storage state, and a network operational state.
The device 30 further comprises:
a service module 305, configured to return a service page containing user service data to the identified user; acquiring the operation data of the user aiming at the service page; and executing corresponding services according to the operation data.
Apparatus embodiment
In this embodiment, a face recognition device 40 as shown in fig. 4 is further provided, where the face recognition device 40 includes the recognition apparatus 30 described in the apparatus embodiment of this specification; alternatively, the face recognition device 40 includes:
a memory for storing executable commands.
A processor for executing the method described in any of the method embodiments of the present specification under the control of executable commands stored in the memory.
Within the face recognition device, the subject that executes the method embodiments described above is a terminal device.
In one embodiment, any of the modules in the above apparatus embodiments may be implemented by a processor.
Computer-readable storage Medium embodiment
The present embodiments provide a computer-readable storage medium having stored therein an executable command that, when executed by a processor, performs a method described in any of the method embodiments of the present specification.
One or more embodiments of the present description may be a system, method, and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the specification.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer-readable program instructions for carrying out operations of the embodiments of the present specification may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can execute computer-readable program instructions to implement various aspects of the present specification by utilizing state information of the computer-readable program instructions to personalize the electronic circuit.
Aspects of the present description are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the description. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present description. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
The foregoing description of the embodiments of the present specification has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the application is defined by the appended claims.
Claims (10)
1. A face recognition method, comprising:
acquiring a face image to be recognized;
simultaneously retrieving user characteristics corresponding to the face image in the local terminal and in a cloud server, respectively;
and identifying the user corresponding to the face image according to the user characteristic retrieval results in the local terminal and the cloud server.
2. The method of claim 1, wherein acquiring a face image to be recognized comprises:
collecting an image of a user;
determining a quality score corresponding to the image through a face detection algorithm;
and taking the image with the quality score exceeding a preset threshold value as a face image to be recognized.
3. The method of claim 1, wherein identifying the user corresponding to the facial image according to the user feature retrieval result in the local terminal and the cloud server comprises:
when the user characteristics corresponding to the face image are retrieved in the local terminal first, identifying the user corresponding to the face image according to the user characteristics corresponding to the face image retrieved in the local terminal;
and when the user characteristics corresponding to the face image are retrieved in the cloud server first, identifying the user corresponding to the face image according to the user characteristics corresponding to the face image retrieved by the cloud server.
4. The method of claim 1, prior to acquiring a face image to be recognized, further comprising:
collecting environmental parameters of face recognition to be executed;
and when the environmental parameters meet preset conditions, determining to execute simultaneous recognition based on the local terminal and the cloud server.
5. The method of claim 4, when the environmental parameter does not satisfy a preset condition, the method further comprising:
determining to perform local identification based on the terminal according to the environment parameter; or
determining to perform identification based on the cloud server; or
stopping performing identification based on the local terminal and stopping performing identification based on the cloud server.
6. The method of claim 4, the environmental parameter comprising: at least one of a system security state, an algorithm version state, a terminal local storage state, and a network operational state.
7. The method according to any one of claims 1-6, further comprising:
returning a service page containing user service data to the identified user;
acquiring the operation data of the user aiming at the service page;
and executing corresponding services according to the operation data.
8. A face recognition apparatus comprising:
the acquisition module is used for acquiring a face image to be recognized;
the retrieval module is used for simultaneously retrieving the user characteristics corresponding to the face image in the local terminal and the cloud server respectively;
and the identification module is used for identifying the user corresponding to the face image according to the user characteristic retrieval results in the local terminal and the cloud server.
9. A face recognition device comprising the face recognition apparatus of claim 8, or the device comprising:
a memory for storing executable commands;
a processor for executing the face recognition method according to any one of claims 1-7 under the control of the executable command.
10. A computer-readable storage medium storing executable instructions that, when executed by a processor, perform the face recognition method of any one of claims 1-7.
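The recognition flow of claims 1–3 (quality-gated acquisition, then simultaneous retrieval in the local terminal and the cloud server, using whichever result is retrieved first) can be sketched as follows. This is a minimal illustration only; `quality_score`, `search_local`, and `search_cloud` are hypothetical stand-ins for a real face-detection algorithm and for the terminal-local and cloud-side feature stores, and the threshold value is assumed:

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

QUALITY_THRESHOLD = 0.8  # hypothetical preset threshold (claim 2)

def quality_score(image):
    # Stand-in for a face-detection algorithm's quality estimate.
    return 0.9

def search_local(image):
    # Stand-in for retrieval against the terminal-local feature store.
    return "user-123"

def search_cloud(image):
    # Stand-in for retrieval against the cloud server's feature store.
    return "user-123"

def recognize(image):
    # Claim 2: only an image whose quality score exceeds the preset
    # threshold is taken as the face image to be recognized.
    if quality_score(image) <= QUALITY_THRESHOLD:
        return None
    # Claim 1: retrieve in the local terminal and the cloud server
    # simultaneously; claim 3: identify the user from whichever
    # retrieval completes first.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = {pool.submit(search_local, image),
                   pool.submit(search_cloud, image)}
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()
```

In practice the two searches would query different media (on-device index vs. network round-trip), which is what makes the first-completed race meaningful.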
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010140394.6A CN110991431A (en) | 2020-03-03 | 2020-03-03 | Face recognition method, device, equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN110991431A (en) | 2020-04-10 |
Family
ID=70081296
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010140394.6A Pending CN110991431A (en) | 2020-03-03 | 2020-03-03 | Face recognition method, device, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110991431A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111768580A (en) * | 2020-06-30 | 2020-10-13 | 上海上实龙创智能科技股份有限公司 | Indoor anti-theft system and anti-theft method based on edge gateway |
| CN115659305A (en) * | 2022-12-27 | 2023-01-31 | 成都国星宇航科技股份有限公司 | Identity information identification method and system and electronic equipment |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108319944A (en) * | 2018-05-03 | 2018-07-24 | 山东汇贸电子口岸有限公司 | A kind of remote human face identification system and method |
| CN108564074A (en) * | 2018-06-26 | 2018-09-21 | 杭州车厘子智能科技有限公司 | A kind of timesharing car rental method for managing security and system based on FACEID |
| CN108875516A (en) * | 2017-12-12 | 2018-11-23 | 北京旷视科技有限公司 | Face identification method, device, system, storage medium and electronic equipment |
| CN109816838A (en) * | 2019-03-14 | 2019-05-28 | 福建票付通信息科技有限公司 | A kind of recognition of face gate and its ticket checking method |
| CN109934591A (en) * | 2019-03-21 | 2019-06-25 | 苏州迈荣祥信息科技有限公司 | A kind of method and mobile terminal ensureing safety of payment |
| KR20190136877A (en) * | 2018-05-31 | 2019-12-10 | 윈스로드(주) | Method and apparatus for obtaining face image by using ip camera |
| CN110738770A (en) * | 2019-09-25 | 2020-01-31 | 浙江大华技术股份有限公司 | Face recognition forbidden processing method, gate, control end and system |
- 2020-03-03: Application CN202010140394.6A filed in China (CN); published as CN110991431A, status Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11487503B2 (en) | Interactive control method and device for voice and video communications | |
| KR102248474B1 (en) | Voice command providing method and apparatus | |
| US9913246B1 (en) | Intelligent notification redirection | |
| CN108376546B (en) | Speech input method and electronic device and system for supporting the method | |
| US10558749B2 (en) | Text prediction using captured image from an image capture device | |
| KR102178892B1 (en) | Method for providing an information on the electronic device and electronic device thereof | |
| US10216404B2 (en) | Method of securing image data and electronic device adapted to the same | |
| CN107919129A (en) | Method and apparatus for controlling the page | |
| US11194401B2 (en) | Gesture control of internet of things devices | |
| US20200160748A1 (en) | Cognitive snapshots for visually-impaired users | |
| US20160048665A1 (en) | Unlocking an electronic device | |
| CN113378855A (en) | Method for processing multitask, related device and computer program product | |
| US10057358B2 (en) | Identifying and mapping emojis | |
| CN118298049B (en) | Multi-mode data generation method and multi-mode model training method | |
| US20180032748A1 (en) | Mobile device photo data privacy | |
| JP2021517297A (en) | Systems and methods for autofill field classification | |
| US11676599B2 (en) | Operational command boundaries | |
| CN110991431A (en) | Face recognition method, device, equipment and storage medium | |
| US20220108624A1 (en) | Reader assistance method and system for comprehension checks | |
| CN109086097B (en) | Method and device for starting small program, server and storage medium | |
| CN114978749A (en) | Login authentication method and system, storage medium and electronic equipment | |
| US20200089812A1 (en) | Updating social media post based on subsequent related social media content | |
| US20240013364A1 (en) | Image-based vehicle damage assessment method, apparatus and storage medium | |
| CN113807369B (en) | Target re-identification method and device, electronic equipment and storage medium | |
| CN110929241B (en) | Method and device for quickly starting small program, medium and electronic equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | |
Application publication date: 2020-04-10