
CN110163104B - Face detection method and device and electronic equipment - Google Patents

Face detection method and device and electronic equipment

Info

Publication number
CN110163104B
CN110163104B (Application CN201910313452.8A)
Authority
CN
China
Prior art keywords
image
floating layer
face detection
user interface
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910313452.8A
Other languages
Chinese (zh)
Other versions
CN110163104A (en)
Inventor
孙娜
周亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN201910313452.8A priority Critical patent/CN110163104B/en
Publication of CN110163104A publication Critical patent/CN110163104A/en
Application granted granted Critical
Publication of CN110163104B publication Critical patent/CN110163104B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a face detection method, a face detection apparatus, and an electronic device. The method includes: starting a camera to collect an image for face detection, wherein the image is preset to be invisible in a user interface; displaying a face detection state in a first floating layer of the user interface; and, when the image does not meet the face detection requirement, displaying the image in a second floating layer of the user interface, wherein the first floating layer and the second floating layer are the same floating layer or different floating layers.

Description

Face detection method and device and electronic equipment
Technical Field
The present application relates to the field of computer software technologies, and in particular, to a method and an apparatus for face detection, and an electronic device.
Background
Face recognition technology is currently applied in the Alipay app to scenarios such as identity verification, risk clearance, login, and payment, as well as to various offline scenarios such as access control, unlocking, and face-scan payment. The mainstream face-scanning scheme today is "looking into a mirror": while the algorithm runs, the user sees his or her own face on the screen and completes recognition as if looking into a mirror.
This "mirror" interaction makes it easy for the user to adjust the angle according to the camera picture and to perform cooperative actions (blinking, shaking the head, and so on) as prompted. However, it requires rendering a new page, which takes a long time, and it can give the user a poor experience in certain situations, for example in public places or when the user is not made up.
A face-scanning detection scheme is therefore needed that increases the startup speed of the face-scanning detection function and provides a relatively friendly face detection prompt.
Disclosure of Invention
The embodiments of the present application aim to provide a face detection method, a face detection apparatus, and an electronic device that offer a relatively friendly prompt for assisting payment while increasing the startup speed of the face-scanning detection function.
In order to solve the above technical problem, the embodiments of the present application are implemented as follows:
in a first aspect, a face detection method is provided, where the method includes:
starting a camera to collect images so as to detect the human face, wherein the images are preset to be invisible in a user interface;
displaying a face detection state in a first floating layer of the user interface;
when the image does not meet the human face detection requirement, displaying the image in a second floating layer of the user interface;
wherein, the first floating layer and the second floating layer are the same floating layer or different floating layers.
In a second aspect, a face detection apparatus is provided, the apparatus comprising:
an acquisition module, which starts a camera to collect images so as to detect human faces, wherein the images are preset to be invisible in a user interface;
the face detection module is used for carrying out face detection based on the acquired image;
the floating layer display module is used for displaying a face detection state in a first floating layer of the user interface, wherein the image is preset to be invisible in the user interface; and when the image does not meet the face detection requirement, displaying the image in a second floating layer of the user interface; wherein, the first floating layer and the second floating layer are the same floating layer or different floating layers.
In a third aspect, an electronic device is provided, which includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
starting a camera to collect images so as to detect the human face, wherein the images are preset to be invisible in a user interface;
displaying a face detection state in a first floating layer of the user interface;
when the image does not meet the human face detection requirement, displaying the image in a second floating layer of the user interface;
wherein, the first floating layer and the second floating layer are the same floating layer or different floating layers.
In a fourth aspect, a computer-readable storage medium is presented, the computer-readable storage medium storing one or more programs that, when executed by an electronic device that includes a plurality of application programs, cause the electronic device to:
starting a camera to collect images so as to detect the human face, wherein the images are preset to be invisible in a user interface;
displaying a face detection state in a first floating layer of the user interface;
when the image does not meet the human face detection requirement, displaying the image in a second floating layer of the user interface;
wherein, the first floating layer and the second floating layer are the same floating layer or different floating layers.
As can be seen from the technical solutions provided in the embodiments of the present application, the embodiments of the present application have at least one of the following technical effects:
during face detection, the camera is started to collect images that are preset to an invisible state, and the face detection state is displayed on a floating layer of the user interface; this reduces the content that must be rendered on the page and increases the startup speed of the face detection function. When no image meeting the face detection requirement is obtained after the face detection function starts, the image is displayed in a floating layer of the user interface so that the user can adjust the face position and continue detection; this avoids the drawback of text-only prompts, with which the user cannot accurately adjust the position of the face in the camera view or perform the specific cooperative actions required for detection, and it provides the user with a friendly face detection prompt.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a face detection method according to an embodiment of the present application.
Fig. 2 is a flowchart of an implementation of a face detection method according to an embodiment of the present application.
Fig. 3-5 are schematic views of a specific implementation scenario of face detection according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a face detection apparatus according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a face detection method and device and electronic equipment.
In order to enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of a face detection method according to an embodiment of the present application. The method of fig. 1 may include:
and S110, starting a camera to collect images so as to detect the human face, wherein the images are preset to be invisible in a user interface.
It should be understood that, in the embodiment of the present application, the face detection operation may be triggered in various ways, which is not limited in the embodiment of the present application.
After the face detection operation is triggered, a camera of the terminal equipment can be started to collect images so as to detect the face.
It should be appreciated that the initial state of the image in the user interface of the terminal device is not visible after the camera is started to capture the image. The image collected by the camera is preset to be in an invisible state, so that the rendering content of the page can be reduced, and the opening speed of the face detection function is increased.
Specifically, in the embodiments of the present application, the invisibility of the image can be achieved in various ways.
For example, optionally, as an embodiment, the image is hidden in a second floating layer of the user interface when the camera is started.
For another example, optionally, as another embodiment, the image is displayed in a second floating layer of the user interface and provided with a covering layer when the camera is started.
It should be understood that a masking layer is a view used to occlude a target object. For an image, the masking layer of the image is an occlusion view set to occlude that image; from a rendering perspective, the masking layer is rendered above the target object's layer. In general, the masking layer may be an opaque layer or a layer with low transparency. In a specific implementation, taking the Android system as an example, a masking layer can be set up with a GuideView object; of course, other implementations of the masking function may also be adopted, and the specific implementation can refer to the prior art and is not described in detail here.
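As a rough illustration only (not the implementation of this application), the following Kotlin sketch shows how a masking layer of this kind might be stacked over a camera preview inside a floating layer on Android; the names attachMaskedPreview, floatingLayer, previewView and maskView are assumptions introduced here for illustration.

import android.graphics.Color
import android.view.View
import android.view.ViewGroup
import android.widget.FrameLayout

// Minimal sketch: add the camera preview to the floating layer, then cover it
// with an opaque mask view so frames can be collected while nothing is visible.
fun attachMaskedPreview(floatingLayer: FrameLayout, previewView: View): View {
    val maskView = View(floatingLayer.context).apply {
        setBackgroundColor(Color.WHITE) // opaque, so it fully occludes the preview
    }
    floatingLayer.addView(
        previewView,
        FrameLayout.LayoutParams(
            ViewGroup.LayoutParams.MATCH_PARENT,
            ViewGroup.LayoutParams.MATCH_PARENT
        )
    )
    // Added after the preview, so the mask is rendered above the preview's layer.
    floatingLayer.addView(
        maskView,
        FrameLayout.LayoutParams(
            ViewGroup.LayoutParams.MATCH_PARENT,
            ViewGroup.LayoutParams.MATCH_PARENT
        )
    )
    return maskView // kept so it can later be removed or faded out
}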
Of course, other implementations are possible, and the embodiments of the present application are not limited thereto.
S120, displaying a face detection state in a first floating layer of the user interface.
In the embodiment of the application, when the face detection is carried out, the face detection state can be displayed in the floating layer of the user interface so as to carry out face detection reminding on the user.
Of course, it should be understood that, in the embodiment of the present application, step S110 and step S120 are not bound to a particular order; they may be executed in the reverse order, in parallel, and so on.
S130, when the image does not meet the requirement of face detection, displaying the image in a second floating layer of the user interface.
It should be understood that, in the embodiment of the present application, the image is displayed in the second floating layer of the user interface so that the user can adjust the face position and then continue face detection, rather than being left unsure how to adjust.
It should be understood that, in the embodiment of the present application, the first floating layer and the second floating layer of the user interface may be the same floating layer, or may be different floating layers.
In the embodiment of the present application, when face detection is performed, the camera is started to collect images that are preset to an invisible state, and the face detection state is displayed on a floating layer of the user interface; this reduces the content that must be rendered on the page and increases the startup speed of the face detection function. When no image meeting the face detection requirement is obtained after the face detection function starts, the image is displayed in a floating layer of the user interface so that the user can adjust the face position and continue detection; this avoids the drawback of text-only prompts, with which the user cannot accurately adjust the position of the face in the camera view or perform the specific cooperative actions required for detection, and it provides the user with a friendly face detection prompt.
Optionally, as an embodiment, for a scheme that the image is hidden in a second floating layer of the user interface when the camera is started, the image in the second floating layer may be displayed in a manner of switching from a hidden state to a display state.
Optionally, as another embodiment, for a scheme that the image is displayed in a second floating layer of the user interface when the camera is started and a cover layer is provided, the image may be displayed in the second floating layer by removing the cover layer on the second floating layer.
Optionally, as still another embodiment, for a scheme that the image is displayed in a second floating layer of the user interface and a cover layer is provided when the camera is started, the transparency of the cover layer on the second floating layer may be further reduced so that the image of the second floating layer becomes visible.
It should be understood that the lower the transparency is, the weaker the masking effect of the masking layer is, and when the transparency is 0, the masking layer is completely transparent.
It should be understood that in the embodiments of the present application, the transparency of the masking layer may be adjusted down to a preset threshold, for example, 0.3, 0, etc.; alternatively, the transparency of the mask layer may be gradually reduced based on a preset operation, for example, the transparency of the mask layer is adjusted by a mouse wheel, and the like.
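As a non-limiting Kotlin sketch (reusing the hypothetical previewView and maskView names from the masking example above), the three ways of making the image visible described in the preceding embodiments might look as follows.

import android.view.View
import android.view.ViewGroup

// (a) The image was hidden: switch it from the hidden state to the display state.
fun revealByVisibility(previewView: View) {
    previewView.visibility = View.VISIBLE
}

// (b) The image was covered: remove the masking layer from the second floating layer.
fun revealByRemovingMask(maskView: View) {
    (maskView.parent as? ViewGroup)?.removeView(maskView)
}

// (c) The image was covered: lower the mask's transparency value towards a preset
//     threshold such as 0.3 or 0 (0 meaning the mask is completely transparent).
fun revealByFadingMask(maskView: View, target: Float = 0f) {
    maskView.animate().alpha(target).setDuration(300L).start()
}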
It should be understood that, in the embodiment of the present application, an image meeting the requirement of face detection may specifically include multiple implementation scenarios.
Optionally, as an embodiment, meeting the face detection requirement includes one or more of the following conditions:
the face posture in the image is a preset posture;
the light intensity in the image is within a predetermined light intensity range;
the size of the face in the image is larger than a preset face threshold value;
the face proportion in the image is higher than the preset proportion.
For example, if the face pose in the image is not satisfactory, such as a side face, the image acquired by the image acquisition interface needs to be displayed in a floating layer of the page.
For another example, if the light intensity of the image is too dark or too bright, which may cause the image quality to be too poor to be recognized, the image captured by the image capturing interface needs to be displayed in the floating layer of the page to remind the user to perform adjustment.
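The following Kotlin sketch shows one possible way of combining the conditions listed above into a single check; the FaceFrame fields and the threshold constants are illustrative assumptions, since the embodiments only state that these values are preset.

// Hypothetical per-frame measurements produced by the face detection algorithm.
data class FaceFrame(
    val poseIsPreset: Boolean,  // face pose matches the preset pose (e.g. frontal, not a side face)
    val lightIntensity: Float,  // light intensity measured on the image
    val faceSizePx: Int,        // size of the detected face in pixels
    val faceRatio: Float        // proportion of the image occupied by the face
)

// Assumed preset thresholds; the actual values are implementation-specific.
const val MIN_LIGHT = 40f
const val MAX_LIGHT = 220f
const val MIN_FACE_SIZE_PX = 120
const val MIN_FACE_RATIO = 0.2f

fun meetsFaceDetectionRequirement(frame: FaceFrame): Boolean =
    frame.poseIsPreset &&
        frame.lightIntensity in MIN_LIGHT..MAX_LIGHT &&
        frame.faceSizePx > MIN_FACE_SIZE_PX &&
        frame.faceRatio > MIN_FACE_RATIO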
Optionally, as an embodiment, the image meeting the face detection requirement includes: when the face detection is living body (liveness) detection, an image in which the user performs the living body detection action.
For example, when a user is required to blink or nod his head, the image may be presented in the second floating layer.
It should be understood that, to avoid the situation where the time available to obtain an image meeting the face detection requirement is too short, a minimum time before displaying the image may be agreed upon. Optionally, as an embodiment, when the image still does not meet the face detection requirement after a preset time, the image is displayed in a second floating layer of the user interface.
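A minimal Kotlin sketch of such a minimum-time agreement is shown below; the RevealGate class, the revealImage callback, and the 2000 ms default are all assumptions made for illustration.

import android.os.Handler
import android.os.Looper

// Reveal the camera image in the second floating layer only if no image meeting
// the face detection requirement has been obtained within the preset time.
class RevealGate(
    private val revealImage: () -> Unit,   // shows the image in the second floating layer
    private val presetTimeMs: Long = 2000L // assumed preset time
) {
    @Volatile private var requirementMet = false
    private val handler = Handler(Looper.getMainLooper())

    fun start() {
        handler.postDelayed({
            if (!requirementMet) revealImage() // still not meeting the requirement
        }, presetTimeMs)
    }

    fun onRequirementMet() { // called by the detection algorithm for a qualifying image
        requirementMet = true
    }
}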
It should be understood that to enhance page friendliness, the face detection status may also be displayed in the first and/or second floating layers when the image is shown in the second floating layer.
It can be appreciated that the switching of the displayed face detection state may be gradual, for example with a gradual animation effect, so that the prompt is displayed in a friendlier way.
It should be understood that after step S120, the method may further include: and after the face detection is successful, closing or hiding the image acquisition interface.
Fig. 2 is a flowchart of an implementation of a face detection method according to an embodiment of the present application. The method of the embodiment of the present application is described below with reference to fig. 2 from an algorithm level and an interaction level, respectively.
After the face-scanning detection operation is triggered, the algorithm layer starts the camera and the face detection algorithm, while the interaction layer displays the floating layer on the user interface and shows a prompt on it indicating that face detection is in progress, such as the text "face detection in progress".
When the algorithm layer detects that the user needs to cooperate by adjusting, the interaction layer displays the camera picture on the floating layer of the user interface with a gradual transition, so that the camera picture assists the user in completing face detection.
When detection succeeds at the algorithm layer, face recognition and comparison can be performed; correspondingly, once face detection succeeds at the interaction layer, the camera picture can be hidden and a prompt such as "processing" is displayed on the floating layer of the user interface.
After the algorithm layer returns the face recognition result, the interaction layer can output the recognition result and close the floating layer.
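One way to express this algorithm-layer / interaction-layer split in code is through a callback interface, sketched below in Kotlin; the interface and method names are illustrative and do not appear in the application.

// Events reported by the algorithm layer; the interaction layer implements this
// interface and updates the floating layer of the user interface accordingly.
interface FaceDetectionListener {
    fun onDetectionStarted()                  // show "face detection in progress" on the floating layer
    fun onUserAdjustmentNeeded()              // reveal the camera picture with a gradual transition
    fun onDetectionSucceeded()                // hide the camera picture and show a "processing" prompt
    fun onRecognitionResult(success: Boolean) // output the result and close the floating layer
}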
It should be understood that the specific implementation of the scenario shown in fig. 2 in the present application may refer to the specific implementation scheme in the embodiment shown in fig. 1, and details of the embodiment of the present application are not repeated herein.
In order to facilitate understanding of the technical solution of the embodiment of the present application, fig. 3 to fig. 5 show a scene diagram of a specific implementation of face detection.
Fig. 3 shows the first stage of face detection: the face is not yet shown, a breathing animation is displayed, and text prompts guide the user to adjust the pose. As shown in fig. 3, when face detection starts, the floating layer of the user interface displays "face detection in progress", and the user is then prompted to adjust the posture with text such as "please move a little farther away" and "please face the mobile phone".
Fig. 4 shows the second stage of face detection: the face is shown, the camera picture is displayed on the floating layer of the user interface, and the breathing animation guides the user into alignment. The gradual switching of the displayed face detection state can take the form of the breathing animation of fig. 4, in which the state is indicated by two semicircles on the floating layer and a pattern between them (replaced here by a horizontal line to avoid showing sensitive information). In a specific implementation, the first stage lights half of each of the two semicircles of the floating layer (in the first drawing of fig. 4 the semicircles are shown in gray to indicate lighting), the second stage lights the two semicircles completely, and the third stage lights the pattern between the two semicircles. Of course, other gradual-change schemes for the face detection state are possible, and the embodiments of the present application do not limit this.
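A loose Kotlin sketch of the staged lighting effect follows, approximating the three stages with alpha animation on the two semicircle views and the centre pattern; the view names and durations are assumptions.

import android.animation.AnimatorSet
import android.animation.ObjectAnimator
import android.view.View

// Stage 1: partially light the two semicircles; stage 2: light them fully;
// stage 3: light the pattern between them.
fun playBreathingStages(leftSemicircle: View, rightSemicircle: View, centrePattern: View) {
    fun fadeTo(v: View, alpha: Float) =
        ObjectAnimator.ofFloat(v, View.ALPHA, alpha).setDuration(400L)

    AnimatorSet().apply {
        playSequentially(
            AnimatorSet().apply { playTogether(fadeTo(leftSemicircle, 0.5f), fadeTo(rightSemicircle, 0.5f)) },
            AnimatorSet().apply { playTogether(fadeTo(leftSemicircle, 1f), fadeTo(rightSemicircle, 1f)) },
            fadeTo(centrePattern, 1f)
        )
        start()
    }
}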
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 6, at the hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming the face detection apparatus at the logical level. The processor is configured to execute the program stored in the memory, and is specifically configured to perform the following operations:
starting a camera to collect images so as to detect the human face, wherein the images are preset to be invisible in a user interface;
displaying a face detection state in a first floating layer of the user interface;
when the image does not meet the human face detection requirement, displaying the image in a second floating layer of the user interface;
the first floating layer and the second floating layer are the same floating layer or different floating layers.
The method executed by the face detection apparatus according to the embodiment shown in fig. 1 of the present application may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logical blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware.
The electronic device may also execute the method shown in fig. 1 and implement the functions of the face detection apparatus in the embodiments shown in fig. 1 and fig. 2, which are not described herein again in this embodiment of the present application.
Of course, besides the software implementation, the electronic device of the present application does not exclude other implementations, such as a logic device or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or a logic device.
Embodiments of the present application also provide a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which when executed by a portable electronic device including a plurality of application programs, enable the portable electronic device to perform the method of the embodiment shown in fig. 1, and are specifically configured to:
starting a camera to collect images so as to detect the human face, wherein the images are preset to be invisible in a user interface;
displaying a face detection state in a first floating layer of the user interface;
when the image does not meet the human face detection requirement, displaying the image in a second floating layer of the user interface;
wherein, the first floating layer and the second floating layer are the same floating layer or different floating layers.
Fig. 7 is a schematic structural diagram of a face detection apparatus according to an embodiment of the present application. Referring to fig. 7, in a software implementation, a face detection apparatus 700 may include:
the acquisition module 710 starts a camera to acquire an image for face detection, wherein the image is preset to be invisible in a user interface;
a face detection module 720, which performs face detection based on the collected image;
a floating layer display module 730, configured to display a face detection state in a first floating layer of the user interface, where the image is preset as invisible in the user interface; and when the image does not meet the face detection requirement, displaying the image in a second floating layer of the user interface; wherein, the first floating layer and the second floating layer are the same floating layer or different floating layers.
In the embodiment of the present application, after the face detection operation is triggered, the face detection apparatus 700 prompts the user with the face detection state on the floating layer of the current page, starts the camera to collect images that are preset to an invisible state, and displays the face detection state on the floating layer of the user interface; this reduces the content that must be rendered on the page and increases the startup speed of the face detection function. When no image meeting the face detection requirement is obtained after the face detection function starts, the image is displayed in a floating layer of the user interface so that the user can adjust the face position and continue detection; this avoids the drawback of text-only prompts, with which the user cannot accurately adjust the position of the face in the camera view or perform the specific cooperative actions required for detection, and it provides the user with a friendly face detection prompt.
The face detection apparatus 700 may also execute the method in fig. 1, and implement the functions of the face detection apparatus in the embodiments shown in fig. 1 and fig. 2, which are not described herein again in this embodiment of the present application.
In short, the above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the system embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.

Claims (13)

1. A face detection method, comprising:
starting a camera to collect images so as to detect the human face, wherein the images are preset to be invisible in a user interface;
displaying a face detection state in a first floating layer of the user interface;
when the image does not meet the human face detection requirement, displaying the image in a second floating layer of the user interface;
the first floating layer and the second floating layer are the same floating layer or different floating layers.
2. The method of claim 1, wherein
meeting the face detection requirement includes one or more of the following conditions:
the face posture in the image is a preset posture;
the light intensity in the image is within a predetermined light intensity range;
the size of the face in the image is larger than a preset face threshold value;
the face proportion in the image is higher than a preset proportion.
3. The method of claim 1, wherein presenting the image in a second floating layer of the user interface when the image does not meet the face detection requirement comprises:
and when the image does not meet the requirement of face detection all the time within the preset time, displaying the image in a second floating layer of the user interface.
4. The method of claim 1, wherein the image meeting the face detection requirement comprises: an image in which the user performs a living body detection action when the face detection is living body detection.
5. The method of claim 1, wherein displaying the image in a second floating layer of the user interface comprises:
and removing the covering layer on the second floating layer, wherein when the camera is started to collect the image, the image is displayed on the second floating layer and is provided with the covering layer.
6. The method of claim 1, presenting the image in a second floating layer of the user interface, comprising:
and reducing the transparency of the covering layer on the second floating layer, wherein the image is displayed on the second floating layer and provided with the covering layer when the camera is started to collect the image.
7. The method of claim 1, presenting the image in a second floating layer of the user interface, comprising:
and switching the image in the second floating layer from a hidden state to a display state, wherein the image is preset to be hidden in the second floating layer when the camera is started to collect the image.
8. The method of claim 1, further comprising:
and when the image is displayed in the second floating layer, displaying a face detection state on the first floating layer or the second floating layer.
9. The method of claim 1 or 8, wherein the switching of the displayed face detection state is gradual.
10. The method of any one of claims 1-8, further comprising:
and after the face detection is successful, closing the camera or hiding the image.
11. A face detection apparatus comprising:
an acquisition module, which starts a camera to collect images so as to detect human faces, wherein the images are preset to be invisible in a user interface;
the face detection module is used for carrying out face detection based on the acquired image;
the floating layer display module is used for displaying a face detection state in a first floating layer of the user interface, wherein the image is preset to be invisible in the user interface; and when the image does not meet the face detection requirement, displaying the image in a second floating layer of the user interface; the first floating layer and the second floating layer are the same floating layer or different floating layers.
12. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
starting a camera to collect images so as to detect the human face, wherein the images are preset to be invisible in a user interface;
displaying a face detection state in a first floating layer of the user interface, wherein the image is preset to be invisible in the user interface;
when the image does not meet the human face detection requirement, displaying the image in a second floating layer of the user interface;
wherein, the first floating layer and the second floating layer are the same floating layer or different floating layers.
13. A computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to:
starting a camera to collect images so as to detect the human face, wherein the images are preset to be invisible in a user interface;
displaying a face detection state in a first floating layer of the user interface;
when the image does not meet the human face detection requirement, displaying the image in a second floating layer of the user interface;
wherein, the first floating layer and the second floating layer are the same floating layer or different floating layers.
CN201910313452.8A 2019-04-18 2019-04-18 Face detection method and device and electronic equipment Active CN110163104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910313452.8A CN110163104B (en) 2019-04-18 2019-04-18 Face detection method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910313452.8A CN110163104B (en) 2019-04-18 2019-04-18 Face detection method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110163104A CN110163104A (en) 2019-08-23
CN110163104B true CN110163104B (en) 2023-02-17

Family

ID=67639510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910313452.8A Active CN110163104B (en) 2019-04-18 2019-04-18 Face detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110163104B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868677A (en) * 2015-01-19 2016-08-17 阿里巴巴集团控股有限公司 Live human face detection method and device
CN107862189A (en) * 2017-10-26 2018-03-30 广东欧珀移动通信有限公司 Unlocking method and related product
CN109063604A (en) * 2018-07-16 2018-12-21 阿里巴巴集团控股有限公司 A kind of face identification method and terminal device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7738015B2 (en) * 1997-10-09 2010-06-15 Fotonation Vision Limited Red-eye filter method and apparatus
US7606417B2 (en) * 2004-08-16 2009-10-20 Fotonation Vision Limited Foreground/background segmentation in digital images with differential exposure calculations

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868677A (en) * 2015-01-19 2016-08-17 阿里巴巴集团控股有限公司 Live human face detection method and device
CN107862189A (en) * 2017-10-26 2018-03-30 广东欧珀移动通信有限公司 Unlocking method and related product
CN109063604A (en) * 2018-07-16 2018-12-21 阿里巴巴集团控股有限公司 A kind of face identification method and terminal device

Also Published As

Publication number Publication date
CN110163104A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
US11210541B2 (en) Liveness detection method, apparatus and computer-readable storage medium
TWI786291B (en) Face recognition method, terminal device, and computer-readable storage medium
CN108566516B (en) Image processing method, device, storage medium and mobile terminal
US11295149B2 (en) Living body detection method, apparatus and device
CN108280431B (en) Face recognition processing method, face recognition processing device and intelligent terminal
CN113313026B (en) Face recognition interaction method, device and equipment based on privacy protection
GB2589996A (en) Multiple Face Tracking Method For Facial Special Effect, Apparatus And Electronic Device
US9846956B2 (en) Methods, systems and computer-readable mediums for efficient creation of image collages
TWI752473B (en) Image processing method and apparatus, electronic device and computer-readable storage medium
CN112560530B (en) Two-dimensional code processing method, device, medium and electronic device
CN104902143B (en) A kind of image de-noising method and device based on resolution ratio
CN107690804B (en) Image processing method and user terminal
JP6373446B2 (en) Program, system, apparatus and method for selecting video frame
CN110163104B (en) Face detection method and device and electronic equipment
CN108564537B (en) Image processing method, image processing device, electronic equipment and medium
CN111373409B (en) Method and terminal for obtaining color value change
WO2025031315A1 (en) Video processing method and apparatus, computer device, and storage medium
HK40013031B (en) Face detection method and device and electronic equipment
HK40013031A (en) Face detection method and device and electronic equipment
CN109089042B (en) Image processing method identification method, device, storage medium and mobile terminal
CN110059576A (en) Screening technique, device and the electronic equipment of picture
CN105791689A (en) An image processing method and device
CN109035136A (en) Image processing method and device, storage medium
CN114973426B (en) Living body detection method, device and equipment
CN107276974B (en) Information processing method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40013031

Country of ref document: HK

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

Effective date of registration: 20200923

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Islands

Applicant before: Advanced innovation technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant