Disclosure of Invention
The embodiments of the present application aim to provide a face detection method, a face detection apparatus, and an electronic device, so as to provide a friendlier auxiliary payment prompt while increasing the startup speed of the face-scanning detection function.
In order to solve the above technical problem, the embodiments of the present application are implemented as follows:
in a first aspect, a face detection method is provided, where the method includes:
starting a camera to capture an image for face detection, where the image is preset to be invisible in a user interface;
displaying a face detection state in a first floating layer of the user interface; and
when the image does not meet the face detection requirement, displaying the image in a second floating layer of the user interface,
where the first floating layer and the second floating layer are the same floating layer or different floating layers.
In a second aspect, a face detection apparatus is provided, the apparatus comprising:
an acquisition module, which starts a camera to capture an image for face detection, where the image is preset to be invisible in a user interface;
a face detection module, which performs face detection based on the captured image; and
a floating layer display module, which displays a face detection state in a first floating layer of the user interface, and, when the image does not meet the face detection requirement, displays the image in a second floating layer of the user interface, where the first floating layer and the second floating layer are the same floating layer or different floating layers.
In a third aspect, an electronic device is provided, which includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
start a camera to capture an image for face detection, where the image is preset to be invisible in a user interface;
display a face detection state in a first floating layer of the user interface; and
when the image does not meet the face detection requirement, display the image in a second floating layer of the user interface,
where the first floating layer and the second floating layer are the same floating layer or different floating layers.
In a fourth aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to:
start a camera to capture an image for face detection, where the image is preset to be invisible in a user interface;
display a face detection state in a first floating layer of the user interface; and
when the image does not meet the face detection requirement, display the image in a second floating layer of the user interface,
where the first floating layer and the second floating layer are the same floating layer or different floating layers.
As can be seen from the technical solutions provided above, the embodiments of the present application have at least one of the following technical effects:
During face detection, a camera is started to capture an image, the image is preset to an invisible state, and the face detection state is displayed in a floating layer of the user interface. This reduces the content that the page has to render and increases the startup speed of the face detection function. When no image meeting the face detection requirement is obtained after the face detection function is started, the image is displayed in a floating layer of the user interface so that the user can adjust the face position and continue face detection. This avoids the drawback of text-only prompts, with which the user cannot accurately adjust the position of the face relative to the camera or does not know which cooperative action the detection requires, and thus provides the user with a friendly face detection prompt.
Detailed Description
The embodiment of the application provides a face detection method and device and electronic equipment.
To help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of a face detection method according to an embodiment of the present application. The method of fig. 1 may include:
S110: start a camera to capture an image for face detection, where the image is preset to be invisible in a user interface.
It should be understood that, in the embodiment of the present application, the face detection operation may be triggered in various ways, which is not limited in the embodiment of the present application.
After the face detection operation is triggered, a camera of the terminal device may be started to capture an image for face detection.
It should be appreciated that after the camera is started to capture the image, the initial state of the image in the user interface of the terminal device is invisible. Presetting the captured image to an invisible state reduces the content that the page has to render and increases the startup speed of the face detection function.
Specifically, in the embodiments of the present application, the invisibility of the image can be achieved in various ways.
For example, optionally, as an embodiment, the image is hidden in a second floating layer of the user interface when the camera is started.
For another example, optionally, as another embodiment, when the camera is started, the image is displayed in a second floating layer of the user interface and covered with a masking layer.
It should be understood that a masking layer is a view used to occlude a target object. For an image, the masking layer of the image is an occlusion view set to occlude the image. From a rendering perspective, the masking layer is rendered above the layer of the target object. In general, the masking layer may be opaque or have low transparency. In a specific implementation, taking the Android system as an example, a masking layer can be set using a guideView object; of course, other implementations may also achieve the function of the masking layer, and the details, which follow existing techniques, are not described here.
Of course, other implementations are possible, and the embodiments of the present application are not limited thereto.
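The two invisibility strategies above (starting the image layer hidden, or starting it shown but fully occluded by a masking layer) can be sketched as follows. This is a minimal, toolkit-free model; the class and field names are hypothetical and do not correspond to any real UI framework.

```python
# Hypothetical model of the two strategies for keeping the camera image
# invisible when the detection page opens. Not real UI-toolkit code.

class FloatingLayer:
    """A floating layer holding the camera image."""

    def __init__(self, strategy: str):
        if strategy == "hidden":
            # Strategy (a): the layer itself starts hidden; no mask needed.
            self.hidden = True
            self.mask_alpha = 0.0
        elif strategy == "masked":
            # Strategy (b): the layer is shown but fully occluded by a mask.
            self.hidden = False
            self.mask_alpha = 1.0  # fully opaque masking layer
        else:
            raise ValueError(f"unknown strategy: {strategy}")

    def image_visible(self) -> bool:
        """The image is visible only if shown and not fully occluded."""
        return not self.hidden and self.mask_alpha < 1.0

    def reveal(self) -> None:
        """Make the image visible: un-hide the layer and drop the mask."""
        self.hidden = False
        self.mask_alpha = 0.0


# In both strategies the image starts invisible, so nothing has to be
# rendered for it when the detection page first opens.
for strategy in ("hidden", "masked"):
    layer = FloatingLayer(strategy)
    assert not layer.image_visible()
    layer.reveal()
    assert layer.image_visible()
```

Either strategy achieves the same initial effect; they differ only in how the image is later revealed, as discussed below.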
S120: display a face detection state in a first floating layer of the user interface.
In the embodiment of the present application, during face detection, the face detection state may be displayed in a floating layer of the user interface to remind the user of the detection progress.
Of course, it should be understood that steps S110 and S120 are not limited to a chronological order; they may be executed in reverse order, in parallel, and so on.
S130: when the image does not meet the face detection requirement, display the image in a second floating layer of the user interface.
It should be understood that displaying the image in the second floating layer of the user interface lets the user adjust the face position and continue face detection, and prevents the user from not knowing how to make the adjustment.
It should be understood that, in the embodiment of the present application, the first floating layer and the second floating layer of the user interface may be the same floating layer, or may be different floating layers.
In the embodiment of the present application, during face detection, the camera is started to capture an image, the image is preset to an invisible state, and the face detection state is displayed in a floating layer of the user interface. This reduces the content that the page has to render and increases the startup speed of the face detection function. When no image meeting the face detection requirement is obtained after the face detection function is started, the image is displayed in a floating layer of the user interface so that the user can adjust the face position and continue face detection. This avoids the drawback of text-only prompts, with which the user cannot accurately adjust the position of the face relative to the camera or does not know which cooperative action the detection requires, and thus provides the user with a friendly face detection prompt.
Optionally, as an embodiment, for the scheme in which the image is hidden in a second floating layer of the user interface when the camera is started, the image in the second floating layer may be displayed by switching it from the hidden state to the displayed state.
Optionally, as another embodiment, for the scheme in which the image is displayed in a second floating layer of the user interface and covered with a masking layer when the camera is started, the image may be displayed in the second floating layer by removing the masking layer on the second floating layer.
Optionally, as still another embodiment, for the scheme in which the image is displayed in a second floating layer of the user interface and covered with a masking layer when the camera is started, the transparency of the masking layer on the second floating layer may instead be reduced so that the image in the second floating layer becomes visible.
It should be understood that the lower the transparency value of the masking layer, the weaker its masking effect; when the transparency value is 0, the masking layer is completely transparent.
It should be understood that in the embodiments of the present application, the transparency of the masking layer may be lowered to a preset threshold, for example, 0.3 or 0; alternatively, the transparency of the masking layer may be reduced gradually based on a preset operation, for example, adjusted with a mouse wheel, and so on.
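The two mask-adjustment schemes above can be sketched as follows, using the document's convention that a "transparency" value of 0 means fully transparent. The function names and the 0.3/0.1 values are illustrative, not from any real UI framework.

```python
# Hedged sketch of the two mask-adjustment schemes; names are hypothetical.

def set_mask_transparency(current: float, threshold: float = 0.3) -> float:
    """Scheme 1: drop the mask transparency directly to a preset threshold."""
    return min(current, threshold)

def step_mask_transparency(current: float, step: float = 0.1) -> float:
    """Scheme 2: lower the mask gradually, e.g. one step per user operation
    such as a scroll-wheel tick, never going below fully transparent (0)."""
    return max(0.0, current - step)

# Scheme 1: an opaque mask (1.0) jumps straight to the 0.3 threshold.
assert set_mask_transparency(1.0) == 0.3

# Scheme 2: repeated operations fade the mask out step by step.
value = 0.25
value = step_mask_transparency(value)   # ~0.15
value = step_mask_transparency(value)   # ~0.05
value = step_mask_transparency(value)   # clamped at 0.0
assert value == 0.0
```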
It should be understood that, in the embodiment of the present application, an image meeting the requirement of face detection may specifically include multiple implementation scenarios.
Optionally, as an embodiment, meeting the face detection requirement includes one or more of the following conditions:
the face pose in the image is a preset pose;
the light intensity in the image is within a preset light intensity range;
the size of the face in the image is larger than a preset face size threshold; and
the proportion of the face in the image is higher than a preset proportion.
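The conditions listed above can be combined into a single check, as in the sketch below. The parameter names and threshold values are hypothetical; in a real implementation they would come from the detection algorithm's output and configuration.

```python
# Minimal sketch of checking the conditions above; fields and thresholds
# are illustrative assumptions, not from the original disclosure.

def meets_detection_requirement(
    pose: str,
    light: float,
    face_size: int,
    face_ratio: float,
    *,
    required_pose: str = "frontal",
    light_range: tuple = (50.0, 200.0),
    min_face_size: int = 96,
    min_face_ratio: float = 0.2,
) -> bool:
    """Return True only if every configured condition holds."""
    return (
        pose == required_pose                          # preset pose
        and light_range[0] <= light <= light_range[1]  # light within range
        and face_size > min_face_size                  # face above size threshold
        and face_ratio > min_face_ratio                # face fills enough of image
    )

# A frontal, well-lit, sufficiently large face passes...
assert meets_detection_requirement("frontal", 120.0, 128, 0.35)
# ...while a side face fails, which would trigger showing the camera image.
assert not meets_detection_requirement("side", 120.0, 128, 0.35)
```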
For example, if the face pose in the image is unsatisfactory, such as a side face, the image captured by the image acquisition interface needs to be displayed in a floating layer of the page.
For another example, if the light in the image is too dark or too bright, the image quality may be too poor to be recognized, and the image captured by the image acquisition interface needs to be displayed in the floating layer of the page to remind the user to make an adjustment.
Optionally, as an embodiment, an image meeting the face detection requirement includes: when the face detection is liveness detection, an image in which the user performs the liveness detection action.
For example, when the user is required to blink or nod, the image may be presented in the second floating layer.
It should be understood that, to avoid showing the image before there has been enough time to obtain one that meets the face detection requirement, a minimum time before displaying the image may be agreed upon. Optionally, as an embodiment, when the image has not met the face detection requirement throughout a preset time, the image is displayed in a second floating layer of the user interface.
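The minimum-time agreement above can be sketched as a small predicate. Times are plain numbers (seconds) to keep the sketch deterministic; real code would use a monotonic clock, and the 3-second window is an assumed example.

```python
# Sketch of the preset-time rule: show the camera image only if no qualifying
# frame has arrived within the window. The 3.0-second default is hypothetical.

def should_show_image(start_time: float, now: float,
                      frame_ok: bool, preset_time: float = 3.0) -> bool:
    """Show the image only when the frame still fails the requirement
    after the preset time has fully elapsed."""
    return (not frame_ok) and (now - start_time >= preset_time)

assert not should_show_image(0.0, 1.0, frame_ok=False)  # too early: keep waiting
assert not should_show_image(0.0, 5.0, frame_ok=True)   # requirement met: no need
assert should_show_image(0.0, 3.5, frame_ok=False)      # timed out: show image
```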
It should be understood that, to make the page friendlier, the face detection state may also be displayed in the first and/or second floating layer while the image is shown in the second floating layer.
It should be understood that, to make the reminder display friendlier, the switching of the displayed face detection state may be gradual, for example, with a gradual animation effect.
It should be understood that, after step S120, the method may further include: after the face detection succeeds, closing or hiding the image acquisition interface.
Fig. 2 is a flowchart of an implementation of a face detection method according to an embodiment of the present application. The method of the embodiment of the present application is described below with reference to fig. 2 from an algorithm level and an interaction level, respectively.
After the face-scanning detection operation is triggered, at the algorithm layer, the camera is started and the face detection algorithm begins; at the interaction layer, a floating layer is displayed on the user interface and indicates that face detection is in progress, for example with the text prompt "face detection in progress".
When the algorithm layer detects that the user needs to cooperate and adjust, the interaction layer displays the camera picture in a floating layer of the user interface, with a gradual transition, to help the user perform face detection through the camera picture.
When the algorithm-layer detection succeeds, face recognition comparison can be performed; correspondingly, with face detection at the interaction layer successful, the camera picture can be hidden and prompt information such as "processing" is displayed in the floating layer of the user interface.
After the algorithm layer returns the face recognition result, the recognition result can be output at the interaction layer, and the floating layer is closed at the same time.
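The interaction between the two layers described above can be sketched as a small state machine: the algorithm layer emits events, and the interaction layer switches prompt text and camera-picture visibility accordingly. State names, event names, and prompt strings are illustrative.

```python
# Sketch of the Fig. 2 flow as a state machine; all names are hypothetical.

FLOW = {
    "detecting":  {"prompt": "face detection in progress", "show_camera": False},
    "adjusting":  {"prompt": "please adjust your face",    "show_camera": True},
    "processing": {"prompt": "processing",                 "show_camera": False},
    "done":       {"prompt": "",                           "show_camera": False},
}

def next_state(state: str, event: str) -> str:
    """Advance the interaction layer on events from the algorithm layer."""
    transitions = {
        ("detecting", "needs_adjustment"): "adjusting",   # show camera picture
        ("detecting", "detect_ok"):        "processing",  # straight to comparison
        ("adjusting", "detect_ok"):        "processing",  # hide camera picture
        ("processing", "recognized"):      "done",        # close floating layer
    }
    return transitions.get((state, event), state)

state = "detecting"
state = next_state(state, "needs_adjustment")
assert FLOW[state]["show_camera"]          # camera picture shown while adjusting
state = next_state(state, "detect_ok")
assert not FLOW[state]["show_camera"]      # hidden again during recognition
state = next_state(state, "recognized")
assert state == "done"
```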
It should be understood that the specific implementation of the scenario shown in fig. 2 in the present application may refer to the specific implementation scheme in the embodiment shown in fig. 1, and details of the embodiment of the present application are not repeated herein.
In order to facilitate understanding of the technical solution of the embodiment of the present application, fig. 3 to fig. 5 show a scene diagram of a specific implementation of face detection.
Fig. 3 shows the first stage of face detection, in which no face is shown: a breathing animation is displayed while a text prompt asks the user to adjust the pose. As shown in fig. 3, when face detection starts, the floating layer of the user interface displays "face detection in progress", and then prompts the user to adjust the pose with texts such as "please move a little farther away", "please face the phone", and the like.
Fig. 4 shows the second stage of face detection, in which the face is shown: the camera picture is displayed in the floating layer of the user interface, and a breathing animation guides the user into alignment. The gradual effect of switching the displayed face detection state can be the breathing animation of fig. 4: the state switching is shown by two semicircles in the floating layer and a pattern in the middle of the semicircles (replaced by a horizontal line here to avoid sensitive information). In a specific implementation, the first stage lights half of each of the two semicircles of the floating layer (in the first drawing of fig. 4, the two semicircles are gray to indicate lighting), the second stage lights the two semicircles fully, and the third stage lights the pattern in the middle of the two semicircles. Of course, other gradual-change schemes for the face detection state are possible, and the embodiment of the present application does not limit this.
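The three-stage lighting progression described for fig. 4 can be modeled as a strictly additive sequence of lit elements. The element names below are made up purely for illustration.

```python
# Sketch of the three-stage breathing-animation progression; element names
# are hypothetical placeholders for the semicircles and middle pattern.

STAGES = [
    {"left_half", "right_half"},                                   # stage 1
    {"left_half", "right_half", "left_full", "right_full"},        # stage 2
    {"left_half", "right_half", "left_full", "right_full",
     "middle_pattern"},                                            # stage 3
]

def lit_elements(stage: int) -> set:
    """Elements lit at a given stage (1-based); each stage adds to the last."""
    return STAGES[stage - 1]

assert "middle_pattern" not in lit_elements(2)   # pattern lights up only last
assert lit_elements(1) <= lit_elements(2) <= lit_elements(3)  # strictly additive
```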
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 6, at the hardware level, the electronic device includes a processor and optionally further includes an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a Random-Access Memory (RAM), and may further include a non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming the face detection apparatus at the logical level. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
start a camera to capture an image for face detection, where the image is preset to be invisible in a user interface;
display a face detection state in a first floating layer of the user interface; and
when the image does not meet the face detection requirement, display the image in a second floating layer of the user interface,
where the first floating layer and the second floating layer are the same floating layer or different floating layers.
The method executed by the face detection apparatus according to the embodiment shown in fig. 1 of the present application may be applied to or implemented by a processor. The processor may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logical block diagrams disclosed in the embodiments of the present application may thereby be implemented or executed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on. The steps of the method disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may also execute the method shown in fig. 1 and implement the functions of the face detection apparatus in the embodiments shown in fig. 1 and fig. 2, which are not described herein again in this embodiment of the present application.
Of course, besides the software implementation, the electronic device of the present application does not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution subject of the processing flow is not limited to logical units and may also be hardware or a logic device.
Embodiments of the present application also provide a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which when executed by a portable electronic device including a plurality of application programs, enable the portable electronic device to perform the method of the embodiment shown in fig. 1, and are specifically configured to:
start a camera to capture an image for face detection, where the image is preset to be invisible in a user interface;
display a face detection state in a first floating layer of the user interface; and
when the image does not meet the face detection requirement, display the image in a second floating layer of the user interface,
where the first floating layer and the second floating layer are the same floating layer or different floating layers.
Fig. 7 is a schematic structural diagram of a face detection apparatus according to an embodiment of the present application. Referring to fig. 7, in a software implementation, a face detection apparatus 700 may include:
an acquisition module 710, which starts a camera to capture an image for face detection, where the image is preset to be invisible in a user interface;
a face detection module 720, which performs face detection based on the captured image; and
a floating layer display module 730, which displays a face detection state in a first floating layer of the user interface, and, when the image does not meet the face detection requirement, displays the image in a second floating layer of the user interface, where the first floating layer and the second floating layer are the same floating layer or different floating layers.
In the embodiment of the present application, after the face detection operation is triggered, the face detection apparatus 700 prompts the user with the face detection state in a floating layer of the current page. During face detection, the camera is started to capture an image that is preset to an invisible state, and the face detection state is displayed in a floating layer of the user interface, which reduces the content that the page has to render and increases the startup speed of the face detection function. When no image meeting the face detection requirement is obtained after the face detection function is started, the image is displayed in a floating layer of the user interface so that the user can adjust the face position and continue face detection. This avoids the drawback of text-only prompts, with which the user cannot accurately adjust the position of the face relative to the camera or does not know which cooperative action the detection requires, and thus provides the user with a friendly face detection prompt.
The face detection apparatus 700 may also execute the method in fig. 1, and implement the functions of the face detection apparatus in the embodiments shown in fig. 1 and fig. 2, which are not described herein again in this embodiment of the present application.
The foregoing description is only preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change RAM (PRAM), Static Random-Access Memory (SRAM), Dynamic Random-Access Memory (DRAM), other types of Random-Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technologies, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the system embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.