Detailed Description
To make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Details are set forth below.
The terms "first," "second," "third," "fourth," and the like in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects, not for describing a particular order. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include other steps or elements not listed, or inherent to such process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Hereinafter, some terms in the present application are explained to facilitate understanding by those skilled in the art.
(1) A terminal device, also called user equipment (UE), is a device providing voice and/or data connectivity to a user, for example, a handheld device with a wireless connection function or a vehicle-mounted device. Common terminal devices include, for example: a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a mobile Internet device (MID), and a wearable device such as a smart watch, a smart bracelet, or a pedometer.
(2) Parallel execution means that at least two actions are performed simultaneously on different processes. For example, for action A and action B, action A is performed on process 1, action B is performed on process 2, and action B is performed during the execution of action A.
Referring to fig. 1, fig. 1 is a schematic flow chart of a locking method provided in an embodiment of the present application, and is applied to a terminal device including a facial image capturing device and a touch display screen, where the facial image capturing device may be a front camera of the terminal device or a common camera module, which is not limited herein, and the method includes:
Step 101: The terminal device acquires a facial image of the user through the facial image acquisition device and matches the facial image of the user with a face template.
In an embodiment, step 101 is performed when the terminal device detects that the pending event requires a face unlock.
The event to be processed includes: a payment event, a screen unlocking event, an encrypted video chat event, an application login event, and the like. When the event to be processed is a screen unlocking event and the terminal device is in a black-screen state before step 101, the terminal device needs to light up its touch display screen before step 101.
Further, when the facial image of the user is acquired, the brightness of the touch display screen may be the same for different events to be processed. Alternatively, the brightness of the touch display screen during acquisition is determined according to the event to be processed: each event to be processed corresponds to a security level, and the higher the security level, the higher the screen brightness during acquisition; the lower the security level, the lower the screen brightness. Alternatively, the brightness during acquisition is determined according to the brightness of the ambient light: the higher the ambient light brightness, the lower the screen brightness during acquisition; the lower the ambient light brightness, the higher the screen brightness. Alternatively, different time periods correspond to different brightness values, and the brightness during acquisition is determined according to the current system time.
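For illustration only, the brightness-selection alternatives above can be sketched as follows. The specific security levels, brightness ranges, and time periods are assumptions for the sketch and are not recited in the embodiment:

```python
# Hypothetical sketch of the three screen-brightness alternatives described above.
# Security levels, lux ranges, and time-of-day cutoffs are illustrative assumptions.

def brightness_by_security_level(security_level: int, max_level: int = 3) -> float:
    """Higher security level -> higher screen brightness (0.0..1.0)."""
    return min(1.0, security_level / max_level)

def brightness_by_ambient_light(ambient_lux: float, max_lux: float = 1000.0) -> float:
    """Higher ambient light -> lower screen brightness."""
    return max(0.1, 1.0 - min(ambient_lux, max_lux) / max_lux)

def brightness_by_time_of_day(hour: int) -> float:
    """Each time period maps to a preset brightness (assumed mapping)."""
    if 7 <= hour < 19:      # daytime
        return 0.9
    if 19 <= hour < 23:     # evening
        return 0.6
    return 0.3              # night
```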
In an embodiment, a specific implementation manner of the terminal device acquiring a face image of a user through the face image acquisition device and matching the face image of the user with a face template includes:
the terminal device continuously acquires N facial images of the user through the facial image acquisition device, where N is an integer greater than 1;
the terminal device matches the N facial images of the user with the face template in parallel;
when at least one of the N facial images of the user matches the face template, the terminal device determines that the facial image of the user matches the face template;
when none of the N facial images of the user matches the face template, the terminal device determines that the facial image of the user does not match the face template.
For example, assuming that N is 3, the terminal device continuously acquires 3 facial images of the user, for example: facial image 1, facial image 2, and facial image 3. The terminal device matches facial image 1 with the face template in a first process, matches facial image 2 with the face template in a second process, and matches facial image 3 with the face template in a third process. If none of the 3 facial images matches the face template, the facial image of the user does not match the face template; otherwise, the facial image of the user matches the face template.
Further, the value of N may be the same for different events to be processed, for example, 3, 4, 5, or another value for all of them. Alternatively, the value of N is determined according to the event to be processed: each event to be processed corresponds to a security level, and the higher the security level, the larger the value of N; the lower the security level, the smaller the value of N.
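As an illustration only, the parallel matching of N images can be sketched as follows. A thread pool stands in for the "processes" of the embodiment, and `match_score`, the 0-to-1 score range, and the N-per-security-level mapping are assumptions of the sketch, not part of the embodiment:

```python
# Hypothetical sketch: match N continuously acquired face images against the
# face template in parallel; a single hit means the user's face matches.
from concurrent.futures import ThreadPoolExecutor

def match_score(image, template) -> float:
    """Placeholder matcher: similarity of two equal-length feature vectors."""
    diff = sum(abs(a - b) for a, b in zip(image, template)) / len(template)
    return max(0.0, 1.0 - diff)

def n_for_event(security_level: int) -> int:
    """Higher security level -> larger N (assumed mapping)."""
    return {1: 3, 2: 4, 3: 5}.get(security_level, 3)

def face_matches(images, template, threshold: float = 0.8) -> bool:
    """Match if at least one of the N images reaches the threshold."""
    with ThreadPoolExecutor(max_workers=len(images)) as pool:
        scores = list(pool.map(lambda img: match_score(img, template), images))
    return any(s >= threshold for s in scores)
```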
In an embodiment, a specific implementation manner of the terminal device acquiring a face image of a user through the face image acquisition device and matching the face image of the user with a face template includes:
Step a1: the terminal device acquires a facial image of the user through the facial image acquisition device;
Step a2: the terminal device matches the acquired facial image of the user with the face template;
when the acquired facial image of the user does not match the face template and the matching value of the acquired facial image and the face template is smaller than a third threshold, the terminal device stops acquiring facial images of the user and displays prompt information on the touch display screen;
when the acquired facial image of the user does not match the face template and the matching value of the acquired facial image and the face template is greater than or equal to the third threshold, the terminal device performs step a1 again.
Step a3: when the number of times that the acquired facial image of the user does not match the face template is greater than or equal to a fourth threshold, the terminal device stops acquiring facial images of the user and displays the prompt information on the touch display screen.
It should be noted that when the matching value of the facial image of the user and the face template is greater than or equal to a fifth threshold, the terminal device determines that the facial image matches the face template; when the matching value is smaller than the fifth threshold, the terminal device determines that the facial image does not match the face template. The third threshold is smaller than the fifth threshold. The fifth threshold is determined according to the event to be processed: each event to be processed corresponds to a security level, and the higher the security level, the larger the fifth threshold; the lower the security level, the smaller the fifth threshold. The third threshold is likewise determined according to the event to be processed: the higher the security level, the larger the third threshold; the lower the security level, the smaller the third threshold. The fourth threshold may be defined by the terminal device or by the user, which is not limited herein.
For example, assuming that the third threshold is 50% and the fourth threshold is 5, the terminal device first acquires a facial image of the user. If the matching value of the facial image and the face template is 30% (less than 50%), the terminal device directly stops acquisition and displays the prompt information on the touch display screen. If the matching value is 55% (greater than 50%), the terminal device acquires the next facial image, and the process repeats. If the number of times the acquired facial image does not match the face template is less than 5, the terminal device continues to acquire facial images of the user; if that number is greater than or equal to 5, the terminal device stops acquiring facial images of the user.
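For illustration only, steps a1 to a3 can be sketched as the retry loop below. The callables `capture_image`, `match_score`, and `show_prompt`, and the default threshold values, are assumptions of the sketch:

```python
# Hypothetical sketch of steps a1-a3: keep acquiring while a mismatch still
# scores at least the third threshold, stop early on a very poor match
# (below the third threshold), and stop after the fourth-threshold number
# of mismatches; a score at or above the fifth threshold is a match.

def face_unlock_attempt(capture_image, match_score, show_prompt,
                        third=0.5, fourth=5, fifth=0.8):
    mismatches = 0
    while mismatches < fourth:                 # step a3 bound
        image = capture_image()                # step a1
        score = match_score(image)             # step a2
        if score >= fifth:
            return True                        # facial image matches the template
        if score < third:
            break                              # poor match: stop immediately
        mismatches += 1                        # retry (score in [third, fifth))
    show_prompt("Continue face unlocking?")    # prompt on the touch display screen
    return False
```

With the third threshold at 50% and the fourth at 5, as in the example above, a 30% score stops acquisition at once, while a 55% score triggers another capture.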
Step 102: When the facial image of the user does not match the face template, the terminal device stops acquiring facial images of the user and displays prompt information on the touch display screen, where the prompt information is used to ask the user whether to continue face unlocking. When the terminal device detects a confirmation instruction for the prompt information, the terminal device continues face unlocking, that is, performs step 101 again. When the terminal device does not detect a confirmation instruction for the prompt information, it performs no operation.
In an embodiment, when the face image of the user does not match the face template, the terminal device reports a matching failure result to the application.
In an embodiment, a specific implementation of the terminal device displaying the prompt information on the touch display screen includes: the terminal device pops up a prompt box on the touch display screen, and the prompt information is displayed in the prompt box.
Step 103: When the number of times that the facial image of the user does not match the face template is greater than or equal to a first threshold, the terminal device locks the face unlocking function for a set duration.
The first threshold may be the same for different events to be processed, for example, 3, 4, 5, or another value for all of them. Alternatively, the first threshold is determined according to the event to be processed: each event to be processed corresponds to a security level, and the higher the security level, the smaller the first threshold; the lower the security level, the larger the first threshold.
Further, when the facial image of the user matches the face template, the terminal device performs the event to be processed.
For example, assume that the event to be processed is a screen unlocking event, the facial image acquisition device is a front-facing camera, the first threshold is 5, and N is 3. The terminal device continuously acquires 3 facial images of the user through the front-facing camera. When none of the 3 facial images matches the face template, the terminal device stops acquiring facial images of the user and pops up a prompt box on the touch display screen, in which the prompt information is displayed; the prompt information may be, for example, "Continue face unlocking?", as shown in fig. 2. If the user clicks the confirmation option for the prompt information, the terminal device performs step 101 again; otherwise, no operation is performed. When the number of mismatches reaches 5 or more, the terminal device locks the face unlocking function for a period of time.
Therefore, in the present application, when one round of matching is unsuccessful, acquisition of facial images is stopped, which reduces the power consumption of the terminal device. When acquisition stops, the user is asked whether to continue face unlocking; if the user chooses to continue, a facial image of the user is acquired again for matching, which increases the interaction between the user and the device. At the same time, the prompt itself signals to the user that face unlocking has failed, so the user can adjust how the facial image is captured, which further improves the success rate of face unlocking.
In an embodiment, after the prompt information is displayed on the touch display screen, the method further includes:
and when the difference value between a first matching value and a second matching value is smaller than a second threshold value, the terminal equipment locks the face unlocking function for the set time length, wherein the first matching value is the matching value between the face image of the user and the face template before the prompt information is displayed, and the second matching value is the matching value between the face image of the user and the face template after the prompt information is displayed.
The second threshold is smaller than the fifth threshold, and the second threshold may be, for example, 5%, 10%, 15%, 16%, 18%, or other values.
Specifically, assume that the first matching value is 40% and the second matching value is 45%. This means that the quality of the facial images captured by the facial image acquisition device has still not improved after the user was asked whether to continue face unlocking, which may indicate that another person is attempting face unlocking on the user's terminal device. In this case, to keep the terminal device secure, the face unlocking function is locked directly for a period of time.
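For illustration only, the second-threshold check can be sketched as follows; the function name and the default 10% threshold are assumptions of the sketch:

```python
# Hypothetical sketch: if the match quality has not improved by at least the
# second threshold after the prompt, lock the face-unlock function for the
# set duration.

def should_lock_after_prompt(first_match: float, second_match: float,
                             second_threshold: float = 0.10) -> bool:
    """first_match: score before the prompt; second_match: score after it."""
    return (second_match - first_match) < second_threshold
```

With the values of the example above (40% before the prompt, 45% after), the 5% improvement is below the 10% threshold, so the function would be locked.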
In one embodiment, the set duration is determined according to a matching value of the face image of the user and the face template.
Specifically, the set duration is determined according to the average matching value of the facial images of the user and the face template: the larger the average matching value, the shorter the set duration; the smaller the average, the longer the set duration. Alternatively, the set duration is determined according to the minimum matching value: the smaller the minimum matching value, the longer the set duration; the larger it is, the shorter the set duration. Alternatively, the set duration is determined according to the maximum matching value: the smaller the maximum matching value, the longer the set duration; the larger it is, the shorter the set duration. Alternatively, the set duration is determined according to the last matching value: the smaller the last matching value, the longer the set duration; the larger it is, the shorter the set duration.
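As an illustration only, the score-based alternatives can be sketched in one function. The base and extra durations (in seconds) and the linear mapping are assumptions of the sketch, not values recited in the embodiment:

```python
# Hypothetical sketch of the alternatives above: the lower the chosen matching
# statistic (average, minimum, maximum, or last score), the longer the lock.

def lock_duration_from_scores(scores, mode: str = "min",
                              base: float = 30.0, extra: float = 300.0) -> float:
    if mode == "avg":
        stat = sum(scores) / len(scores)
    elif mode == "min":
        stat = min(scores)
    elif mode == "max":
        stat = max(scores)
    else:                       # "last"
        stat = scores[-1]
    # Smaller statistic (worse match) -> longer lock duration, in seconds.
    return base + extra * (1.0 - stat)
```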
In one embodiment, the set duration is determined according to the last time the facial unlock function was locked and the current system time.
Specifically, the set duration is determined according to the difference between the time at which the face unlocking function was last locked and the current system time: the smaller the difference, the longer the set duration; the larger the difference, the shorter the set duration.
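For illustration only, the time-based alternative can be sketched as follows; the durations, the decay window, and the linear mapping are assumptions of the sketch:

```python
# Hypothetical sketch: the shorter the gap between the last lock and now,
# the longer the new lock (repeated failures in quick succession are treated
# as more suspicious). All times and durations are in seconds.

def lock_duration_from_history(last_lock_time: float, now: float,
                               base: float = 30.0, window: float = 3600.0,
                               extra: float = 600.0) -> float:
    gap = max(0.0, now - last_lock_time)
    factor = max(0.0, 1.0 - gap / window)   # 1.0 just after a lock, 0.0 after `window`
    return base + extra * factor
```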
It should be noted that after the terminal device locks the face unlocking function, one option is that no function requiring face unlocking can be used within the set duration. For example, if the current event to be processed is a payment event, then after the lock, no event requiring face unlocking can be used, including the screen unlocking event and the application login event. Alternatively, after the terminal device locks the face unlocking function, only the face unlocking function corresponding to the current event to be processed cannot be used; for example, when the current event to be processed is a payment event, only face unlocking for payment events is unavailable after the lock.
The embodiment of the present application further provides another more detailed method flow, as shown in fig. 3, including:
step 301: the terminal device detects a pending event.
Step 302: The terminal device continuously acquires N facial images of the user through the facial image acquisition device, where N is an integer greater than 1.
Step 303: The terminal device matches the N facial images of the user with the face template in parallel.
Step 304: The terminal device determines whether at least one of the N facial images of the user matches the face template.
If yes, go to step 312.
If not, go to step 305.
Step 305: The terminal device determines that the facial image of the user does not match the face template.
Step 306: The terminal device stops acquiring facial images of the user.
Step 307: The terminal device displays prompt information on the touch display screen, where the prompt information is used to ask the user whether to continue face unlocking.
Step 308: The terminal device detects whether there is a confirmation instruction for the prompt information.
If yes, go to step 302.
If not, no operation is performed.
Step 309: The terminal device determines whether the number of times that the facial image of the user does not match the face template is greater than or equal to the first threshold.
If yes, go to step 311.
If not, go to step 302.
Step 310: While determining whether the number of times that the facial image of the user does not match the face template is greater than or equal to the first threshold, the terminal device also determines whether the difference between the first matching value and the second matching value is smaller than the second threshold.
If yes, go to step 311.
If not, go to step 302.
Step 311: The terminal device locks the face unlocking function for a set duration.
The set duration is determined according to a matching value of the facial image of the user and the face template. Alternatively, the set duration is determined according to the time at which the face unlocking function was last locked and the current system time.
Step 312: The terminal device determines that the facial image of the user matches the face template.
Step 313: The terminal device performs the event to be processed.
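For illustration only, the flow of fig. 3 can be sketched end to end as follows. The callables are assumed stand-ins, and the ordering of the lock check relative to the prompt (steps 306-311) is one possible reading of the flow, not the only one:

```python
# Hypothetical end-to-end sketch of the flow of fig. 3 (steps 301-313).
# capture_n, match_all, confirm_prompt, run_event, and lock are assumptions.

def handle_pending_event(capture_n, match_all, confirm_prompt, run_event, lock,
                         n: int = 3, first_threshold: int = 5):
    failures = 0
    while True:
        images = capture_n(n)              # step 302: N facial images
        if match_all(images):              # steps 303-304: parallel matching
            run_event()                    # steps 312-313: perform the event
            return True
        failures += 1                      # step 305: mismatch
        if failures >= first_threshold:    # step 309
            lock()                         # step 311: lock face unlocking
            return False
        if not confirm_prompt():           # steps 306-308: ask the user
            return False                   # no confirmation: do nothing
```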
It should be noted that for a specific implementation of the steps of the method shown in fig. 3, reference may be made to the specific implementations described in the method above, and details are not repeated here.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 4, fig. 4 is a terminal device 400 according to an embodiment of the present application, including: at least one processor, at least one memory, and at least one communication interface; and one or more programs;
the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps of:
acquiring a face image of a user through the face image acquisition device, and matching the face image of the user with a face template;
when the facial image of the user does not match the face template, stopping acquiring facial images of the user and displaying prompt information on the touch display screen, where the prompt information is used to ask the user whether to continue face unlocking, and when a confirmation instruction for the prompt information is detected, continuing face unlocking;
when the number of times that the facial image of the user does not match the face template is greater than or equal to a first threshold, locking the face unlocking function for a set duration.
In an embodiment, after the prompt message is displayed on the touch display screen, the program includes instructions further for:
and when the difference value between a first matching value and a second matching value is smaller than a second threshold value, locking the face unlocking function for the set time length, wherein the first matching value is the matching value between the face image of the user and the face template before the prompt information is displayed, and the second matching value is the matching value between the face image of the user and the face template after the prompt information is displayed.
In an embodiment, in terms of acquiring the facial image of the user through the facial image acquisition device and matching the facial image of the user with the face template, the program includes instructions further for:
continuously acquiring N facial images of the user through the facial image acquisition device, where N is an integer greater than 1;
matching the N facial images of the user with the face template in parallel;
when at least one of the N facial images of the user matches the face template, determining that the facial image of the user matches the face template;
when none of the N facial images of the user matches the face template, determining that the facial image of the user does not match the face template.
In an embodiment, the program comprises instructions for further performing the steps of:
the set time length is determined according to a matching value of the face image of the user and the face template.
In an embodiment, the program comprises instructions for further performing the steps of:
the set time length is determined according to the last time of locking the face unlocking function and the current system time.
It should be noted that, the specific implementation manner of the content described in this embodiment may refer to the above method, and will not be described here.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the terminal device includes hardware structures and/or software modules for performing the respective functions in order to implement the functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the terminal device may be divided into the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of integrated units, fig. 5 shows a block diagram of a possible functional unit composition of the terminal device involved in the above embodiments. The terminal device 500 includes a processing unit 501, a communication unit 502, and a storage unit 503, where the processing unit 501 includes an acquisition unit 5011, a matching unit 5012, a prompting unit 5013, and a locking unit 5014. The storage unit 503 is configured to store program codes and data of the terminal device. The communication unit 502 is configured to support communication between the terminal device and other devices. The above units (the acquisition unit 5011, the matching unit 5012, the prompting unit 5013, and the locking unit 5014) are configured to perform the relevant steps of the method described above.
The acquisition unit 5011 is used for acquiring a facial image of a user through the facial image acquisition device;
a matching unit 5012 for matching the face image of the user with the face template;
a prompting unit 5013, configured to stop acquiring facial images of the user when the facial image of the user does not match the face template and display prompt information on the touch display screen, where the prompt information is used to ask the user whether to continue face unlocking, and to continue face unlocking when a confirmation instruction for the prompt information is detected;
a locking unit 5014 for locking the face unlocking function for a set period of time when the number of times the face image of the user does not match the face template is greater than or equal to a first threshold.
In an embodiment, after the prompting unit 5013 displays the prompt information on the touch display screen:
the locking unit 5014 locks the face unlock function for the set period of time when a difference between a first matching value that is a matching value of the face image of the user and the face template before the prompt information is displayed and a second matching value that is a matching value of the face image of the user and the face template after the prompt information is displayed is smaller than a second threshold.
In one embodiment, the acquiring unit 5011 acquires a face image of a user through the face image acquiring device, and the matching unit 5012 matches the face image of the user with a face template, including:
continuously acquiring N facial images of the user through the facial image acquisition device, where N is an integer greater than 1;
matching the N facial images of the user with the face template in parallel;
when at least one of the N facial images of the user matches the face template, determining that the facial image of the user matches the face template;
when none of the N facial images of the user matches the face template, determining that the facial image of the user does not match the face template.
In an embodiment, the set duration is determined according to a matching value of the facial image of the user and the face template.
In another embodiment, the set duration is determined according to the time at which the face unlocking function was last locked and the current system time.
The processing unit 501 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The storage unit 503 may be a memory, and the communication unit 502 may be a transceiver, a transceiver circuit, a radio frequency chip, a communication interface, or the like.
As shown in fig. 6, for convenience of description, only the portions related to the embodiments of the present application are shown; for specific technical details that are not disclosed, refer to the method portion of the embodiments of the present application.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a terminal device 600 according to an embodiment of the present application. The terminal device 600 includes a housing 10, a main board 20, a touch display screen 30, a battery 40, and an auxiliary board 50. The main board 20 is provided with an infrared light source 21, an iris camera 22, a front camera 23, a processor 24, a memory 25, a SIM card slot 26, and the like. The auxiliary board 50 is provided with a vibrator 51, an integrated sound cavity 52, a VOOC flash charging interface 53, and a fingerprint module 54. The front camera 23 forms the facial image acquisition device of the terminal device 600.
The facial image acquisition device 23 is configured to acquire a facial image of the user;
a processor 24, configured to match the facial image of the user with a face template, and when the facial image of the user does not match the face template, stop acquiring facial images of the user and display prompt information on the touch display screen, where the prompt information is used to ask the user whether to continue face unlocking;
the facial image acquisition device 23 is further configured to continue facial unlocking when a confirmation instruction for the prompt information is detected;
the processor 24 is further configured to lock the face unlocking function for a set time period when the number of times that the face image of the user does not match the face template is greater than or equal to a first threshold.
In one embodiment, after displaying the prompt message on the touch display screen, the processor 24 is further configured to:
and when the difference value between a first matching value and a second matching value is smaller than a second threshold value, locking the face unlocking function for the set time length, wherein the first matching value is the matching value between the face image of the user and the face template before the prompt information is displayed, and the second matching value is the matching value between the face image of the user and the face template after the prompt information is displayed.
In one embodiment, in terms of acquiring the facial image of the user through the facial image acquisition device and matching the facial image of the user with the face template, the processor 24 is specifically configured to:
continuously acquire N facial images of the user through the facial image acquisition device, where N is an integer greater than 1;
match the N facial images of the user with the face template in parallel;
when at least one of the N facial images of the user matches the face template, determine that the facial image of the user matches the face template;
when none of the N facial images of the user matches the face template, determine that the facial image of the user does not match the face template.
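The parallel matching in this embodiment may be sketched as follows, assuming a hypothetical per-image match_score function and an illustrative threshold:

```python
from concurrent.futures import ThreadPoolExecutor

MATCH_THRESHOLD = 0.8  # assumed score above which an image matches the template


def matches_template(images, match_score):
    """Match the N acquired images against the face template in parallel;
    the overall match succeeds if at least one image matches."""
    with ThreadPoolExecutor(max_workers=len(images)) as pool:
        scores = list(pool.map(match_score, images))
    return any(score >= MATCH_THRESHOLD for score in scores)
```

Matching the N images concurrently rather than sequentially shortens the worst-case unlock latency when any single image suffices for success.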
In one embodiment, the processor 24 is further configured to:
determine the set time length according to a matching value between the facial image of the user and the face template.
In one embodiment, the processor 24 is further configured to:
determine the set time length according to the time at which the face unlocking function was last locked and the current system time.
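The two ways of determining the set time length described above may be illustrated as follows; the specific durations, cut-off scores, and interval are hypothetical examples only:

```python
import time


def lock_duration_from_score(matching_value):
    """Set time length from the matching value: the further the facial image
    is from the template, the longer the assumed lock-out."""
    if matching_value >= 0.6:
        return 30
    if matching_value >= 0.3:
        return 60
    return 300


def lock_duration_from_history(last_lock_time, now=None):
    """Set time length from the last lock time and the current system time:
    a short gap since the previous lock-out yields a longer lock-out."""
    now = time.time() if now is None else now
    elapsed = now - last_lock_time
    return 300 if elapsed < 3600 else 60
```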
Embodiments of the present application further provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any method described in the above method embodiments. The computer includes a terminal device.
Embodiments of the present application further provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps of any method described in the above method embodiments. The computer program product may be a software installation package, and the computer includes a terminal device.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of acts or a combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division of the units is only one type of logical function division, and there may be other divisions in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware related to program instructions, where the program may be stored in a computer-readable memory, and the memory may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The embodiments of the present application have been described in detail above, and the principles and implementations of the present application are illustrated herein by specific examples. The above description of the embodiments is only provided to help understand the method and the core concept of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present application.