
CN107169429A - Liveness recognition method and device - Google Patents

Liveness recognition method and device

Info

Publication number
CN107169429A
CN107169429A (application CN201710294231.1A)
Authority
CN
China
Prior art keywords
eyes
pupil
width
feature information
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710294231.1A
Other languages
Chinese (zh)
Inventor
范晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201710294231.1A priority Critical patent/CN107169429A/en
Publication of CN107169429A publication Critical patent/CN107169429A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present disclosure relates to a liveness recognition method and device. The method includes: collecting N first eye images of a target to be detected under first illumination, and N second eye images under second illumination, where the brightness of the first illumination differs from that of the second illumination; obtaining first feature information of the pupil according to the N first eye images; obtaining second feature information of the pupil according to the N second eye images; and performing liveness recognition on the target to be detected according to the first feature information and the second feature information. Liveness can therefore be recognized from eye images collected under different illumination, where only the illumination needs to change, without requiring the target to perform cooperative actions. This improves the accuracy and efficiency of liveness recognition and simplifies operation for the target to be detected.

Description

Liveness recognition method and device
Technical field
The present disclosure relates to the technical field of biometric recognition, and in particular to a liveness recognition method and device.
Background technology
With the development of computer vision, face recognition has been widely used in services such as online payment and online finance. Liveness verification is an important step in face recognition and can improve the security of the system. Existing face liveness verification usually determines whether the subject is a live body according to facial actions of the user, such as blinking or mouth-shape changes; all of these schemes require good cooperation from the user before a live body can be recognized.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a liveness recognition method and device.
According to a first aspect of the embodiments of the present disclosure, a liveness recognition method is provided, including:
collecting N first eye images of a target to be detected under first illumination, and N second eye images under second illumination, where N is an integer greater than or equal to 1, and the brightness of the first illumination differs from the brightness of the second illumination;
obtaining first feature information of the pupil according to the N first eye images, the first feature information being feature information of the pupil under the first illumination;
obtaining second feature information of the pupil according to the N second eye images, the second feature information being feature information of the pupil under the second illumination;
performing liveness recognition on the target to be detected according to the first feature information and the second feature information.
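The four steps above can be sketched as a minimal pipeline. This is an illustrative sketch, not the patent's implementation: `capture`, `pupil_features`, and `changed` are hypothetical callables standing in for image collection, pupil feature extraction, and the liveness decision.

```python
from typing import Callable, List, Sequence


def liveness_recognition(
    capture: Callable[[str, int], List[object]],          # hypothetical camera callback
    pupil_features: Callable[[Sequence[object]], List[float]],
    changed: Callable[[List[float], List[float]], bool],  # liveness decision rule
    n: int = 3,
) -> bool:
    """Collect N eye images under each of two illumination levels, extract
    pupil feature information from each set, and decide liveness by whether
    the pupil features changed between the two illumination conditions."""
    first_images = capture("first_illumination", n)    # N first eye images
    second_images = capture("second_illumination", n)  # N second eye images
    first_info = pupil_features(first_images)          # first feature information
    second_info = pupil_features(second_images)        # second feature information
    return changed(first_info, second_info)            # live if the pupil reacted
```

In use, `capture` would drive the terminal's camera while the screen or flash changes brightness; here it is injected so the pipeline itself stays testable.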
Optionally, obtaining the feature information of the pupil according to N eye images includes:
obtaining the width of the eye and the width of the pupil in each of the N eye images;
obtaining the feature information of the pupil according to the widths of the eye and of the pupil in the N eye images;
where, when the eye images are the first eye images, the feature information is the first feature information;
and when the eye images are the second eye images, the feature information is the second feature information.
Optionally, obtaining the feature information of the pupil according to the widths of the eye and of the pupil in the N eye images includes:
obtaining, in each of the N eye images, the ratio of the width of the pupil to the width of the eye;
obtaining the feature information of the pupil according to the ratios of the width of the pupil to the width of the eye in the N eye images.
Optionally, obtaining the width of the eye and the width of the pupil in each of the N eye images includes: respectively obtaining, in each eye image, the width of the left eye, the width of the pupil in the left eye, the width of the right eye, and the width of the pupil in the right eye;
obtaining, in each of the N eye images, the ratio of the width of the pupil to the width of the eye includes: obtaining a first ratio of the width of the pupil in the left eye to the width of the left eye in each eye image, and obtaining a second ratio of the width of the pupil in the right eye to the width of the right eye in each eye image;
and obtaining, according to the first ratio and the second ratio, the ratio of the width of the pupil to the width of the eye in each eye image.
Optionally, obtaining the width of the eye and the width of the pupil in each of the N eye images includes:
obtaining feature points of the eye and feature points of the pupil from each eye image;
determining the width of the eye according to the feature points of the eye;
determining the width of the pupil according to the feature points of the pupil.
Optionally, performing liveness recognition on the target to be detected according to the first feature information and the second feature information includes:
determining, according to the first feature information and the second feature information, whether the size of the pupil has changed;
when the size of the pupil has changed, recognizing the target to be detected as a live body;
when the size of the pupil has not changed, recognizing the target to be detected as a non-live body.
According to a second aspect of the embodiments of the present disclosure, a liveness recognition device is provided, including:
a collection module, configured to collect N first eye images of a target to be detected under first illumination, and N second eye images under second illumination, where N is an integer greater than or equal to 1, and the brightness of the first illumination differs from the brightness of the second illumination;
an acquisition module, configured to obtain first feature information of the pupil according to the N first eye images, the first feature information being feature information of the pupil under the first illumination, and to obtain second feature information of the pupil according to the N second eye images, the second feature information being feature information of the pupil under the second illumination;
a recognition module, configured to perform liveness recognition on the target to be detected according to the first feature information and the second feature information.
Optionally, the acquisition module includes a first acquisition submodule and a second acquisition submodule;
the first acquisition submodule is configured to obtain the width of the eye and the width of the pupil in each of the N eye images;
the second acquisition submodule is configured to obtain the feature information of the pupil according to the widths of the eye and of the pupil in the N eye images;
where, when the eye images are the first eye images, the feature information is the first feature information;
and when the eye images are the second eye images, the feature information is the second feature information.
Optionally, the second acquisition submodule is configured to obtain, in each of the N eye images, the ratio of the width of the pupil to the width of the eye, and to obtain the feature information of the pupil according to the ratios of the width of the pupil to the width of the eye in the N eye images.
Optionally, the first acquisition submodule is configured to obtain, in each eye image, the width of the left eye, the width of the pupil in the left eye, the width of the right eye, and the width of the pupil in the right eye;
the second acquisition submodule is configured to obtain a first ratio of the width of the pupil in the left eye to the width of the left eye in each eye image, to obtain a second ratio of the width of the pupil in the right eye to the width of the right eye in each eye image, and to obtain, according to the first ratio and the second ratio, the ratio of the width of the pupil to the width of the eye in each eye image.
Optionally, the first acquisition submodule is configured to obtain feature points of the eye and feature points of the pupil from each eye image, to determine the width of the eye according to the feature points of the eye, and to determine the width of the pupil according to the feature points of the pupil.
Optionally, the recognition module includes a determination submodule and a recognition submodule;
the determination submodule is configured to determine, according to the first feature information and the second feature information, whether the size of the pupil has changed;
the recognition submodule is configured to recognize the target to be detected as a live body when the size of the pupil has changed, and as a non-live body when the size of the pupil has not changed.
According to a third aspect of the embodiments of the present disclosure, a liveness recognition device is provided, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
collect N first eye images of a target to be detected under first illumination, and N second eye images under second illumination, where N is an integer greater than or equal to 1, and the brightness of the first illumination differs from the brightness of the second illumination;
obtain first feature information of the pupil according to the N first eye images, the first feature information being feature information of the pupil under the first illumination;
obtain second feature information of the pupil according to the N second eye images, the second feature information being feature information of the pupil under the second illumination;
perform liveness recognition on the target to be detected according to the first feature information and the second feature information.
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects: eye images of the target to be detected are collected under different illumination; the feature information of the pupil under each illumination is then obtained from those images; and liveness recognition is performed on the target accordingly. Liveness can therefore be recognized from eye images collected under different illumination, where only the illumination needs to change, without requiring the target to perform cooperative actions. This improves the accuracy and efficiency of liveness recognition and simplifies operation for the target to be detected.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a liveness recognition method according to an exemplary embodiment.
Fig. 2 is a schematic diagram of eye images collected under different illumination according to an exemplary embodiment.
Fig. 3 is a flowchart of a liveness recognition method according to another exemplary embodiment.
Fig. 4 is a schematic diagram of the width of the eye and the width of the pupil according to an exemplary embodiment.
Fig. 5 is a schematic diagram of the feature points of the eye and the feature points of the pupil according to an exemplary embodiment.
Fig. 6 is a block diagram of a liveness recognition device according to an exemplary embodiment.
Fig. 7 is a block diagram of a liveness recognition device according to another exemplary embodiment.
Fig. 8 is a block diagram of a liveness recognition device according to another exemplary embodiment.
Fig. 9 is a block diagram of a liveness recognition device 800 according to an exemplary embodiment.
The above drawings show specific embodiments of the present disclosure, which will be described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosure in any way, but rather to illustrate its concepts to those skilled in the art by reference to specific embodiments.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of a liveness recognition method according to an exemplary embodiment. As shown in Fig. 1, the liveness recognition method is used in a terminal and includes the following steps.
In step S11, N first eye images of a target to be detected are collected under first illumination.
In step S12, N second eye images of the target to be detected are collected under second illumination.
In this embodiment, liveness detection can be performed on a target to be detected, to recognize whether the target is a live body or a non-live body. The target may be a person or an animal; this embodiment places no limitation on this. Under illumination of two different brightness levels, eye images of the target to be detected are collected separately. An eye image is an image that contains the eyes; this embodiment places no limitation on the extent of the image. When the target is irradiated by the first illumination, N eye images are collected; these are called first eye images, meaning eye images collected under the first illumination. When the target is irradiated by the second illumination, N eye images are collected; these are called second eye images, meaning eye images collected under the second illumination. In this embodiment, the brightness of the first illumination differs from that of the second illumination. Fig. 2 shows a schematic diagram of eye images collected under different illumination.
It should be noted that the first illumination and the second illumination may be light emitted by the screen of the terminal, or light emitted by the flash of the terminal, or light emitted by a light source independent of the terminal; this embodiment places no limitation on this.
Optionally, this embodiment may collect the above N first eye images at a preset time interval, that is, the collection times of two adjacent first eye images are spaced by the preset interval. Correspondingly, this embodiment also collects the above N second eye images at the preset time interval.
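The preset-interval collection can be sketched as follows. This is a sketch under assumptions: `grab_frame` is a hypothetical frame-grabbing callable (the patent does not specify a camera API), and the sleep function is injectable so the timing logic can be exercised without a real camera.

```python
import time
from typing import Callable, List


def capture_at_interval(grab_frame: Callable[[], object],
                        n: int,
                        interval_s: float,
                        sleep: Callable[[float], None] = time.sleep) -> List[object]:
    """Collect n eye images such that the collection times of two adjacent
    images are spaced by the preset time interval (interval_s seconds)."""
    frames = []
    for i in range(n):
        if i > 0:
            sleep(interval_s)  # wait the preset interval between adjacent captures
        frames.append(grab_frame())
    return frames
```

The same routine would be called once under the first illumination and once under the second, giving the two sets of N eye images.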
In step S13, first feature information of the pupil is obtained according to the N first eye images, the first feature information being feature information of the pupil under the first illumination.
In this embodiment, after the N first eye images are collected, the feature information of the pupil can be extracted from them. This feature information, called the first feature information, is the feature information of the pupil of the target to be detected under the first illumination.
In step S14, second feature information of the pupil is obtained according to the N second eye images, the second feature information being feature information of the pupil under the second illumination.
In this embodiment, after the N second eye images are collected, the feature information of the pupil can be extracted from them. This feature information, called the second feature information, is the feature information of the pupil of the target to be detected under the second illumination.
It should be noted that this embodiment places no limitation on the execution order of step S13 relative to steps S12 and S14.
In step S15, liveness recognition is performed on the target to be detected according to the first feature information and the second feature information.
In this embodiment, after the first feature information of the pupil under the first illumination and the second feature information of the pupil under the second illumination are obtained, liveness recognition is performed on that basis: the pupil of a live target changes under the stimulus of different illumination, so its feature information under different illumination also differs.
In summary, in the liveness recognition method provided by this embodiment, eye images of the target to be detected are collected under different illumination; the feature information of the pupil under each illumination is then obtained from those images; and liveness recognition is performed accordingly. Liveness can therefore be recognized from eye images collected under different illumination, where only the illumination needs to change, without requiring the target to perform cooperative actions. This improves the accuracy and efficiency of liveness recognition and simplifies operation for the target to be detected.
One possible implementation of the above S15 is: determining, according to the first feature information and the second feature information, whether the size of the pupil has changed; when the size of the pupil has changed, recognizing the target to be detected as a live body; when it has not changed, recognizing the target as a non-live body. In this embodiment, under the stimulus of different illumination, the size of the pupil of a live target changes, which is a physiological phenomenon, while the size of the pupil of a non-live target does not. This embodiment therefore judges, from the pupil feature information obtained under different illumination, whether the pupil size has changed: if it is determined that it has changed, the target is recognized as a live body; if it is determined that it has not, the target is recognized as a non-live body.
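This decision rule can be sketched directly on the pupil-to-eye width ratios. A minimal sketch under assumptions: the 0.05 change threshold is illustrative and not from the patent, which does not specify how "changed" is quantified.

```python
from typing import Sequence


def pupil_changed(first_ratios: Sequence[float],
                  second_ratios: Sequence[float],
                  threshold: float = 0.05) -> bool:
    """Decide whether the pupil size changed between the two illumination
    conditions by comparing mean pupil-to-eye width ratios.
    The threshold is an illustrative assumption."""
    mean_first = sum(first_ratios) / len(first_ratios)
    mean_second = sum(second_ratios) / len(second_ratios)
    return abs(mean_first - mean_second) > threshold


def recognize_liveness(first_ratios: Sequence[float],
                       second_ratios: Sequence[float]) -> str:
    # live body: the pupil reacts to the illumination change; non-live: it does not
    return "live" if pupil_changed(first_ratios, second_ratios) else "non-live"
```

A printed photo or a screen replay would yield near-identical ratios under both illuminations, so the rule classifies it as non-live.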
Another possible implementation of S15 is: inputting the first feature information and the second feature information into a trained classifier, which performs liveness recognition from them; this embodiment then obtains the recognition result of the trained classifier. In the training stage, eye images of a certain number of live bodies before and after an illumination change are collected, the pupil feature information in each eye image is extracted, and it is input to the classifier to be trained (such as AdaBoost). In addition, eye images of a certain number of non-live bodies before and after an illumination change are collected, their pupil feature information is extracted and likewise input to the classifier to be trained, and the learning of the classifier is carried out. After learning on the pupil feature information of live bodies and of non-live bodies is completed, the classifier to be trained becomes a trained classifier.
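The train/apply split can be mimicked with a minimal learned-threshold stump. This is a deliberately simplified stand-in for the AdaBoost classifier named in the text (AdaBoost itself combines many such stumps); the single feature here, the change in pupil ratio between the two illuminations, is an illustrative assumption.

```python
from typing import List, Sequence


class RatioChangeStump:
    """Minimal stand-in for the trained classifier: learns one threshold on
    the pupil-ratio change separating live samples (pupil reacts to the
    illumination change) from non-live samples (it does not)."""

    def fit(self, changes: Sequence[float], labels: Sequence[int]) -> "RatioChangeStump":
        # midpoint between the largest non-live change and the smallest
        # live change; assumes the two classes are separable
        live = [c for c, y in zip(changes, labels) if y == 1]
        fake = [c for c, y in zip(changes, labels) if y == 0]
        self.threshold = (min(live) + max(fake)) / 2
        return self

    def predict(self, changes: Sequence[float]) -> List[int]:
        return [1 if c > self.threshold else 0 for c in changes]
```

A production system would replace this with a boosted ensemble trained on many annotated live and non-live samples, as the text describes.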
Fig. 3 is a flowchart of a liveness recognition method according to another exemplary embodiment. As shown in Fig. 3, the liveness recognition method is used in a terminal and includes the following steps.
In step S21, N first eye images of a target to be detected are collected under first illumination.
In step S22, N second eye images of the target to be detected are collected under second illumination.
In this embodiment, the specific implementation of S21 and S22 may refer to the related description of the embodiment shown in Fig. 1, which is not repeated here.
In step S23, the width of the eye and the width of the pupil in each of the N first eye images are obtained.
In step S24, the first feature information of the pupil is obtained according to the widths of the eye and of the pupil in the N first eye images.
In this embodiment, taking N equal to 2 as an example: the widths of the eye and of the pupil are obtained in the first of the first eye images, and the widths of the eye and of the pupil are obtained in the second of the first eye images. The width of the eye and the width of the pupil are shown in Fig. 4. The first feature information of the pupil is then obtained from the widths of the eye and of the pupil in both first eye images.
Optionally, one possible implementation of the above S23 may include steps S231 to S233:
In step S231, feature points of the eye and feature points of the pupil are obtained from each first eye image.
In step S232, the width of the eye is determined according to the feature points of the eye.
In step S233, the width of the pupil is determined according to the feature points of the pupil.
In this embodiment, a localization algorithm (such as the ESR algorithm) can be used to locate the feature points of the eye and the feature points of the pupil in each first eye image. The localization algorithm has a training stage and an application stage. In the training stage, the feature points of the eye and of the pupil are annotated by hand according to the needs of liveness discrimination; the positions of the feature points in this embodiment may be as shown in Fig. 5. The training samples must include annotated eye images of a sufficient number of different people under different illumination, and the feature-point localization algorithm is then trained with the annotated samples. In the application stage, after an eye image is collected, the trained localization algorithm can be used to locate the feature points of the eye and of the pupil. The position of the eye in the first eye image can then be determined from the feature points of the eye, giving the width of the eye in the first eye image, and the position of the pupil can be determined from the feature points of the pupil, giving the width of the pupil in the first eye image.
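Once feature points are located, the widths reduce to the horizontal extent of each point set. A minimal sketch assuming the landmarks are (x, y) pixel coordinates and that the leftmost and rightmost points of each set bound the eye and pupil respectively; the point layout is an assumption, as the actual layout is only shown in Fig. 5.

```python
from typing import Iterable, Tuple

Point = Tuple[float, float]


def span_width(points: Iterable[Point]) -> float:
    """Horizontal extent of a set of located feature points (x, y)."""
    xs = [x for x, _ in points]
    return max(xs) - min(xs)


def eye_and_pupil_width(eye_points: Iterable[Point],
                        pupil_points: Iterable[Point]) -> Tuple[float, float]:
    """Determine the eye width from the eye feature points and the pupil
    width from the pupil feature points (e.g. as located by an ESR-style
    landmark algorithm)."""
    return span_width(eye_points), span_width(pupil_points)
```

With landmarks from both eye corners and the pupil boundary, this yields the two widths illustrated in Fig. 4.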
Optionally, one possible implementation of step S24 is: obtaining the ratio of the width of the pupil to the width of the eye in each of the N first eye images, and obtaining the first feature information of the pupil from the ratios in all first eye images. For example, suppose the ratios of the width of the pupil to the width of the eye in all first eye images are R11, R12, ..., R1N. One way is to combine the ratios in all first eye images into the first feature information of the pupil, for example (R11, R12, ..., R1N). Another way is to take a weighted average of the ratios as the first feature information of the pupil; taking each weight coefficient as 1, for example, the first feature information of the pupil is (R11 + R12 + ... + R1N)/N.
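Both ways of forming the feature information can be sketched in a few lines; equal weights reproduce the (R11 + ... + R1N)/N case from the text. This is a sketch of the two options the paragraph describes, not an official implementation.

```python
from typing import Optional, Sequence, Tuple, Union


def pupil_feature_info(pupil_widths: Sequence[float],
                       eye_widths: Sequence[float],
                       weights: Optional[Sequence[float]] = None
                       ) -> Union[Tuple[float, ...], float]:
    """Build the pupil feature information from per-image pupil/eye width
    ratios: either the tuple of ratios (R11, ..., R1N), or, when weights
    are given, their weighted average."""
    ratios = [p / e for p, e in zip(pupil_widths, eye_widths)]
    if weights is None:
        return tuple(ratios)                 # (R11, R12, ..., R1N)
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, ratios)) / total
```

The same routine applied to the second eye images yields the second feature information.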
Since a collected first eye image contains a left eye and a right eye, the width of the eye above may refer only to the width of the left eye, with the width of the pupil referring only to the width of the pupil of the left eye; or only to the width of the right eye and of its pupil; or the width of the eye may refer to the average of the widths of the left eye and the right eye, with the width of the pupil referring to the average of the widths of the pupils of the left eye and the right eye.
Alternatively, the width of the eye includes the width of the left eye and the width of the right eye, and the width of the pupil includes the width of the pupil of the left eye and the width of the pupil of the right eye. Correspondingly, obtaining the widths of the eye and of the pupil in each first eye image includes: obtaining, in each first eye image, the width of the left eye, the width of the pupil in the left eye, the width of the right eye, and the width of the pupil in the right eye. Obtaining the ratio of the width of the pupil to the width of the eye in each of the N eye images then includes: obtaining a first ratio of the width of the pupil in the left eye to the width of the left eye in each eye image, obtaining a second ratio of the width of the pupil in the right eye to the width of the right eye in each eye image, and obtaining, according to the first ratio and the second ratio, the ratio of the width of the pupil to the width of the eye in each first eye image.
Optionally, one possible implementation of obtaining the ratio from the first ratio and the second ratio is: taking a weighted average of the first ratio and the second ratio, with the resulting value serving as the ratio of the width of the pupil to the width of the eye in the first eye image.
Another possible implementation is: combining the first ratio (r_left) and the second ratio (r_right) to obtain the ratio of the width of the pupil to the width of the eye, for example R11 = (r1_left, r1_right).
In step S25, the width of the eye and the width of the pupil in each of the N second eye images are obtained.
In step S26, the second feature information of the pupil is obtained according to the widths of the eye and of the pupil in the N second eye images.
In this embodiment, the specific implementation of steps S25 and S26 may refer to that of steps S23 and S24, which is not repeated here.
In step S27, liveness recognition is performed on the target to be detected according to the first feature information and the second feature information.
In this embodiment, the specific implementation of S27 may refer to the related description of the embodiment shown in Fig. 1, which is not repeated here.
In summary, in the liveness recognition method provided by this embodiment, eye images of the target to be detected are collected under different illumination; the feature information of the pupil under each illumination is then obtained from those images; and liveness recognition is performed accordingly. Liveness can therefore be recognized from eye images collected under different illumination, where only the illumination needs to change, without requiring the target to perform cooperative actions. This improves the accuracy and efficiency of liveness recognition and simplifies operation for the target to be detected.
The following are apparatus embodiments of the present disclosure, which may be used to perform the method embodiments of the present disclosure. For details not disclosed in the apparatus embodiments, refer to the method embodiments of the present disclosure.
Fig. 6 is a block diagram of a liveness recognition apparatus according to an exemplary embodiment. The liveness recognition apparatus may be implemented, by software, hardware, or a combination of both, as part or all of an electronic device. Referring to Fig. 6, the apparatus includes a collection module 100, an obtaining module 200, and a recognition module 300.
The collection module 100 is configured to collect N first eye images of a target to be detected under first illumination, and N second eye images under second illumination, where N is an integer greater than or equal to 1, and the brightness of the first illumination is different from the brightness of the second illumination.
The obtaining module 200 is configured to obtain first feature information of the pupil according to the N first eye images collected by the collection module 100, the first feature information being feature information of the pupil under the first illumination; and to obtain second feature information of the pupil according to the N second eye images collected by the collection module 100, the second feature information being feature information of the pupil under the second illumination.
The recognition module 300 is configured to perform liveness recognition on the target to be detected according to the first feature information and the second feature information obtained by the obtaining module 200.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiments of the related method, and is not elaborated here.
Fig. 7 is a block diagram of a liveness recognition apparatus according to another exemplary embodiment. The liveness recognition apparatus may be implemented, by software, hardware, or a combination of both, as part or all of an electronic device. Referring to Fig. 7, the apparatus of this embodiment is based on the embodiment shown in Fig. 6, and the obtaining module 200 includes a first obtaining submodule 210 and a second obtaining submodule 220.
The first obtaining submodule 210 is configured to obtain the width of the eyes and the width of the pupil in each of the N eye images.
The second obtaining submodule 220 is configured to obtain the feature information of the pupil according to the widths of the eyes and the widths of the pupil in the N eye images.
When the eye images are the first eye images, the feature information is the first feature information;
when the eye images are the second eye images, the feature information is the second feature information.
Optionally, the second obtaining submodule 220 is configured to obtain the ratio of the width of the pupil to the width of the eyes in each of the N eye images, and to obtain the feature information of the pupil according to the ratios of the width of the pupil to the width of the eyes in the N eye images.
Optionally, the first obtaining submodule 210 is configured to obtain, in each eye image, the width of the left eye, the width of the pupil in the left eye, the width of the right eye, and the width of the pupil in the right eye.
The second obtaining submodule 220 is configured to obtain, for each eye image, a first ratio of the width of the pupil in the left eye to the width of the left eye, and a second ratio of the width of the pupil in the right eye to the width of the right eye, and to obtain, according to the first ratio and the second ratio, the ratio of the width of the pupil to the width of the eyes in each eye image.
Optionally, the first obtaining submodule 210 is configured to obtain feature points of the eyes and feature points of the pupil from each eye image, to determine the width of the eyes according to the feature points of the eyes, and to determine the width of the pupil according to the feature points of the pupil.
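A minimal sketch of how widths might be derived from feature points is given below. The disclosure does not name a specific landmark detector, so the eye-corner and pupil-edge coordinates are assumed to be already extracted as (x, y) pairs; the function names and sample coordinates are illustrative assumptions.

```python
# Sketch of deriving eye and pupil widths from feature points. The patent does
# not specify a landmark detector; the corner/edge points are assumed given.
import math

def point_distance(p: tuple, q: tuple) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_and_pupil_width(inner_corner, outer_corner, pupil_left, pupil_right):
    """Widths as Euclidean distances between opposing feature points."""
    eye_w = point_distance(inner_corner, outer_corner)
    pupil_w = point_distance(pupil_left, pupil_right)
    return eye_w, pupil_w

# Hypothetical pixel coordinates on one horizontal line through the eye.
eye_w, pupil_w = eye_and_pupil_width((100, 50), (160, 50), (122, 50), (138, 50))
print(eye_w, pupil_w)  # 60.0 16.0
```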
With regard to the apparatus in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiments of the related method, and is not elaborated here.
Fig. 8 is a block diagram of a liveness recognition apparatus according to yet another exemplary embodiment. The liveness recognition apparatus may be implemented, by software, hardware, or a combination of both, as part or all of an electronic device. Referring to Fig. 8, the apparatus of this embodiment is based on the embodiment shown in Fig. 6 or Fig. 7, and the recognition module 300 in this embodiment includes a determination submodule 310 and a recognition submodule 320.
The determination submodule 310 is configured to determine, according to the first feature information and the second feature information, whether the size of the pupil has changed.
The recognition submodule 320 is configured to recognize the target to be detected as a living body when the determination submodule 310 determines that the size of the pupil has changed, and to recognize the target to be detected as a non-living body when the determination submodule 310 determines that the size of the pupil has not changed.
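The behavior of the determination and recognition submodules can be sketched as follows, assuming the feature information takes the vector form (left ratio, right ratio) introduced earlier. The 0.05 per-eye change tolerance is an assumed value for illustration, not one given in this disclosure.

```python
# Hypothetical sketch of the determination/identification submodules: compare
# first and second feature information and report whether pupil size changed.
# The 0.05 tolerance is an assumed value, not specified by the patent.

def pupil_size_changed(first_info: tuple, second_info: tuple,
                       tol: float = 0.05) -> bool:
    """first_info/second_info are (left_ratio, right_ratio) vectors."""
    return any(abs(a - b) > tol for a, b in zip(first_info, second_info))

def recognize(first_info: tuple, second_info: tuple) -> str:
    return "live" if pupil_size_changed(first_info, second_info) else "non-live"

print(recognize((0.28, 0.29), (0.41, 0.43)))  # live
print(recognize((0.30, 0.31), (0.31, 0.30)))  # non-live
```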
With regard to the apparatus in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiments of the related method, and is not elaborated here.
Fig. 9 is a block diagram of a liveness recognition apparatus 800 according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 9, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 typically controls the overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the apparatus 800. Examples of such data include instructions for any application or method operated on the apparatus 800, contact data, phone book data, messages, pictures, videos, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 806 provides power to the various components of the apparatus 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen providing an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the apparatus 800 is in an operating mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the apparatus 800 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors to provide status assessments of various aspects of the apparatus 800. For example, the sensor component 814 may detect the open/closed state of the apparatus 800 and the relative positioning of components, such as the display and keypad of the apparatus 800. The sensor component 814 may also detect a change in position of the apparatus 800 or of a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, where the instructions are executable by the processor 820 of the apparatus 800 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium: when the instructions in the storage medium are executed by the processor of the apparatus 800, the apparatus 800 is enabled to perform the above liveness recognition method.
Those skilled in the art will readily conceive of other embodiments of the disclosure after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or conventional techniques in the art not disclosed by the disclosure. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (13)

1. A liveness recognition method, characterized by comprising:
collecting N first eye images of a target to be detected under first illumination, and N second eye images under second illumination, wherein N is an integer greater than or equal to 1, and the brightness of the first illumination is different from the brightness of the second illumination;
obtaining first feature information of a pupil according to the N first eye images, the first feature information being feature information of the pupil under the first illumination;
obtaining second feature information of the pupil according to the N second eye images, the second feature information being feature information of the pupil under the second illumination; and
performing liveness recognition on the target to be detected according to the first feature information and the second feature information.
2. The method according to claim 1, characterized in that obtaining the feature information of the pupil according to the N eye images comprises:
obtaining the width of the eyes and the width of the pupil in each of the N eye images; and
obtaining the feature information of the pupil according to the widths of the eyes and the widths of the pupil in the N eye images;
wherein, when the eye images are the first eye images, the feature information is the first feature information; and
when the eye images are the second eye images, the feature information is the second feature information.
3. The method according to claim 2, characterized in that obtaining the feature information of the pupil according to the widths of the eyes and the widths of the pupil in the N eye images comprises:
obtaining the ratio of the width of the pupil to the width of the eyes in each of the N eye images; and
obtaining the feature information of the pupil according to the ratios of the width of the pupil to the width of the eyes in the N eye images.
4. The method according to claim 3, characterized in that obtaining the width of the eyes and the width of the pupil in each of the N eye images comprises: respectively obtaining, in each eye image, the width of the left eye, the width of the pupil in the left eye, the width of the right eye, and the width of the pupil in the right eye; and
obtaining the ratio of the width of the pupil to the width of the eyes in each of the N eye images comprises: obtaining a first ratio of the width of the pupil in the left eye to the width of the left eye in each eye image, and obtaining a second ratio of the width of the pupil in the right eye to the width of the right eye in each eye image; and
obtaining, according to the first ratio and the second ratio, the ratio of the width of the pupil to the width of the eyes in each eye image.
5. The method according to claim 2, characterized in that obtaining the width of the eyes and the width of the pupil in each of the N eye images comprises:
obtaining feature points of the eyes and feature points of the pupil from each eye image;
determining the width of the eyes according to the feature points of the eyes; and
determining the width of the pupil according to the feature points of the pupil.
6. The method according to any one of claims 1-5, characterized in that performing liveness recognition on the target to be detected according to the first feature information and the second feature information comprises:
determining, according to the first feature information and the second feature information, whether the size of the pupil has changed;
when the size of the pupil has changed, recognizing the target to be detected as a living body; and
when the size of the pupil has not changed, recognizing the target to be detected as a non-living body.
7. A liveness recognition apparatus, characterized by comprising:
a collection module, configured to collect N first eye images of a target to be detected under first illumination, and N second eye images under second illumination, wherein N is an integer greater than or equal to 1, and the brightness of the first illumination is different from the brightness of the second illumination;
an obtaining module, configured to obtain first feature information of a pupil according to the N first eye images, the first feature information being feature information of the pupil under the first illumination, and to obtain second feature information of the pupil according to the N second eye images, the second feature information being feature information of the pupil under the second illumination; and
a recognition module, configured to perform liveness recognition on the target to be detected according to the first feature information and the second feature information.
8. The apparatus according to claim 7, characterized in that the obtaining module comprises a first obtaining submodule and a second obtaining submodule;
the first obtaining submodule is configured to obtain the width of the eyes and the width of the pupil in each of the N eye images; and
the second obtaining submodule is configured to obtain the feature information of the pupil according to the widths of the eyes and the widths of the pupil in the N eye images;
wherein, when the eye images are the first eye images, the feature information is the first feature information; and
when the eye images are the second eye images, the feature information is the second feature information.
9. The apparatus according to claim 8, characterized in that the second obtaining submodule is configured to obtain the ratio of the width of the pupil to the width of the eyes in each of the N eye images, and to obtain the feature information of the pupil according to the ratios of the width of the pupil to the width of the eyes in the N eye images.
10. The apparatus according to claim 9, characterized in that the first obtaining submodule is configured to respectively obtain, in each eye image, the width of the left eye, the width of the pupil in the left eye, the width of the right eye, and the width of the pupil in the right eye; and
the second obtaining submodule is configured to obtain a first ratio of the width of the pupil in the left eye to the width of the left eye in each eye image, to obtain a second ratio of the width of the pupil in the right eye to the width of the right eye in each eye image, and to obtain, according to the first ratio and the second ratio, the ratio of the width of the pupil to the width of the eyes in each eye image.
11. The apparatus according to claim 8, characterized in that the first obtaining submodule is configured to obtain feature points of the eyes and feature points of the pupil from each eye image, to determine the width of the eyes according to the feature points of the eyes, and to determine the width of the pupil according to the feature points of the pupil.
12. The apparatus according to any one of claims 7-11, characterized in that the recognition module comprises a determination submodule and a recognition submodule;
the determination submodule is configured to determine, according to the first feature information and the second feature information, whether the size of the pupil has changed; and
the recognition submodule is configured to recognize the target to be detected as a living body when the size of the pupil has changed, and to recognize the target to be detected as a non-living body when the size of the pupil has not changed.
13. A liveness recognition apparatus, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
collect N first eye images of a target to be detected under first illumination, and N second eye images under second illumination, wherein N is an integer greater than or equal to 1, and the brightness of the first illumination is different from the brightness of the second illumination;
obtain first feature information of a pupil according to the N first eye images, the first feature information being feature information of the pupil under the first illumination;
obtain second feature information of the pupil according to the N second eye images, the second feature information being feature information of the pupil under the second illumination; and
perform liveness recognition on the target to be detected according to the first feature information and the second feature information.
CN201710294231.1A 2017-04-28 2017-04-28 Vivo identification method and device Pending CN107169429A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710294231.1A CN107169429A (en) 2017-04-28 2017-04-28 Vivo identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710294231.1A CN107169429A (en) 2017-04-28 2017-04-28 Vivo identification method and device

Publications (1)

Publication Number Publication Date
CN107169429A true CN107169429A (en) 2017-09-15

Family

ID=59812386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710294231.1A Pending CN107169429A (en) 2017-04-28 2017-04-28 Vivo identification method and device

Country Status (1)

Country Link
CN (1) CN107169429A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955717A (en) * 2014-05-13 2014-07-30 第三眼(天津)生物识别科技有限公司 Iris activity detecting method
US9202121B2 (en) * 2011-07-11 2015-12-01 Accenture Global Services Limited Liveness detection
CN105138996A (en) * 2015-09-01 2015-12-09 北京上古视觉科技有限公司 Iris identification system with living body detecting function
CN105320939A (en) * 2015-09-28 2016-02-10 北京天诚盛业科技有限公司 Iris biopsy method and apparatus


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875497A (en) * 2017-10-27 2018-11-23 北京旷视科技有限公司 The method, apparatus and computer storage medium of In vivo detection
CN108399365A (en) * 2018-01-19 2018-08-14 东北电力大学 The method and its equipment of living body faces are detected using pupil diameter
CN108399365B (en) * 2018-01-19 2022-03-25 东北电力大学 Method and device for detecting living human face by using pupil diameter
WO2019161730A1 (en) * 2018-02-26 2019-08-29 阿里巴巴集团控股有限公司 Living body detection method, apparatus and device
US10977508B2 (en) 2018-02-26 2021-04-13 Advanced New Technologies Co., Ltd. Living body detection method, apparatus and device
US11295149B2 (en) 2018-02-26 2022-04-05 Advanced New Technologies Co., Ltd. Living body detection method, apparatus and device
CN110135370A (en) * 2019-05-20 2019-08-16 北京百度网讯科技有限公司 Method and apparatus, electronic device, and computer-readable medium for face liveness detection
US11188771B2 (en) 2019-05-20 2021-11-30 Beijing Baidu Netcom Science And Technology Co., Ltd. Living-body detection method and apparatus for face, and computer readable medium
CN113239887A (en) * 2021-06-04 2021-08-10 Oppo广东移动通信有限公司 Living body detection method and apparatus, computer-readable storage medium, and electronic device

Similar Documents

Publication Publication Date Title
CN107038428A (en) Vivo identification method and device
CN106651955A (en) Method and device for positioning object in picture
CN107220621A (en) Terminal carries out the method and device of recognition of face
CN107169429A (en) Vivo identification method and device
CN106548468B (en) The method of discrimination and device of image definition
CN106339680A (en) Human face key point positioning method and device
CN107832741A (en) The method, apparatus and computer-readable recording medium of facial modeling
CN106355573A (en) Target object positioning method and device in pictures
CN106951884A (en) Gather method, device and the electronic equipment of fingerprint
CN107527059A (en) Character recognition method, device and terminal
CN106778531A (en) Face detection method and device
CN107527053A (en) Object detection method and device
CN107582028A (en) Sleep monitor method and device
CN107679483A (en) Number plate recognition methods and device
CN106682736A (en) Image identification method and apparatus
CN107944447A (en) Image classification method and device
CN110717399A (en) Face recognition method and electronic terminal equipment
CN106228556A (en) Image quality analysis method and device
CN107563994A (en) The conspicuousness detection method and device of image
CN106355549A (en) Photographing method and equipment
CN107742120A (en) The recognition methods of bank card number and device
CN107463903A (en) Face key independent positioning method and device
CN109934275A (en) Image processing method and device, electronic equipment and storage medium
CN106339695A (en) Face similarity detection method, device and terminal
CN107766820A (en) Image classification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170915
