CN108921080A - Image-recognizing method, device and electronic equipment - Google Patents
Image-recognizing method, device and electronic equipment
- Publication number
- CN108921080A (application CN201810681293.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- distance
- identification
- images
- interpupillary distance
- Prior art date
- 2018-06-27
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/168—Feature extraction; Face representation
- G06V40/172—Classification, e.g. identification
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present invention provide an image recognition method, a device and an electronic device. In one embodiment, the image recognition method includes: inputting an image to be recognized into a recognition model for computation, to obtain a recognized interpupillary distance of a target portrait in the image to be recognized; calculating the difference between the recognized interpupillary distance and a pre-stored reference interpupillary distance of the target portrait; and matching the difference against a set recognition criterion to obtain a recognition result for the image to be recognized, where the recognition criterion includes a correspondence between recognition results and ranges of the difference between the recognized interpupillary distance and the reference interpupillary distance.
Description
Technical field
The present invention relates to the field of image processing, and in particular to an image recognition method, a device and an electronic device.
Background technique
Face recognition is widely used today as an effective means of identity authentication and identification. However, face recognition systems are vulnerable to attacks by illegitimate users. Such attacks mainly fall into three classes: photo attacks, video attacks and 3D-model attacks. An attacker may attempt to deceive the system with a photo, video or 3D model of a legitimate user in order to gain access to the recognition system.
Summary of the invention
In view of this, embodiments of the present invention aim to provide an image recognition method, a device and an electronic device.
In a first aspect, an embodiment of the present invention provides an image recognition method, including:
inputting an image to be recognized into a recognition model for computation, to obtain a recognized interpupillary distance of a target portrait in the image to be recognized;
calculating the difference between the recognized interpupillary distance and a pre-stored reference interpupillary distance of the target portrait; and
matching the difference against a set recognition criterion to obtain a recognition result for the image to be recognized, where the recognition criterion includes a correspondence between recognition results and ranges of the difference between the recognized interpupillary distance and the reference interpupillary distance, and the recognition result indicates whether the object corresponding to the target portrait is a living body or a non-living body.
Further, the step of matching the difference against the set recognition criterion to obtain the target recognition result includes:
when the difference is within a first preset range, obtaining a recognition result that the object corresponding to the target portrait is a living body;
when the difference is within a second preset range, obtaining a recognition result that the object corresponding to the target portrait is a non-living body.
Further, the recognition model is obtained as follows:
an interpupillary-distance calculation formula is fitted from collected fitting data to form the recognition model, the fitting data including a plurality of images containing faces.
Further, when applied to an electronic device that includes an image acquisition device, the step of fitting the interpupillary-distance calculation formula from the collected fitting data to obtain the recognition model includes:
acquiring, by the image acquisition device, a specified number of depth images of a designated person, each depth image containing a facial image of the designated person;
calculating, for each of the specified number of depth images, a first distance between the pupils in the facial image and the image acquisition device;
calculating, for each of the specified number of depth images, a second distance between the pupils of the facial image;
obtaining the actual interpupillary distance of the designated person;
calculating a fitting parameter from the actual interpupillary distance, the specified number of first distances and the specified number of second distances; and
fitting the interpupillary-distance calculation formula as the product of the fitting parameter, the distance between the pupils of the facial image and the image acquisition device, and the distance between the pupils of the facial image.
Further, the step of acquiring, by the image acquisition device, the specified number of depth images of the designated person includes:
acquiring the specified number of depth images of the designated person at different distances from the image acquisition device.
Further, calculating the fitting parameter from the actual interpupillary distance, the specified number of first distances and the specified number of second distances is implemented by the following formula:
where K denotes the fitting parameter; D_REAL denotes the actual interpupillary distance; N denotes the specified number; d_i denotes the i-th first distance among the specified number of depth images; and dp_i denotes the i-th second distance among the specified number of depth images.
Further, the step of calculating, for each of the specified number of depth images, the first distance between the pupils in the facial image and the image acquisition device includes:
detecting facial key feature points in the facial image of each of the specified number of depth images according to a face detection algorithm;
determining the pupil positions among the facial key feature points; and
obtaining the first distance from the pixel value at the pupil position.
Further, the step of calculating, for each of the specified number of depth images, the second distance between the pupils of the facial image includes:
detecting facial key feature points in the facial image of each of the specified number of depth images according to a face detection algorithm;
determining the pupil positions among the facial key feature points; and
calculating the pixel Euclidean distance between the two pupils from the determined pupil positions.
Further, when applied to an electronic device that includes an image acquisition device, before the step of inputting the image to be recognized into the recognition model for computation to obtain the recognized interpupillary distance of the target portrait in the image to be recognized, the method further includes:
acquiring, by the image acquisition device, the image to be recognized, the image to be recognized containing a facial image; or
receiving the image to be recognized sent by another device.
Further, the image acquisition device is a depth camera, and the step of acquiring the image to be recognized by the image acquisition device includes:
acquiring the image to be recognized by the depth camera, the image to be recognized being a depth image.
In a second aspect, an embodiment of the present invention further provides an image recognition device, including:
a first computing module, configured to input an image to be recognized into a recognition model for computation, to obtain a recognized interpupillary distance of a target portrait in the image to be recognized;
a second computing module, configured to calculate the difference between the recognized interpupillary distance and a pre-stored reference interpupillary distance of the target portrait; and
a matching module, configured to match the difference against a set recognition criterion to obtain a target recognition result, the recognition criterion including a plurality of recognition results and a difference range corresponding to each recognition result.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, the memory storing a computer program executable on the processor, where the processor, when executing the computer program, implements the steps of any of the methods of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, where the computer program, when run by a processor, performs the steps of any of the methods of the first aspect.
Compared with the prior art, the image recognition method of the embodiments of the present invention computes the interpupillary distance of the image to be recognized and matches the difference between the recognized interpupillary distance and the reference interpupillary distance against the recognition criterion to obtain a recognition result. A physical characteristic of a human being is thereby converted into an image-based recognition, which reduces the need for contact-based operations on the human body while still detecting human-body characteristics, so that attacks that are highly realistic but whose interpupillary distance differs from that of a real person can be quickly detected, improving the performance and robustness of the image-recognition decision algorithm.
To make the above objects, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and should therefore not be regarded as limiting its scope; for those of ordinary skill in the art, other relevant drawings can be obtained from these drawings without creative effort.
Fig. 1 is a block diagram of an electronic device provided by an embodiment of the present invention.
Fig. 2 is a flowchart of an image recognition method provided by an embodiment of the present invention.
Fig. 3 is a partial flowchart of the fitting of the recognition model used in the image recognition method provided by an embodiment of the present invention.
Fig. 4 is a functional block diagram of an image recognition device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings. In addition, in the description of the present invention, the terms "first", "second" and the like are used only to distinguish one description from another and are not to be understood as indicating or implying relative importance.
With the development of network technology, online and offline electronic access-control modes have come into use in many fields. Access control has evolved from the initial card-based control to fingerprint control and, most recently, to face-based control through facial recognition. Compared with a card, which is easily transferred, lost or substituted, a fingerprint is a safer electronic access-control mode. However, fingerprint control requires the user to place a finger on a recognition area, which is inconvenient; face control was therefore developed, which only requires the user to pass by the recognition area to be recognized. Nevertheless, some people may use a photo, a video or a 3D model in place of a legitimate face to release the access control, so electronic access control may be attacked by lawbreakers and thus carries security risks. Determining whether the object in the acquired image is a living body is therefore one of the keys to ensuring the safety of face-controlled electronic access. At present, when liveness judgment is performed on images collected by structured-light or flood-light equipment, attacks with an interpupillary distance much smaller or much larger than that of a real person may be missed. Based on these deficiencies discovered by the inventors, the following embodiments are provided, which can effectively solve the above technical problems and are described in detail below.
Embodiment one
First, an exemplary electronic device 100 for implementing the image recognition method of the embodiments of the present invention is described with reference to Fig. 1. The exemplary electronic device 100 may be a computer, a mobile terminal such as a smartphone or a tablet computer, or an authentication device such as an ID-and-face verification all-in-one machine.
As shown in Fig. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108 and an image acquisition device 110, which are interconnected through a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are merely exemplary and not restrictive; the electronic device may have other components and structures as required.
The processor 102 may be a central processing unit (CPU) or another form of processing unit with data-processing capability and/or instruction-execution capability, and may control other components of the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to implement the client functions (implemented by the processor) of the embodiments of the present invention described below and/or other desired functions. Various application programs and various data, such as the data used and/or generated by the application programs, may also be stored on the computer-readable storage medium.
The input device 106 may be a device used by the user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen and the like.
The output device 108 may output various information (for example, images or sounds) to the outside (for example, the user), and may include one or more of a display, a loudspeaker and the like.
The image acquisition device 110 may capture images desired by the user (such as photos or videos) and store the captured images in the storage device 104 for use by other components.
Illustratively, the devices in the example electronic system for implementing the identity-recognition-based management method, apparatus and system according to the embodiments of the present invention may be integrated, or may be arranged separately, for example with the processing device 102, the storage device 104, the input device 106 and the output device 108 integrated in one unit and the image acquisition device 110 arranged separately.
For ease of understanding, an application example of the electronic system of this embodiment is further described below. The electronic system may be installed in various places where identity needs to be verified.
Illustratively, the exemplary electronic device and apparatus for implementing the image recognition method according to the embodiments of the present invention may be implemented as an attendance verification terminal, an access-control verification terminal, a real-name-authentication verification terminal, an ID-and-face verification all-in-one machine, a security-check verification terminal, or the like.
Embodiment two
This embodiment provides an image recognition method that can be executed by an electronic device.
According to an embodiment of the present invention, an embodiment of an image recognition method is provided. It should be noted that the steps illustrated in the flowchart of the drawings may be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that given here.
Referring to Fig. 2, which is a flowchart of the image recognition method provided by an embodiment of the present invention, the detailed flow shown in Fig. 2 is described in detail below.
Step S201: input the image to be recognized into the recognition model for computation, to obtain the recognized interpupillary distance of the target portrait in the image to be recognized.
In one embodiment, before step S201, the method further includes: acquiring the image to be recognized by the image acquisition device, the image to be recognized containing a facial image.
Further, the image acquisition device is a depth camera, and the step of acquiring the image to be recognized by the image acquisition device includes: acquiring the image to be recognized by the depth camera, the image to be recognized being a depth image.
In another embodiment, before step S201, the method further includes: receiving the image to be recognized sent by another device. In this embodiment, the other device may be an acquisition device arranged in an acquisition area. Further, the acquisition device is used to acquire a depth image of the target object in the acquisition area.
Step S202: calculate the difference between the recognized interpupillary distance and the pre-stored reference interpupillary distance of the target portrait.
In this embodiment, before image recognition is performed, the true interpupillary distance of each person who may need to be recognized can be measured with a pupillometry tool and stored. The reference interpupillary distance is the true interpupillary distance of the target portrait.
Step S203: match the difference against the set recognition criterion to obtain the recognition result of the image to be recognized.
In this embodiment, the recognition criterion includes a correspondence between recognition results and ranges of the difference between the recognized interpupillary distance and the reference interpupillary distance.
In this embodiment, the recognition result indicates whether the object corresponding to the target portrait is a living body or a non-living body.
In this embodiment, the object corresponding to the target portrait refers to the object photographed when the image to be recognized was collected. For example, it may be a living face, a face photo, a face model, or the like.
In this embodiment, step S203 includes: identifying the range in which the difference falls; when the difference is within the first preset range, obtaining a recognition result that the object corresponding to the target portrait is a living body; when the difference is within the second preset range, obtaining a recognition result that the object corresponding to the target portrait is a non-living body.
Further, in the former case, the object acquired by the image acquisition device is a living person.
Further, the first preset range may be [-a, b], where a, b ∈ [1.8, 2.2]. For example, the first preset range may be a numerical interval such as [-2, 2], [-1.8, 2], [-1.8, 2.2], [-1.9, 2.2], [-2, 2.2] or [-2, 1.8].
By limiting the first preset range to a numerical interval determined from test results, the method can better adapt to liveness detection of the image to be recognized.
In this embodiment, the second preset range may be any numerical interval other than the first preset range.
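Purely as an illustration, the matching of step S203 can be sketched in Python as follows; the threshold values and the unit of the difference (assumed here to be millimetres) are example assumptions and are not fixed by this embodiment.

```python
def match_recognition_result(recognized_ipd, reference_ipd,
                             first_range=(-2.0, 2.0)):
    """Match the interpupillary-distance difference against the recognition criterion.

    recognized_ipd: interpupillary distance computed by the recognition model.
    reference_ipd: pre-stored true interpupillary distance of the target portrait.
    first_range: the first preset range [-a, b]; the values here are an assumed example.
    Returns "living" when the difference falls in the first preset range,
    otherwise "non-living" (the second preset range is its complement).
    """
    difference = recognized_ipd - reference_ipd
    low, high = first_range
    return "living" if low <= difference <= high else "non-living"


# Example: a recognized interpupillary distance close to the stored reference
# is judged as a living body; a large deviation is judged as non-living.
print(match_recognition_result(63.1, 62.5))   # living
print(match_recognition_result(70.4, 62.5))   # non-living
```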
Further, after step S203, the method may also include: outputting prompt information matching the recognition result of the image to be recognized. In one embodiment, when the recognition result that the object corresponding to the target portrait is a living body is obtained, a first character string may be output; when the recognition result that the target portrait is a non-living body is obtained, a second character string may be output. For example, the first character string may be a character or character string such as "OK" or "0", and the second character string may be a character or character string such as "NO" or "1".
With the image recognition method of the embodiments of the present invention, the interpupillary distance of the image to be recognized is computed, and the difference between the recognized interpupillary distance and the reference interpupillary distance is matched against the recognition criterion to obtain the recognition result. A physical characteristic of a human being is thereby converted into an image-based recognition, reducing the need for contact-based operations on the human body while still detecting human-body characteristics. Attacks that are highly realistic but whose interpupillary distance differs from that of a real person can therefore be quickly detected, improving the performance and robustness of the image-recognition decision algorithm.
In this embodiment, the recognition model is obtained by training as follows: an interpupillary-distance calculation formula is fitted from collected fitting data to form the recognition model, the fitting data including a plurality of images containing faces.
As shown in Fig. 3, the step of fitting the interpupillary-distance calculation formula from the collected fitting data to form the recognition model may include the following steps.
Step S301: acquire, by the image acquisition device, a specified number of depth images of a designated person, each depth image containing a facial image of the designated person.
In this embodiment, the specified number of depth images of the designated person are acquired at different distances from the image acquisition device. The specified number can be set according to the specific requirements of use; for example, it may be 50, 100, 150 or another quantity.
In one embodiment, the specified number of depth images may be acquired with the distance between the designated person and the image acquisition device forming an arithmetic progression. Of course, the specified number of depth images may also be acquired with the designated person at irregularly spaced distances from the acquisition device.
In one example, 50 depth images of the designated person may be acquired at distances of 20 cm to 70 cm from the image acquisition device in steps of 1 cm. In another example, 70 depth images of the designated person may be acquired at distances of 30 cm to 100 cm from the image acquisition device in steps of 1 cm.
Step S302: calculate, for each of the specified number of depth images, the first distance between the pupils in the facial image and the image acquisition device.
In one embodiment, step S302 includes: detecting facial key feature points in the facial image of each of the specified number of depth images according to a face detection algorithm; determining the pupil positions among the facial key feature points; and obtaining the first distance from the pixel value at the pupil position. Each pixel value of the depth image represents the distance from the corresponding position of the designated person to the plane of the image acquisition device.
In one example, the face detection algorithm may be the LBF algorithm in OpenCV.
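A minimal sketch of this step is given below. It assumes the OpenCV contrib Facemark LBF implementation with a pre-trained lbfmodel.yaml, a Haar cascade for face detection, the 68-point landmark layout, a depth image whose pixel values already encode distance, and averaging the depth over both pupils; all of these choices are assumptions made for illustration rather than requirements of this embodiment.

```python
import cv2

# Assumed pre-trained models (the file names are examples, not part of this embodiment).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
facemark = cv2.face.createFacemarkLBF()
facemark.loadModel("lbfmodel.yaml")


def pupil_positions(gray_image):
    """Detect a face, fit 68 LBF landmarks and approximate each pupil centre
    as the mean of the six landmarks outlining the corresponding eye
    (indices 36-41 and 42-47 in the assumed 68-point layout)."""
    faces = face_detector.detectMultiScale(gray_image)
    if len(faces) == 0:
        raise RuntimeError("no face detected")
    ok, landmarks = facemark.fit(gray_image, faces)
    if not ok:
        raise RuntimeError("landmark fitting failed")
    points = landmarks[0][0]                 # (68, 2) points of the first face
    pupil_a = points[36:42].mean(axis=0)     # contour of one eye
    pupil_b = points[42:48].mean(axis=0)     # contour of the other eye
    return pupil_a, pupil_b


def first_distance(depth_image, gray_image):
    """First distance: the depth value read at the pupil positions
    (averaged over both pupils, which is an assumption of this sketch)."""
    a, b = pupil_positions(gray_image)
    d_a = float(depth_image[int(a[1]), int(a[0])])   # depth indexed as [row=y, col=x]
    d_b = float(depth_image[int(b[1]), int(b[0])])
    return (d_a + d_b) / 2.0
```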
In other embodiments, step S301 may alternatively be replaced by a step of acquiring ordinary (non-depth) images, with the image acquisition device acquiring images of the designated person at designated positions. In this example, the first distance can be calculated from the position of the designated person. For example, a rack may be installed at the designated position; the designated person places the face against the rack according to a standard, and the image acquisition device then acquires the image of the designated person. Further, by adjusting the position of the image acquisition device, multiple images corresponding to the same first position can be obtained.
Step S303: calculate, for each of the specified number of depth images, the second distance between the pupils of the facial image.
In one embodiment, step S303 includes: detecting facial key feature points in the facial image of each of the specified number of depth images according to a face detection algorithm; determining the pupil positions among the facial key feature points; and calculating the pixel Euclidean distance between the two pupils from the determined pupil positions.
The step of calculating the pixel Euclidean distance between the two pupils from the determined pupil positions can be implemented by the following formula:
dp = sqrt((x1 - x2)^2 + (y1 - y2)^2),
where dp denotes the pixel Euclidean distance between the two pupils, (x1, y1) denotes the coordinates of one pupil and (x2, y2) denotes the coordinates of the other pupil.
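For completeness, the second distance can be sketched in one function, reusing the pupil_positions helper assumed in the previous sketch:

```python
import numpy as np

def second_distance(gray_image):
    """Second distance: pixel Euclidean distance dp between the two pupil centres."""
    a, b = pupil_positions(gray_image)   # helper assumed in the earlier sketch
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
```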
Step S304: obtain the actual interpupillary distance of the designated person.
In this embodiment, the actual interpupillary distance may be received as input, or it may be stored in advance in a designated storage space and retrieved from that storage space when needed.
Step S305: calculate the fitting parameter from the actual interpupillary distance, the specified number of first distances and the specified number of second distances.
In one embodiment, calculating the fitting parameter from the actual interpupillary distance, the specified number of first distances and the specified number of second distances is implemented by the following formula:
where K denotes the fitting parameter; D_REAL denotes the actual interpupillary distance; N denotes the specified number; d_i denotes the i-th first distance among the specified number of depth images; and dp_i denotes the i-th second distance among the specified number of depth images. In the above example, in which 50 depth images of the designated person are acquired at 20 cm to 70 cm from the image acquisition device in steps of 1 cm, N is equal to 50.
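Since only the symbols of the fitting formula are given here, the sketch below shows one plausible way to obtain K that is consistent with the formula Dpupil = K * d * dp given in step S306, namely averaging D_REAL / (d_i * dp_i) over the N calibration samples; this averaging choice is an assumption for illustration and not necessarily the exact formula of this embodiment.

```python
def fit_interpupillary_parameter(d_real, first_distances, second_distances):
    """Fit the parameter K of Dpupil = K * d * dp from N calibration samples.

    d_real: the designated person's actual interpupillary distance (D_REAL).
    first_distances: the N first distances d_i (pupil to camera).
    second_distances: the N second distances dp_i (pixel distance between pupils).
    Averaging D_REAL / (d_i * dp_i) is one plausible fit, assumed here for illustration.
    """
    ratios = [d_real / (d * dp)
              for d, dp in zip(first_distances, second_distances)]
    return sum(ratios) / len(ratios)
```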
Step S306: fit the interpupillary-distance calculation formula as the product of the fitting parameter, the distance between the pupils of the facial image and the image acquisition device, and the pixel distance between the pupils of the facial image.
In one embodiment, the interpupillary-distance calculation formula is:
Dpupil = K * d * dp,
where Dpupil denotes the interpupillary distance to be calculated; K denotes the fitting parameter; d denotes the distance between the pupils of the facial image and the image acquisition device; and dp denotes the pixel distance between the pupils of the facial image.
In one example, when the interpupillary-distance calculation formula is used to calculate the interpupillary distance in a given image, d denotes the distance between the pupils of the person in that image and the image acquisition device that acquired that image.
By first fitting a calculation formula suited to computing the facial interpupillary distance, the trained recognition model can better adapt to the calculation of the interpupillary distance, which improves the accuracy with which the recognition model calculates the interpupillary distance.
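Combining the illustrative helpers above, applying the fitted formula to a new image reduces to the following sketch (the helper names remain assumptions of these examples):

```python
def recognized_interpupillary_distance(depth_image, gray_image, k):
    """Apply Dpupil = K * d * dp to an image to be recognized."""
    d = first_distance(depth_image, gray_image)   # pupil-to-camera distance
    dp = second_distance(gray_image)              # pixel distance between pupils
    return k * d * dp
```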
The use of the image recognition method of this embodiment in a specific application scenario is described in detail below. In one example, the method of this embodiment is executed by a security system arranged at an access-control gate. The security system may include an image acquisition device, a processing device, a storage device and the like.
A. The image acquisition device arranged at the access-control gate monitors and acquires images within its acquisition range.
B. When a facial image of user A appears in an acquired image, the interpupillary distance of the facial image is calculated in step S201 to obtain the recognized interpupillary distance.
C. The difference between the recognized interpupillary distance and the pre-stored true interpupillary distance of user A is then calculated.
D. Whether the object in the image acquired by the image acquisition device is a living body can be judged from the difference.
Further, in one example, when the object in the image acquired by the image acquisition device is a living body, it can be judged that the person currently intending to open the gate by face is user A, and the gate can be opened. In another example, when the object in the image acquired by the image acquisition device is not a living body, it can be judged that the attempt to open the gate by face is being made by another, illegitimate user who may be using a photo, video or 3D model of user A; the gate can remain closed. Further, a prompt such as an alarm signal can be issued.
In another application scenario, the method of this embodiment is executed by a clock-in (attendance) system. The clock-in system may include an image acquisition device, a processing device, a storage device and the like. Further, in one example, when the object in the image acquired by the image acquisition device is a living body, it can be judged that the person currently clocking in is user A, and the clock-in succeeds. In another example, when the object in the image acquired by the image acquisition device is not a living body, it can be judged that the clock-in is being attempted by another user who may be using a photo, video or 3D model of user A in place of user A; a prompt such as "clock-in failed" can then be issued.
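A compact end-to-end sketch covering either application scenario, built only from the illustrative helpers and thresholds assumed above:

```python
def check_liveness(depth_image, gray_image, k, reference_ipd,
                   first_range=(-2.0, 2.0)):
    """Full pipeline: recognized IPD -> difference -> criterion -> gate/clock-in decision."""
    recognized = recognized_interpupillary_distance(depth_image, gray_image, k)
    result = match_recognition_result(recognized, reference_ipd, first_range)
    if result == "living":
        return "open gate / clock-in succeeded"
    return "keep gate closed / clock-in failed, raise alarm"
```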
Further, the image recognition method may also include: re-identifying, by a pre-trained judgment model, the image to be recognized whose recognition result is a living body, and reconfirming the recognition result to obtain a confirmation result, where the confirmation result includes liveness judgment correct or liveness judgment incorrect.
The judgment model is obtained as follows: obtaining training data, the training data including a plurality of training images containing face regions; and inputting the training data into a preset neural network model for training to obtain the judgment model.
In this embodiment, the training data may be face image data collected in advance by an image acquisition device. Further, the training data may include image data of multiple classes of people of different genders, ages and ethnic groups.
In this embodiment, the neural network model may be a recurrent neural network (RNN) model, a convolutional neural network (CNN) model, or the like. Of course, it should be understood that other neural network models may also be used; the embodiments of the present invention do not limit the type of neural network model chosen.
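As one possible concrete instance only, a small CNN binary classifier of the kind described for the judgment model could look as follows; PyTorch, the 64x64 input size, the layer sizes and the living/non-living label encoding are illustrative assumptions, not choices specified by this embodiment.

```python
import torch
import torch.nn as nn

class LivenessJudgmentModel(nn.Module):
    """Minimal CNN mapping a 64x64 RGB face crop to living / non-living logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, 2)   # two classes: living, non-living

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Illustrative training step on one batch of labelled face crops.
model = LivenessJudgmentModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 64, 64)        # stand-in for training face images
labels = torch.randint(0, 2, (8,))        # 1 = living, 0 = non-living (assumed encoding)
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```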
Re-judging the recognition result with the above judgment model improves the accuracy of liveness recognition and further identifies non-living bodies that cannot be identified by interpupillary distance alone.
In this embodiment, steps S201 to S203 first quickly and preliminarily identify a living body, and the judgment model then further confirms the accuracy of the recognition result.
Embodiment three
Corresponding to the image recognition method provided in Embodiment two, this embodiment provides an image recognition device. The modules of the image recognition device in this embodiment are used to execute the steps of the method in Embodiment two. Fig. 4 shows a structural schematic diagram of the image recognition device provided by the embodiments of the present invention; as shown in Fig. 4, the device includes the following modules.
A first computing module 401, configured to input the image to be recognized into the recognition model for computation, to obtain the recognized interpupillary distance of the target portrait in the image to be recognized.
A second computing module 402, configured to calculate the difference between the recognized interpupillary distance and the pre-stored reference interpupillary distance of the target portrait.
A matching module 403, configured to match the difference against the set recognition criterion to obtain a target recognition result, the recognition criterion including a plurality of recognition results and a difference range corresponding to each recognition result.
The matching module 403 is also configured to identify the range in which the difference falls; when the difference is within the first preset range, to obtain a recognition result that the object corresponding to the target portrait is a living body; and when the difference is within the second preset range, to obtain a recognition result that the object corresponding to the target portrait is a non-living body.
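A minimal object-level sketch of these three modules is shown below; the class and method names, and the default threshold, are assumptions for illustration, while the actual device of this embodiment is defined by Fig. 4.

```python
class ImageRecognitionDevice:
    """Three-module structure mirroring Fig. 4: compute IPD, compute difference, match."""

    def __init__(self, recognition_model, reference_ipd, first_range=(-2.0, 2.0)):
        self.recognition_model = recognition_model   # maps an image to an IPD value
        self.reference_ipd = reference_ipd
        self.first_range = first_range

    def first_computing_module(self, image):
        return self.recognition_model(image)

    def second_computing_module(self, recognized_ipd):
        return recognized_ipd - self.reference_ipd

    def matching_module(self, difference):
        low, high = self.first_range
        return "living" if low <= difference <= high else "non-living"

    def recognize(self, image):
        ipd = self.first_computing_module(image)
        return self.matching_module(self.second_computing_module(ipd))
```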
In this embodiment, the recognition model is obtained by training via a fitting module, configured to fit the interpupillary-distance calculation formula from the collected fitting data to form the recognition model, the fitting data including a plurality of images containing faces.
In this embodiment, the fitting module includes the following units.
An image acquisition unit, configured to acquire, by the image acquisition device, the specified number of depth images of the designated person, each depth image containing a facial image of the designated person.
A first-distance computing unit, configured to calculate, for each of the specified number of depth images, the first distance between the pupils in the facial image and the image acquisition device.
A second-distance computing unit, configured to calculate, for each of the specified number of depth images, the second distance between the pupils of the facial image.
An interpupillary-distance acquiring unit, configured to obtain the actual interpupillary distance of the designated person.
A parameter calculation unit, configured to calculate the fitting parameter from the actual interpupillary distance, the specified number of first distances and the specified number of second distances. In this embodiment, the parameter calculation unit is implemented by the following formula:
where K denotes the fitting parameter; D_REAL denotes the actual interpupillary distance; N denotes the specified number; d_i denotes the i-th first distance among the specified number of depth images; and dp_i denotes the i-th second distance among the specified number of depth images.
A formula fitting unit, configured to fit the interpupillary-distance calculation formula as the product of the fitting parameter, the distance between the pupils of the facial image and the image acquisition device, and the pixel distance between the pupils of the facial image.
In this embodiment, the image acquisition unit is also configured to acquire the specified number of depth images of the designated person at different distances from the image acquisition device.
In this embodiment, the first-distance computing unit is also configured to detect, according to the face detection algorithm, the facial key feature points in the facial image of each of the specified number of depth images; to determine the pupil positions among the facial key feature points; and to obtain the first distance from the pixel value at the pupil position.
In this embodiment, the second-distance computing unit is also configured to detect, according to the face detection algorithm, the facial key feature points in the facial image of each of the specified number of depth images; to determine the pupil positions among the facial key feature points; and to calculate the pixel Euclidean distance between the two pupils from the determined pupil positions.
In this embodiment, the image recognition device is also configured to acquire the image to be recognized by the image acquisition device, the image to be recognized containing a facial image.
In this embodiment, the image recognition device is also configured to acquire the image to be recognized by the depth camera, the image to be recognized being a depth image.
Further, the image recognition device is also configured to re-identify, by the pre-trained judgment model, the image to be recognized whose recognition result is a living body, and to reconfirm the recognition result to obtain a confirmation result, where the confirmation result includes liveness judgment correct or liveness judgment incorrect.
The judgment model is realized by the following modules:
an obtaining module, configured to obtain training data, the training data including a plurality of training images containing face regions; and
a training module, configured to input the training data into the preset neural network model for training to obtain the judgment model.
For other details of this embodiment, reference may also be made to the description in the above method embodiment, which is not repeated here.
With the image recognition device of the embodiments of the present invention, the interpupillary distance of the image to be recognized is computed, and the difference between the recognized interpupillary distance and the reference interpupillary distance is matched against the recognition criterion to obtain the recognition result. A physical characteristic of a human being is thereby converted into an image-based recognition, reducing the need for contact-based operations on the human body while still detecting human-body characteristics. Attacks that are highly realistic but whose interpupillary distance differs from that of a real person can therefore be quickly detected, improving the performance and robustness of the image-recognition decision algorithm.
In addition, an embodiment of the present invention provides an electronic device including a memory and a processor, the memory storing a computer program executable on the processor, where the processor, when executing the computer program, implements the steps of the method provided by the foregoing method embodiments.
Further, an embodiment of the present invention also provides a computer program product for the image recognition method and device, including a computer-readable storage medium storing program code, where the instructions included in the program code can be used to execute the method described in the foregoing method embodiments; for specific implementation, reference may be made to the method embodiments, which are not described again here.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely exemplary. For example, the flowcharts and block diagrams in the drawings show the possible architectures, functions and operations of the devices, methods and computer program products of the multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device including that element.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the invention; for those skilled in the art, the invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention. It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
The above description is merely a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can easily be conceived by those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (15)
1. An image recognition method, characterized by including:
inputting an image to be recognized into a recognition model for computation, to obtain a recognized interpupillary distance of a target portrait in the image to be recognized;
calculating the difference between the recognized interpupillary distance and a pre-stored reference interpupillary distance of the target portrait; and
matching the difference against a set recognition criterion to obtain a recognition result for the image to be recognized, where the recognition criterion includes a correspondence between recognition results and ranges of the difference between the recognized interpupillary distance and the reference interpupillary distance, and the recognition result indicates whether the object corresponding to the target portrait is a living body or a non-living body.
2. The image recognition method according to claim 1, characterized in that the step of matching the difference against the set recognition criterion to obtain the target recognition result includes:
when the difference is within a first preset range, obtaining a recognition result that the object corresponding to the target portrait is a living body;
when the difference is within a second preset range, obtaining a recognition result that the object corresponding to the target portrait is a non-living body.
3. The image recognition method according to claim 1 or 2, characterized in that the recognition model is obtained as follows:
an interpupillary-distance calculation formula is fitted from collected fitting data to form the recognition model, the fitting data including a plurality of images containing faces.
4. The image recognition method according to claim 3, characterized in that it is applied to an electronic device including an image acquisition device, and the step of fitting the interpupillary-distance calculation formula from the collected fitting data to form the recognition model includes:
acquiring, by the image acquisition device, a specified number of depth images of a designated person, each depth image containing a facial image of the designated person;
calculating, for each of the specified number of depth images, a first distance between the pupils in the facial image and the image acquisition device;
calculating, for each of the specified number of depth images, a second distance between the pupils of the facial image;
obtaining the actual interpupillary distance of the designated person;
calculating a fitting parameter from the actual interpupillary distance, the specified number of first distances and the specified number of second distances; and
fitting the interpupillary-distance calculation formula as the product of the fitting parameter, the distance between the pupils of the facial image and the image acquisition device, and the distance between the pupils of the facial image.
5. The image recognition method according to claim 4, characterized in that the step of acquiring, by the image acquisition device, the specified number of depth images of the designated person includes:
acquiring the specified number of depth images of the designated person at different distances from the image acquisition device.
6. The image recognition method according to claim 4, characterized in that calculating the fitting parameter from the actual interpupillary distance, the specified number of first distances and the specified number of second distances is implemented by the following formula:
where K denotes the fitting parameter; D_REAL denotes the actual interpupillary distance; N denotes the specified number; d_i denotes the i-th first distance among the specified number of depth images; and dp_i denotes the i-th second distance among the specified number of depth images.
7. The image recognition method according to claim 4, characterized in that the step of calculating, for each of the specified number of depth images, the first distance between the pupils in the facial image and the image acquisition device includes:
detecting facial key feature points in the facial image of each of the specified number of depth images according to a face detection algorithm;
determining the pupil positions among the facial key feature points; and
obtaining the first distance from the pixel value at the pupil position.
8. The image recognition method according to claim 4, characterized in that the step of calculating, for each of the specified number of depth images, the second distance between the pupils of the facial image includes:
detecting facial key feature points in the facial image of each of the specified number of depth images according to a face detection algorithm;
determining the pupil positions among the facial key feature points; and
calculating the pixel Euclidean distance between the two pupils from the determined pupil positions.
9. The image recognition method according to claim 1, characterized in that it is applied to an electronic device including an image acquisition device, and before the step of inputting the image to be recognized into the recognition model for computation to obtain the recognized interpupillary distance of the target portrait in the image to be recognized, the method further includes:
acquiring, by the image acquisition device, the image to be recognized, the image to be recognized containing a facial image; or
receiving the image to be recognized sent by another device.
10. The image recognition method according to claim 9, characterized in that the image acquisition device is a depth camera, and the step of acquiring the image to be recognized by the image acquisition device includes:
acquiring the image to be recognized by the depth camera, the image to be recognized being a depth image.
11. The image recognition method according to claim 1, characterized in that the method further includes:
re-identifying, by a pre-trained judgment model, the image to be recognized whose recognition result is a living body, and reconfirming the recognition result to obtain a confirmation result, where the confirmation result includes liveness judgment correct or liveness judgment incorrect.
12. The image recognition method according to claim 11, characterized in that the judgment model is realized as follows:
obtaining training data, the training data including a plurality of training images containing face regions; and
inputting the training data into a preset neural network model for training to obtain the judgment model.
13. An image recognition device, characterized by including:
a first computing module, configured to input an image to be recognized into a recognition model for computation, to obtain a recognized interpupillary distance of a target portrait in the image to be recognized;
a second computing module, configured to calculate the difference between the recognized interpupillary distance and a pre-stored reference interpupillary distance of the target portrait; and
a matching module, configured to match the difference against a set recognition criterion to obtain a target recognition result, the recognition criterion including a plurality of recognition results and a difference range corresponding to each recognition result, and the recognition result indicating whether the object corresponding to the target portrait is a living body or a non-living body.
14. An electronic device, including a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 12.
15. A computer-readable storage medium storing a computer program, characterized in that the computer program, when run by a processor, executes the steps of the method according to any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810681293.2A CN108921080A (en) | 2018-06-27 | 2018-06-27 | Image-recognizing method, device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810681293.2A CN108921080A (en) | 2018-06-27 | 2018-06-27 | Image-recognizing method, device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108921080A (en) | 2018-11-30 |
Family
ID=64424084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810681293.2A Pending CN108921080A (en) | 2018-06-27 | 2018-06-27 | Image-recognizing method, device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108921080A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105260731A (en) * | 2015-11-25 | 2016-01-20 | 商汤集团有限公司 | A system and method for human face liveness detection based on light pulses |
WO2018009568A1 (en) * | 2016-07-05 | 2018-01-11 | Wu Yecheng | Spoofing attack detection during live image capture |
CN106529414A (en) * | 2016-10-14 | 2017-03-22 | 国政通科技股份有限公司 | Method for realizing result authentication through image comparison |
CN106778559A (en) * | 2016-12-01 | 2017-05-31 | 北京旷视科技有限公司 | The method and device of In vivo detection |
CN106803065A (en) * | 2016-12-27 | 2017-06-06 | 广州帕克西软件开发有限公司 | A kind of interpupillary distance measuring method and system based on depth information |
CN106898119A (en) * | 2017-04-26 | 2017-06-27 | 华迅金安(北京)科技有限公司 | Safety operation intelligent monitoring system and method based on binocular camera |
CN206807609U (en) * | 2017-06-09 | 2017-12-26 | 深圳市迪威泰实业有限公司 | A kind of USB binoculars In vivo detection video camera |
Non-Patent Citations (3)
Title |
---|
孙凤芝 (Sun Fengzhi): "Numerical Calculation Methods and Experiments" (《数值计算方法与实验》), 31 January 2013 *
章毓晋 (Zhang Yujin): "Fundamentals of Image Processing and Analysis" (《图像处理和分析基础》), 31 July 2002 *
董支星 (Dong Zhixing) et al.: "Patent Analysis of Liveness Verification in Face Recognition" (人脸识别活体验证专利分析), Audio Engineering (《电声技术》) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110110630A (en) * | 2019-04-25 | 2019-08-09 | 珠海格力电器股份有限公司 | Face recognition method and device |
CN111753271A (en) * | 2020-06-28 | 2020-10-09 | 深圳壹账通智能科技有限公司 | Account opening identity verification method, account opening identity verification device, account opening identity verification equipment and account opening identity verification medium based on AI identification |
CN113624952A (en) * | 2021-10-13 | 2021-11-09 | 深圳市帝迈生物技术有限公司 | In-vitro diagnosis device, detection method thereof and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106778525B (en) | Identity authentication method and device | |
CN108573203B (en) | Identity authentication method and device and storage medium | |
CN105117695B (en) | In vivo detection equipment and biopsy method | |
US10621454B2 (en) | Living body detection method, living body detection system, and computer program product | |
CN106599772B (en) | Living body verification method and device and identity authentication method and device | |
CN106407914B (en) | Method and device for detecting human face and remote teller machine system | |
US9985963B2 (en) | Method and system for authenticating liveness face, and computer program product thereof | |
Li et al. | Understanding OSN-based facial disclosure against face authentication systems | |
CN107844748A (en) | Auth method, device, storage medium and computer equipment | |
CN108369785A (en) | Activity determination | |
CN107844744A (en) | With reference to the face identification method, device and storage medium of depth information | |
CN108573202A (en) | Identity identifying method, device and system and terminal, server and storage medium | |
CN110851835A (en) | Image model detection method and device, electronic equipment and storage medium | |
CN105518708A (en) | Method and equipment for verifying living human face, and computer program product | |
CN106997452B (en) | Living body verification method and device | |
CN109886697A (en) | Method, apparatus and electronic equipment are determined based on the other operation of expression group | |
WO2016084072A1 (en) | Anti-spoofing system and methods useful in conjunction therewith | |
CN109829370A (en) | Face identification method and Related product | |
CN106599872A (en) | Method and equipment for verifying living face images | |
CN108108711B (en) | Face control method, electronic device and storage medium | |
CN109815813A (en) | Image processing method and related products | |
CN108629259A (en) | Identity identifying method and device and storage medium | |
CN113591603A (en) | Certificate verification method and device, electronic equipment and storage medium | |
CN108921080A (en) | Image-recognizing method, device and electronic equipment | |
CN111898538A (en) | Certificate authentication method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181130 |