US20200125824A1 - Method of extracting features from a fingerprint represented by an input image - Google Patents
- Publication number: US20200125824A1 (application US 16/658,384)
- Authority: US (United States)
- Prior art keywords: input image, orientation, angular deviation, image, features
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/1347—Preprocessing; Feature extraction
- G06V40/1359—Extracting features related to ridge properties; Determining the fingerprint type, e.g. whorl or loop
- G06V40/1365—Matching; Classification
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
- G06V10/243—Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/82—Arrangements for image or video recognition or understanding using neural networks
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06K9/00067; G06K9/00087; G06K9/3208; G06K9/6256 (legacy codes)
Definitions
- the present invention relates to the field of biometrics, and in particular proposes a method for extracting features of interest from a fingerprint represented by an input image, with a view to a biometric processing of the input image.
- Biometric authentication/identification consists of recognizing an individual on the basis of biometric traits of that individual, such as fingerprints (digital recognition), the iris or the face (facial recognition).
- fingertip images are processed so as to extract the features of a print that can be classified into three categories:
- Level 1 defines the general pattern of that print (one of four classes: right loop, left loop, arch and spiral), and the overall layout of the ridges (in particular, an orientation map called “Ridge Flow Matrix”—RFM map—is obtained, which represents the general direction of the ridge at each point of the print).
- Level 2 defines the particular points of the prints called minutia, which constitute “events” along the ridges (end of a ridge, bifurcation, etc.).
- the conventional recognition approaches essentially use these features.
- Level 3 defines more complex information such as the shape of the ridges, pores of the skin, scars, etc.
- the method of extracting features from a print is called "encoding," which makes it possible to compose a signature called a "template" encoding the useful information for the final phase of classification. More specifically, classification will be done by comparing the feature maps obtained with one or more reference feature maps associated with known individuals.
- Image improvement (contrast enhancement, noise reduction, etc.);
- One approach is to use neural networks, which are already extensively used for data classification.
- After an automatic training phase (generally supervised, meaning on an already classified reference database), a neural network "learns" and becomes capable on its own of applying the same classification to unknown data.
- a convolutional neural network (CNN) can be trained to recognize an individual on the basis of biometric traits of that individual, insofar as those data are handled in the form of images.
- the present invention relates to a method for extracting features of interest from a fingerprint represented by an input image, the method being characterized in that it comprises the implementation, by data processing means of client equipment, of steps of:
- said CNN comprises a set of successive convolution layers having a decreasing filter size and a decreasing number of filters
- step (a) comprises the identification of at least one potential angular deviation class of orientation of said input image with respect to the reference orientation among a plurality of predetermined potential angular deviation classes, each potential angular deviation class being associated with an angular deviation value representative of the class, the estimated candidate angular deviation(s) having as values the representative value(s) of the identified class(es);
- each class is defined by an interval of angular deviation values
- intervals are continuous and form a partition of a given range of potential angular orientation deviations
- step (a) comprises determining an orientation vector of said input image associating with each of said plurality of potential angular deviation classes a score representative of the probability that said input data belongs to said potential angular deviation class;
- the method comprises a prior training step (a0), by data processing means of a server, from a fingerprint image database where each image is already associated with a class of said plurality of predetermined potential classes of angular deviation, of parameters of said CNN;
- said training uses a Sigmoid type cost function
- a recalibrated image is generated in step (b) for each estimated candidate angular deviation, step (c) being implemented for each recalibrated image;
- step (c) is also implemented on the non-recalibrated input image
- said features of interest to be extracted from the fingerprint represented by said input image comprise the position and/or orientation of minutia
- said fingerprint represented by the input image is that of an individual, the method further comprising a step (d) of identifying or authenticating said individual by comparison of the features of interest extracted from the fingerprint represented by said input image, with the features from reference fingerprints.
- the invention proposes a computer program product comprising code instructions for the execution of a method according to the first aspect of extraction of features of interest of a fingerprint represented by an input image; and a storage means readable by a computer equipment on which a computer program product comprises code instructions for executing a method according to the first aspect of extraction of features of interest from a fingerprint represented by an input image.
- FIG. 1 is a diagram of an architecture for implementation of the method according to the invention.
- FIG. 2 depicts an example of convolutional neural network for the implementation of the method according to the invention
- FIG. 3 represents an example of an orientation vector obtained using said convolutional neural network
- FIGS. 4a-4b illustrate two performance tests of different modes of carrying out a process according to this invention.
- the present invention proposes a method for extracting features of interest from a fingerprint represented by an input image.
- This method consists typically of “encoding” the print, i.e. said features of interest to be extracted are typically “biometric” features, namely “final” features making it possible to compose a template of the fingerprint for purpose of classification (identification/authentication of an individual by comparing the features of interest extracted from the fingerprint represented by said input image with the reference fingerprint features, see below).
- said features of interest typically describe minutia, i.e. they comprise the position and/or orientation of the minutia.
- the present method is not limited to this embodiment, and all the features possibly of interest in biometrics can be extracted at the end of this method.
- the present method is distinct in that it proposes a step (a) of direct estimation (from the input image) of at least one candidate angular deviation of an orientation of said input image with respect to a reference orientation, by means of a convolutional neural network, CNN.
- the idea of this method is to allow in a step (b) the recalibration of said input image according to said estimated candidate angular deviation, so that the orientation of the recalibrated image matches said reference orientation.
- feature extraction can be implemented on a “well oriented” recalibrated image in order to obtain a robust process with respect to orientation variations.
- said reference orientation corresponds to an arbitrarily chosen orientation, such as the one in which the finger is vertical and directed upwards (called "North-South"), i.e. the natural orientation when the finger is pressed on a sensor under good conditions; this corresponds to the usual orientation of the reference fingerprint databases.
- the reference orientation is preferentially fixed with respect to the finger orientation, in particular, equal to the finger orientation, but it will be understood that any reference orientation can be used as a starting point.
- the angular orientation deviation can be expressed as an angle value, for example in the trigonometric direction (positive when the image is turned counter-clockwise, negative otherwise).
- a 360° range is preferred as it corresponds to all potential orientations, but it will be understood that it is possible to work on a smaller range corresponding to most of the observed orientations, for example [−45°, +45°].
- for a smartphone fingerprint sensor-type application, it is very rare to have an orientation deviation of 180°, i.e. a completely reversed fingerprint, but, on the other hand, it is very common to have deviations of 10°-20° in absolute value with respect to the reference orientation.
- any orientation of the fingerprint can be found.
- a detection CNN is capable of detecting elements of interest (objects) of various categories in an image in the form of a "bounding box": a first "pre-processing" module (of VGG-16 type in Faster R-CNN) extracts candidate areas of the input image as potential bounding boxes of an element of interest, and a "post-processing" module selects/classifies the most probable bounding boxes (i.e. determines the category of the element of interest in the box).
- to use a detection CNN to estimate an orientation, it is sufficient to consider the various fingerprint orientations as different categories of elements of interest. Once the best bounding box has been identified for a print in the input image, the center of the predicted bounding box and the associated category are extracted to deduce the corresponding position.
- orientation deviations can thus be extracted automatically, without any prior information on their nature, even when the fingerprint is degraded or has a "whorl" pattern (also called a spire or whirlpool), which, unlike an "arch" pattern, does not allow left and right to be distinguished.
- the present method is implemented within an architecture such as shown by FIG. 1 , with a server 1 and a client 2 .
- the server 1 is the training device (implementing the training of the CNN) and the client 2 is a classification device (implementing the present method of extracting features of interest from a fingerprint), for example a user terminal.
- server 1 is that of a security service provider
- the client 2 is a personal consumer device, particularly a smartphone, a personal computer, a tablet, a safe, etc., or latent fingerprint acquisition equipment.
- each device 1 , 2 is typically remote computer equipment connected to an extended network 10 such as the Internet for the exchange of data.
- Each comprises data processing means 11 , 21 of processor type, and data storage means 12 , 22 such as computer memory, for example a flash memory or a hard disc.
- the server 1 stores a training database, i.e. a set of fingerprint images for which orientation is already known (see below how to represent it) in contrast with said input images that are to be processed.
- the client device 2 advantageously comprises a fingerprint scanner 23 , so as to be able to directly acquire said input image, typically so that a user can be authenticated.
- a CNN generally comprises four types of layers successively processing information:
- the convolution layer which processes blocks from the input one after the other
- the nonlinear layer, which implements an activation function, adding nonlinearity to the network and therefore allowing much more complex decision functions;
- the pooling layer (POOL), which allows several neurons to be grouped into a single neuron;
- the fully connected layer, which connects all the neurons from one layer to all the neurons of the preceding layer (for classification).
- the nonlinear layers NL are often preceded by a batch normalization layer ("BN layer"), so as to accelerate the training. A commonly used activation function is ReLU (Rectified Linear Unit).
- a typical pooling function is AvgPool, which corresponds to an average among the values of a square (several values are pooled into only one).
- the convolution layer, labeled CONV, and the fully connected layer, labeled FC, generally correspond to a scalar product between the neurons of the preceding layer and the weights from the CNN.
- a CNN typically comprises a set of successive convolution layers.
- each of said convolution layers can be followed by a batch normalization layer BN and/or a non-linear layer, in particular ReLU, preferably both in that order.
- Typical CNN architectures stack several pairs of CONV → NL layers, then add a POOL layer, and repeat this scheme [(CONV → NL)^p → POOL] until obtaining an output of sufficiently small size, then end with two fully connected FC layers.
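As an illustration of how such a stack shrinks the spatial size, the following sketch computes the output size of the [(CONV → NL)^p → POOL] scheme. The kernel, stride, and padding values (3×3 convolutions with stride 1 and padding 1, 2×2 pooling with stride 2) are illustrative assumptions, not parameters taken from the invention:

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    """Spatial size after a convolution layer (hypothetical hyperparameters)."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial size after a pooling layer."""
    return (size - kernel) // stride + 1

def stack(size, p=2, repeats=3):
    """Apply the [(CONV -> NL)^p -> POOL] scheme `repeats` times.

    NL (and BN) layers do not change the spatial size; only CONV and POOL do.
    """
    for _ in range(repeats):
        for _ in range(p):
            size = conv_out(size)  # 3x3 conv, stride 1, padding 1: size unchanged
        size = pool_out(size)      # 2x2 pool, stride 2: size halved
    return size

print(stack(128))  # → 16 (128 → 64 → 32 → 16)
```

Each repetition of the scheme halves the spatial size, which is how the output reaches a "sufficiently small size" before the FC layers.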
- the direct determination of the orientation angular deviation is seen as a "pure" classification of the input image, i.e. the problem is quantized into classes, without any object detection task being involved.
- this classification CNN does not predict any bounding box, nor does it directly estimate precise values of candidate orientation angular deviation; it directly identifies at least one potential orientation angular deviation class of said input image with respect to the reference orientation among a plurality of predetermined potential angular deviation classes, each potential angular deviation class being associated with an angular deviation value representative of the class (which will be the value taken into account when a class is identified).
- this CNN associates one or more classifications with the input image without prior processing. This is in contrast to the mentioned CNNs of the Faster R-CNN type, in which a detection block (such as VGG-16) proposes candidate bounding boxes, which are each selected and classified; in other words, the classification is implemented on the candidate boxes rather than directly on the input image.
- it should be noted that any CNN produces a number of internal states, namely feature maps, including in this case of said "direct" classification; but such feature maps are part of the normal functioning of any CNN, in contrast to the candidate boxes of a detection CNN, which are intermediate results.
- this CNN associates only classifications (generally in the form of a confidence score) with the input image, and no other outputs such as box coordinates or an annotated image.
- Each class is defined in particular by an interval (preferably continuous, but possibly a discontinuous union of continuous sub-intervals) of angular deviations, and the angular deviation value representative of the class is then typically the average value of the interval.
- for example, an interval [10°, 20°] can be associated with the value 15°.
- the set of intervals preferably forms a partition of a range of potential orientation angular deviations (as stated, 360° or less).
- the intervals defining the classes are all the same size. For example, 360 classes of 1 degree each (for example, [2°, 3°] is an interval), or 60 classes of 6 degrees each (for example [6°, 12°] is an interval) can be considered.
- alternatively, the intervals defining the classes have different sizes, particularly according to a probability density. For example, for a smartphone fingerprint sensor-type application, it is, as explained, much more likely to have an orientation deviation in the range of 10°-20° than a deviation of 90°-100°. In such an application, narrower intervals can be expected for small deviations than for large deviations, for example on one side [2°, 4°] and on the other side [90°, 105°]. The person skilled in the art will be able to construct any set of orientation deviation classes of his/her choice.
- the estimated candidate angular deviation values are thus discrete values determined by the classes, and more precisely the angular deviation values representative of the identified classes. Therefore, for a given fingerprint image, the CNN identifies one or more candidate classes, i.e. classes defined by an interval likely to contain the actual value of the orientation angular deviation of this image with respect to the reference orientation (it is noted that several classes can be identified, see below, although in practice the actual angular deviation has a unique value), and for each identified class, the representative angular deviation value of the class is considered as a candidate angular deviation estimated in step (a).
- for example, if the class corresponding to the interval [6°, 12°] is identified, and if the representative value of this class is 9° (in the case of the average), then the estimated candidate angular deviation value is 9°.
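By way of illustration, the mapping from an angular deviation to its class and representative value can be sketched as follows for the case of 60 equal classes of 6 degrees each (the helper names are hypothetical, introduced only for this sketch):

```python
CLASS_WIDTH = 6  # degrees; 60 classes partitioning [0°, 360°)

def class_index(deviation_deg):
    """Class of an orientation angular deviation, for equal 6-degree intervals."""
    return int(deviation_deg % 360) // CLASS_WIDTH

def representative(cls):
    """Representative value of a class: here, the midpoint of its interval."""
    return cls * CLASS_WIDTH + CLASS_WIDTH / 2

cls = class_index(7.0)  # 7° falls in the interval [6°, 12°)
print(cls, representative(cls))  # → 1 9.0
```

This reproduces the example above: the class for the interval [6°, 12°] has 9° as its representative (average) value.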
- a plurality of classes can be identified if the CNN returns an orientation vector, i.e. a score for each class. More precisely, the orientation vector of an input image associates each of the plurality of classes with a score representative of the probability of belonging of said input data to the potential angular deviation class; in other words, the probability that the actual angular deviation of orientation belongs to the corresponding interval.
- the vector therefore has a dimension equal to the number of classes, i.e. 60 in our example. It is understood that a classification CNN, as defined above, returns only the orientation vector, in contrast to a detection CNN.
- the class or classes identified may be the k (for example, two) with the highest scores (i.e., a fixed number of classes is identified), or all those with a score above a predetermined threshold (for example, a probability greater than 10%, or even greater than 20%, depending on the desired flexibility and the "risk" of having improperly oriented input images; the threshold can be lowered to maximize the number of peaks considered, due to the potential multiplicity of plausible orientations).
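The two selection strategies above (a fixed number k of classes, or a score threshold) can be sketched as follows, assuming the orientation vector is given as a plain list of scores (the helper is hypothetical):

```python
def candidate_classes(scores, k=None, threshold=None):
    """Identify candidate classes from an orientation vector of scores.

    Either the k highest-scoring classes, or all classes whose score
    exceeds a predetermined threshold.
    """
    if k is not None:
        return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return [i for i, s in enumerate(scores) if s > threshold]

# Toy 5-class orientation vector (a real one would have e.g. 60 entries).
scores = [0.05, 0.55, 0.02, 0.25, 0.13]
print(candidate_classes(scores, k=2))             # → [1, 3]
print(candidate_classes(scores, threshold=0.10))  # → [1, 3, 4]
```

Lowering the threshold (here from 0.25 to 0.10) admits more peaks, matching the trade-off described in the text.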
- Said CNN can be of many types, including a conventional CNN (direct succession of CONV convolution layers, BN batch normalization layers, and NL non-linear layers).
- preferably, said CNN is of the residual network type. Such residual networks are very efficient for their single task of classifying the input image, and are much lighter than detection CNNs, whose image-processing blocks, such as VGG-16 or VGG-19, are particularly massive.
- a residual network is a CNN with at least one "residual connection" (also known as "skip connection" or simply "short-cut"), i.e. a connection by which at least one layer is "short-circuited", by analogy with what is found in the pyramidal neurons of the brain.
- the main branch (shorted by the residual connection) has a plurality of convolution layers each followed by a batch normalization layer and/or a non-linear layer of the ReLU type.
- the output of this branch is typically added point by point with the input image (due to the residual connection), and goes through a final activation layer of the ReLU type, with a reduction in dimensionality (typically MaxPool).
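As an illustration of the point-by-point addition of the residual connection, the sketch below uses simple elementwise arithmetic as a stand-in for the CONV/BN/ReLU main branch (the `branch` function is a hypothetical toy, not the network of the invention):

```python
import numpy as np

def relu(x):
    """ReLU activation: negative values are clipped to zero."""
    return np.maximum(x, 0.0)

def residual_block(x, branch):
    """Skip connection: the main branch's output is added point by point
    to the block input, then passed through a final ReLU activation."""
    return relu(branch(x) + x)

# Toy main branch standing in for the CONV -> BN -> ReLU stack.
branch = lambda x: relu(2.0 * x - 1.0)

x = np.array([-1.0, 0.5, 2.0])
print(residual_block(x, branch))  # → [0.  0.5 5. ]
```

Even when the branch outputs zero (first two entries), the input still flows through the short-cut, which is what makes such networks easy to train.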
- the network typically ends with a fully connected layer (IP/FC).
- the method begins with a training step (a0), performed by the data processing means 11 of the server 1, of the parameters of said CNN, from a database of labeled fingerprint images (i.e. images for which the corresponding orientation angular deviation class is known).
- This training can be done in a traditional way, minimizing the loss function.
- the main difficulty of this approach is to extract the features relevant to the determination of the expected angular deviation, while penalizing those features that do not allow class discrimination.
- a first possibility is to use cross entropy on a Softmax as the loss function, since it makes it possible to extract those features that maximize the probability of the expected angular deviation while penalizing the probability of the other angular deviations (i.e. of the other classes).
- since the Softmax function favors only one class, a Sigmoid can alternatively be used in the loss function if it is desired to increase the chances of having several identified classes: this privileges not only the expected orientation angular deviation, but also, with a lower probability, those that are closest.
- the orientation vector will have two distinct peaks of scores, see for example FIG. 3 , so as to take into account the two potential cases, each of which remains probable.
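The difference can be illustrated numerically; here it is assumed, for the sketch, that scores are produced from raw logits by a Softmax or an independent Sigmoid activation respectively (an ambiguous print, e.g. a whorl, gives two equally supported classes):

```python
import math

def softmax(logits):
    """Softmax: scores compete, summing to 1 across classes."""
    e = [math.exp(v) for v in logits]
    s = sum(e)
    return [v / s for v in e]

def sigmoid(logits):
    """Sigmoid: each class gets an independent score in (0, 1)."""
    return [1.0 / (1.0 + math.exp(-v)) for v in logits]

# Two classes with equally strong evidence, two weakly supported ones.
logits = [4.0, 4.0, -2.0, -2.0]
print([round(p, 2) for p in softmax(logits)])  # → [0.5, 0.5, 0.0, 0.0]
print([round(p, 2) for p in sigmoid(logits)])  # → [0.98, 0.98, 0.12, 0.12]
```

Softmax splits the probability mass between the two plausible classes, whereas Sigmoid keeps both peaks high, which matches the two-peak orientation vector of FIG. 3.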
- said database of already labeled fingerprint images can advantageously be obtained by data augmentation. More precisely, the starting point is a reference database in which all images are oriented according to the reference orientation (i.e. zero orientation angular deviation). Other versions of these reference prints are then artificially generated by applying various angles of rotation to them (and labeled accordingly), so as to create new training images with varying orientation deviations. For example, angles of ±[5°, 10°, 20°, 30°, 40°, 70°, 120°] are applied to them, creating a new database 14 times larger, to ensure the robustness of the CNN against common acquisition defects.
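The augmentation just described can be sketched as follows; the helper operates on image identifiers only (the actual rotation of pixel data is omitted, and all names are hypothetical):

```python
ROTATIONS = [5, 10, 20, 30, 40, 70, 120]  # degrees, applied with both signs

def augment(image_ids):
    """Label each artificially rotated copy of a reference image with its
    orientation angular deviation (the references themselves have deviation 0)."""
    out = []
    for img in image_ids:
        for a in ROTATIONS:
            out.append((img, +a))
            out.append((img, -a))
    return out

augmented = augment(["print_001"])
print(len(augmented))  # → 14 rotated, labeled versions per reference image
```

Seven angles applied with both signs yield the 14 extra versions per reference print mentioned above.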
- the trained CNN can be stored as necessary on data storage means 22 of the client 2 for use in orientation estimation. It should be noted that the same CNN can be embedded on numerous clients 2 , only one training is necessary.
- in a main step (a), at least one candidate angular deviation of an orientation of said input image with respect to a reference orientation is estimated by the data processing means 21 of the client 2 using the embedded CNN.
- in a step (b), said input image is recalibrated according to said estimated candidate angular deviation, so that the orientation of the recalibrated image matches said reference orientation.
- a recalibrated image can be generated in step (b) for each estimated candidate angular deviation. The idea is that if several probability peaks emerge, it is not possible to know which one is the right one, so all of them need to be evaluated. Therefore, one of the recalibrated images will be “well recalibrated” i.e. oriented in accordance with the reference orientation.
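As a minimal sketch of such a recalibration, applied here to a single minutia (position and orientation) rather than to the whole image, the estimated deviation is rotated out about the image centre; the helper name and signature are hypothetical:

```python
import math

def recalibrate_minutia(x, y, theta_deg, deviation_deg, cx, cy):
    """Rotate a minutia (position + orientation) by the negative of the
    estimated orientation angular deviation, about the image centre (cx, cy)."""
    a = math.radians(-deviation_deg)
    dx, dy = x - cx, y - cy
    rx = cx + dx * math.cos(a) - dy * math.sin(a)
    ry = cy + dx * math.sin(a) + dy * math.cos(a)
    return rx, ry, (theta_deg - deviation_deg) % 360

# A minutia at (60, 50) in a print rotated +90° about the centre (50, 50):
print(recalibrate_minutia(60, 50, 100, 90, 50, 50))
```

Applying this to every pixel (or every extracted feature) with the candidate deviation of each probability peak yields one recalibrated image per candidate.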
- in a step (c), said recalibrated image can be processed so as to extract said features of interest from the fingerprint represented by said input image, which notably can comprise the position and/or orientation of minutia. If several recalibrated images have been generated, step (c) is implemented for each one. In particular, this step (c) can be implemented by means of a second dedicated CNN, taking as input each recalibrated image. The first CNN is then referred to as a recalibration CNN, and the second as a coding CNN. Any known coding CNN can be used here.
- step (c) can also be systematically implemented on the input image “as is”, i.e. without recalibration (in other words, the input images are added with the recalibrated images). Indeed, if the input image is already well oriented, the CNN may tend to recalibrate it anyway, which may lead to a slight decrease in performance.
- the method further comprises a step (d) of identifying or authenticating said individual by comparing the features of interest extracted from the fingerprint represented by said input image, with the fingerprint features of reference, which can be implemented in any known way by the person skilled in the art.
- the client 2 can store the features of the prints of one or more authorized users as reference prints, so as to manage the unlocking of the client equipment 2 (particularly in the case of an input image acquired directly by an integrated scanner 23 ); if the extracted features correspond to those expected from an authorized user, the data processing means 21 consider that the individual attempting to be authenticated is authorized, and they proceed with the unlocking.
- the client 2 can send the extracted features to a remote database of said reference fingerprint features, for identification of the individual.
- FIG. 4 a shows the performance of the recalibration CNN in predicting angular deviation of orientation as a function of the average number of attempts, i.e. the average number of recalibrated images obtained. There is an increase in accuracy to nearly 90% when moving from one attempt (i.e. only the highest peak is considered), to an average of 2.8 attempts.
- FIG. 4 b shows the matching performances, as a function of the real angular deviation of orientation: it can be noted that in the absence of recalibration (a single coding CNN used directly for the extraction of features), performances decrease due to even a few degrees of deviation.
- the various tested configurations all of which include a first recalibration CNN and a second coding CNN, and which differ in the average number of attempts, show that performance is maintained regardless of the angular deviation in orientation.
- the embodiment which also includes the systematic use of the input image (not recalibrated), is a little less effective for large orientation deviations (since the input image is not relevant), but more effective if the angular orientation deviation is small, which remains a frequent case.
- the invention relates to a computer program product comprising code instructions for execution (in particular on data processing means 11 , 21 of the server 1 and/or of the client 2 ) of a method of extracting features of interest from a fingerprint represented by an input image, as well as storage means readable by a computer equipment (a memory 12 , 22 of the server 1 and/or of the client 2 ) on which said computer program product is located.
Abstract
(a) Estimation of at least one candidate angular deviation of an orientation of said input image with respect to a reference orientation, by means of a convolutional neural network (CNN);
(b) Recalibration of said input image as a function of said estimated candidate angular deviation, so that the orientation of the recalibrated image matches said reference orientation;
(c) Processing said recalibrated image so as to extract said features of interest from the fingerprint represented by said input image.
Description
- This application claims the benefit of French Patent Application No. 1859682 filed Oct. 19, 2018, the disclosure of which is herein incorporated by reference in its entirety.
- The present invention relates to the field of biometrics, and in particular proposes a method for extracting features of interest from a fingerprint represented by an input image, with a view to a biometric processing of the input image.
- Biometric authentication/identification consists of recognizing an individual on the basis of biometric traits of that individual, such as fingerprints (digital recognition), the iris or the face (facial recognition).
- Conventional biometric approaches use characteristic information of the biometric trait extracted from the acquired biometry, called features, and the training/classification is done on the basis of the comparison of these characteristics.
- In particular, in the case of fingerprint recognition, fingertip images are processed so as to extract the features of a print that can be classified into three categories:
-
Level 1 defines the general pattern of that print (one of four classes: right loop, left loop, arch and spiral), and the overall layout of the ridges (in particular, an orientation map called “Ridge Flow Matrix”—RFM map—is obtained, which represents the general direction of the ridge at each point of the print). -
Level 2 defines the particular points of the prints called minutiae, which constitute "events" along the ridges (the end of a ridge, a bifurcation, etc.). The conventional recognition approaches essentially use these features. - Level 3 defines more complex information such as the shape of the ridges, the pores of the skin, scars, etc.
- The method of extracting features from a print (in the form of feature maps) is called "encoding"; these feature maps make it possible to compose a signature called a "template", encoding the useful information for the final phase of classification. More specifically, classification is done by comparing the feature maps obtained with one or more reference feature maps associated with known individuals.
- Today there are “encoders” that efficiently perform this operation of extracting features, i.e. algorithms carrying out a set of processes:
- Image improvement (contrast enhancement, noise reduction, etc.);
- Use of dedicated filters (Gabor of different resolutions, differentiators, etc.);
- Use of decision-making methods (thresholding for binarization, extraction of points, etc.).
- One approach is to use neural networks, which are already extensively used for data classification.
- After an automatic training phase (generally supervised, meaning on an already classified reference database), a neural network “learns” and becomes capable on its own of applying the same classification to unknown data.
- Convolutional neural networks (CNN) are a type of neural network wherein the connection pattern between neurons is inspired by the visual cortex of animals. They are thus particularly suited to a specific type of classification, which is image analysis; indeed they allow efficient recognition of people or objects in images or videos, in particular in security applications (e.g. automatic surveillance, threat detection, etc.).
- Also, in the field of biometric authentication/identification, a CNN can be trained to recognize an individual on the basis of biometric traits of that individual insofar as those data are handled in the form of images.
- However, although such approaches have enabled major advances for example in facial recognition, their application to the recognition of fingerprints runs up against specifics inherent in fingerprints and until now the performance has not been persuasive.
- It would therefore be desirable to have a more efficient solution for extracting features from a fingerprint.
- According to a first aspect, the present invention relates to a method for extracting features of interest from a fingerprint represented by an input image, the method being characterized in that it comprises the implementation, by data processing means of a client equipment, of steps of:
- (a) Estimation of at least one candidate angular deviation of an orientation of said input image with respect to a reference orientation, by means of a convolutional neural network (CNN);
(b) Recalibration of said input image as a function of said estimated candidate angular deviation, so that the orientation of the recalibrated image matches said reference orientation;
(c) Processing said recalibrated image so as to extract said features of interest from the fingerprint represented by said input image. - According to other advantageous and nonlimiting characteristics:
- said CNN comprises a set of successive convolution layers having a decreasing filter size and a decreasing number of filters;
- step (a) comprises the identification of at least one potential angular deviation class of orientation of said input image with respect to the reference orientation among a plurality of predetermined potential angular deviation classes, each potential angular deviation class being associated with an angular deviation value representative of the class, the estimated candidate angular deviation(s) having as values the representative value(s) of the identified class(es);
- each class is defined by an interval of angular deviation values;
- said intervals are continuous and form a partition of a given range of potential angular orientation deviations;
- said intervals are all of the same size;
- step (a) comprises determining an orientation vector of said input image associating with each of said plurality of potential angular deviation classes a score representative of the probability that said input data belongs to said potential angular deviation class;
- the method comprises a prior training step (a0), by data processing means of a server, from a fingerprint image database where each image is already associated with a class of said plurality of predetermined potential classes of angular deviation, of parameters of said CNN;
- said training uses a Sigmoid type cost function;
- a recalibrated image is generated in step (b) for each estimated candidate angular deviation, step (c) being implemented for each recalibrated image;
- said CNN is of the residual network type;
- step (c) is also implemented on the non-recalibrated input image;
- said features of interest to be extracted from the fingerprint represented by said input image comprise the position and/or orientation of minutia;
- said fingerprint represented by the input image is that of an individual, the method further comprising a step (d) of identifying or authenticating said individual by comparison of the features of interest extracted from the fingerprint represented by said input image, with the features from reference fingerprints.
- According to a second and third aspect, the invention proposes a computer program product comprising code instructions for the execution of a method according to the first aspect of extraction of features of interest of a fingerprint represented by an input image; and a storage means readable by a computer equipment on which a computer program product comprises code instructions for executing a method according to the first aspect of extraction of features of interest from a fingerprint represented by an input image.
- Other characteristics and advantages of the present invention will appear upon reading the following description of a preferred embodiment. This description will be given with reference to the attached drawings in which:
-
FIG. 1 is a diagram of an architecture for implementation of the method according to the invention; -
FIG. 2 depicts an example of convolutional neural network for the implementation of the method according to the invention; -
FIG. 3 represents an example of an orientation vector obtained using said convolutional neural network; -
FIGS. 4a-4b illustrate two performance tests of different modes of carrying out a process according to this invention. - The present method proposes a method for extracting features of interest from a fingerprint represented by an input image. This method typically consists of "encoding" the print, i.e. said features of interest to be extracted are typically "biometric" features, namely "final" features making it possible to compose a template of the fingerprint for purposes of classification (identification/authentication of an individual by comparing the features of interest extracted from the fingerprint represented by said input image with the reference fingerprint features, see below). In this respect, said features of interest typically describe minutiae, i.e. they comprise the position and/or orientation of the minutiae. However, it will be understood that the present method is not limited to this embodiment, and all features potentially of interest in biometrics can be extracted at the end of this method.
- The present method is distinct in that it proposes a step (a) of direct estimation (from the input image) of at least one candidate angular deviation of an orientation of said input image with respect to a reference orientation, by means of a convolutional neural network, CNN. The notion of "direct" estimation, to which we will return, means that no pre-processing of the input image is required.
- Indeed, while the majority of fingerprints are acquired in a controlled environment and with correct orientation, in some cases (latent fingerprint images, for example at a crime scene; images from smartphone acquisition; etc.) the acquisition, and specifically the orientation, are not controlled. In addition, while the extraction of features remains possible on an improperly oriented print, matching (i.e. matching with the features of a reference print) is difficult or even impossible if the fingerprints represented by the input image and the reference prints are not in the same reference frame.
- Thus, in an evaluation on a latent fingerprint database, using an extractor/matcher developed for perfectly recalibrated prints, a deterioration in performance from 36% to 0% was observed as the difference between the orientation of the latent print and the orientation of the reference print image increased.
- The idea of this method is to allow in a step (b) the recalibration of said input image according to said estimated candidate angular deviation, so that the orientation of the recalibrated image matches said reference orientation.
- Therefore, feature extraction can be implemented on a “well oriented” recalibrated image in order to obtain a robust process with respect to orientation variations.
- It is understood that said reference orientation corresponds to an arbitrarily chosen orientation, such as the one in which the finger is vertical and directed upwards, called "North-South", i.e. the natural orientation when the finger is pressed on a sensor under good conditions, which corresponds to the usual orientation of the reference fingerprint databases. To reformulate, the reference orientation is preferentially fixed with respect to the finger orientation, in particular equal to the finger orientation, but it will be understood that any reference orientation can be used as a starting point.
- The angular orientation deviation can be expressed as an angle value, for example with the trigonometric sign convention (positive when the image is turned counter-clockwise, negative otherwise).
- For example, it can be chosen in a range [0°, 360°] or, in an equivalent way, a range [−180°, +180°]. A 360° range is preferred as it corresponds to all potential orientations, but it will be understood that it is possible to work on a smaller range corresponding to most of the observed orientations, for example [−45°, +45°]. Indeed, in a smartphone fingerprint sensor-type application, it is very rare to have an orientation deviation of 180°, i.e. a completely reversed fingerprint, but it is, on the other hand, very common to have deviations of 10°-20° in absolute value with respect to the reference orientation. Alternatively, in the analysis of latent prints (at a crime scene for example), any orientation of the fingerprint can be found.
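To illustrate, the equivalence between the [0°, 360°] and [−180°, +180°] conventions mentioned above can be sketched with a small helper (an illustrative sketch only, not part of the claimed method; the function name is ours):

```python
def to_signed(angle_deg: float) -> float:
    """Map an angle expressed in [0, 360) to the equivalent signed
    value in [-180, 180), positive in the counter-clockwise direction."""
    return ((angle_deg + 180.0) % 360.0) - 180.0

# A deviation of 350 deg counter-clockwise is the same as 10 deg clockwise
print(to_signed(350.0))  # -10.0
print(to_signed(15.0))   # 15.0
```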
- It should be noted that it is known to recalibrate a fingerprint image as "pre-processing" by image processing algorithms, but it has been discovered that this recalibration can be performed very effectively with neural networks, in particular without the need for a reference image (or model). This is because, during training, the CNN learns to use only the information in the image that is to be recalibrated. In other words, the network has only one input.
- It should also be noted that the use of "detection" CNNs such as Faster R-CNN (see the document Fingerprint Pose Estimation Based on Faster R-CNN, Jiahong Ouyang et al.) or YOLO is known for estimating the placement of a print, i.e. the position of its center and its orientation. More precisely, a detection CNN is capable of detecting elements of interest (objects) of various categories in an image in the form of "bounding boxes": a first "pre-processing" module (of VGG-16 type in Faster R-CNN) extracts candidate areas of the input image as potential bounding boxes of an element of interest, and a "post-processing" module selects/classifies the most probable bounding boxes (i.e. determines the category of the element of interest in the box). To use a detection CNN to estimate an orientation, it is sufficient to consider the various fingerprint orientations as different categories of elements of interest. Then, once the best bounding box has been identified for a print in the input image, the center of the predicted bounding box and the associated category are extracted to deduce the corresponding placement.
- However, such an "indirect" approach remains inconvenient in terms of the power required, and its results can be improved, especially if the print is not of good quality. As shown below, another advantage of the present approach is that orientation deviations can be extracted automatically, without any prior information on their nature, even when the fingerprint is degraded or has a "whorl" type pattern (i.e. a whorl, also called a spire or whirlpool), which does not allow distinguishing left from right, unlike an "arch" type.
- The present method is implemented within an architecture such as shown by
FIG. 1, with a server 1 and a client 2. The server 1 is the training device (implementing the training of the CNN) and the client 2 is a classification device (implementing the present method of extracting features of interest from a fingerprint), for example a user terminal.
- It is quite possible for both devices 1, 2 to be combined, but preferably the server 1 is that of a security service provider, and the client 2 a personal consumer device, particularly a smartphone, a personal computer, a tablet, a safe, etc., or latent fingerprint acquisition equipment.
- In any case, each device 1, 2 is typically remote computer equipment connected to an extended network 10 such as the Internet for the exchange of data. Each comprises data processing means 11, 21 of processor type, and data storage means 12, 22 such as computer memory, for example a flash memory or a hard disc.
- The server 1 stores a training database, i.e. a set of fingerprint images for which the orientation is already known (see below how it is represented), in contrast with said input images that are to be processed.
- The client device 2 advantageously comprises a fingerprint scanner 23, so as to be able to directly acquire said input image, typically so that a user can be authenticated.
- A CNN generally comprises four types of layers successively processing information:
- the convolution layer which processes blocks from the input one after the other;
- the nonlinear layer which implements an activation function, adding nonlinearity to the network and thus allowing much more complex decision functions;
- the pooling layer with which to combine several neurons into a single neuron;
- the fully connected layer which connects all the neurons from one layer to all the neurons of the preceding layer (for classification).
- A batch normalization layer ("BN" layer) is often placed before each non-linear layer NL, so as to accelerate the training.
- The non-linear layer NL activation function is typically the ReLU function (Rectified Linear Unit) which is equal to f(x)=max(0, x) and the most used pooling layer (labeled POOL) is the function AvgPool which corresponds to an average among the values of a square (several values are pooled into only one).
- The convolution layer, labeled CONV, and the fully connected layer, labeled FC, generally correspond to a scalar product between the neurons of the preceding layer and the weights from the CNN.
- In general, a CNN typically comprises a set of successive convolution layers. In a known way and as explained above, each of said convolution layers can be followed by a batch normalization layer BN and/or a non-linear layer, in particular ReLU, preferably both in that order.
- Typical CNN architectures stack several pairs of CONV→NL layers, then add a POOL layer, and repeat this plan [(CONV→NL)p→POOL] until the output is of a sufficiently small size, and then end with two fully connected FC layers.
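As an illustrative sketch (not part of the claimed method), the size reduction produced by this stacking plan can be followed numerically, assuming unpadded k×k convolutions and 2×2 pooling of stride 2; the function name and parameter values are ours:

```python
def output_size(input_size: int, p: int, n: int, k: int = 3) -> int:
    """Spatial size after n blocks, each made of p unpadded k x k
    convolutions (CONV->NL) followed by a 2x2 POOL of stride 2."""
    size = input_size
    for _ in range(n):
        size -= p * (k - 1)  # each unpadded CONV shrinks the map by k-1
        size //= 2           # POOL halves the remaining size
    return size

# A 128x128 input through n=3 blocks of p=2 convolutions ends as a 12x12 map
print(output_size(128, p=2, n=3))  # 12
```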
- This is a typical CNN architecture:
-
INPUT→[[CONV→NL]p→POOL]n→FC→FC - Advantageously, the direct determination of the angular orientation deviation is seen as a “pure” classification of the input image, i.e. the problem is quantified, without any object detection task being involved.
- In other words, this classification CNN does not predict any bounding box, nor does it directly estimate precise values of candidate angular orientation deviation; rather, it directly identifies at least one potential angular deviation class of orientation of said input image with respect to the reference orientation, among a plurality of predetermined potential angular deviation classes, each potential angular deviation class being associated with an angular deviation value representative of the class (which will be the value taken into account when the class is identified). By "directly", it is understood that this CNN associates one or more classifications with the input image without prior processing. This is in contrast to the mentioned CNNs of the Faster R-CNN type, in which a detection block (such as VGG-16) proposes candidate bounding boxes, which are each selected and classified; in other words, the classification is implemented on the candidate boxes (i.e. indirectly) and not directly on the image as a whole. Naturally, any CNN produces a number of internal states in the form of feature maps, including in the case of said "direct" classification, but such feature maps are part of the normal functioning of any CNN, in contrast to the intermediate results that the candidate bounding boxes of a detection CNN constitute.
- It is also understood that this CNN associates only classifications (generally in the form of confidence scores) with the input image, and no other outputs such as bounding box coordinates or an annotated image.
- Each class is defined in particular by an interval (preferably continuous, but possibly a discontinuous union of continuous sub-intervals) of angular deviations, and the angular deviation value representative of the class is then typically the average value of the interval. For example, an interval [10°, 20°] can be associated with the value 15°. The set of intervals preferably forms a partition of the range of potential angular orientation deviations (as stated, 360°, or less).
- According to a first possibility, the intervals defining the classes are all of the same size. For example, 360 classes of 1 degree each (for example, [2°, 3°] is an interval), or 60 classes of 6 degrees each (for example, [6°, 12°] is an interval) can be considered.
- A second possibility is that the intervals defining the classes have different sizes, particularly according to a probability density. For example, for a smartphone fingerprint sensor-type application, it is, as explained, much more likely to have an orientation deviation in the range of 10°-20° than a deviation of 90°-100°. In such an application, narrower intervals can be expected for small deviations than for large deviations, for example on one side [2°, 4° ] and on the other side [90°, 105° ]. The person skilled in the art will be able to construct any set of classes of deviation of orientation of his/her choice.
- It is understood that the estimated candidate angular deviation values are quantized values determined by the classes, and more precisely the angular deviation values representative of the classes. Therefore, for a given fingerprint image, the CNN identifies one or more candidate classes, i.e. classes defined by an interval likely to contain the actual value of the angular deviation of orientation of this image with respect to the reference orientation (it is noted that several classes can be identified, see below, although in practice the actual angular deviation has a unique value), and for each identified class, the representative angular deviation value of the class is considered as a candidate angular deviation estimated in step (a).
- For example, in a 60-class embodiment of equal size, with an input image having an actual angular orientation deviation of 11°, the class corresponding to the interval [6°, 12°] is identified, and if the representative value of this class is 9° (the average of the interval), then the estimated value of the candidate angular deviation is 9°.
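The class identification of this example can be sketched as follows (an illustrative sketch; the helper name and the uniform 60-class layout are taken from the example above, the mid-point being used as representative value):

```python
def deviation_class(angle, n_classes=60, span=360.0):
    """Map an angular deviation in [0, span) to (class index, representative
    value), with n_classes intervals of equal size and the mid-point as
    representative value."""
    width = span / n_classes              # 6 degrees per class here
    idx = int(angle // width)
    return idx, idx * width + width / 2.0

# An actual deviation of 11 deg falls in [6, 12), whose mid-point is 9 deg
print(deviation_class(11.0))  # (1, 9.0)
```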
- A plurality of classes can be identified if the CNN returns an orientation vector, i.e. a score for each class. More precisely, the orientation vector of an input image associates with each of the plurality of classes a score representative of the probability that said input data belongs to the potential angular deviation class; in other words, the probability that the actual angular deviation of orientation belongs to the corresponding interval. The vector therefore has a dimension equal to the number of classes, i.e. 60 in our example. It is understood that a classification CNN, as defined above, preferably returns only the orientation vector, as opposed to a detection CNN.
- As shown below, the class or classes identified may be the k (for example, two) with the highest scores (i.e., a fixed number of classes are identified), or all those with a score above a predetermined threshold (for example, a probability greater than 10%, or even greater than 20%, depending on the desired flexibility and the "risk" of having improperly oriented input images; for example, for a latent fingerprint scanner, the threshold can be lowered to maximize the number of peaks considered, due to the potential multiplicity of plausible orientations).
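The two selection strategies just described (a fixed number k of classes, or a score threshold) can be sketched as follows (illustrative only; the function name and toy scores are ours):

```python
def candidate_classes(orientation_vector, k=None, threshold=0.0):
    """Identify candidate classes from an orientation vector: either the k
    highest-scoring classes, or all classes whose score exceeds a threshold."""
    indexed = sorted(enumerate(orientation_vector), key=lambda t: t[1], reverse=True)
    if k is not None:
        return [i for i, _ in indexed[:k]]
    return [i for i, s in indexed if s > threshold]

# A 5-class toy vector with two probability peaks (classes 1 and 3)
scores = [0.02, 0.55, 0.05, 0.30, 0.08]
print(candidate_classes(scores, k=2))            # [1, 3]
print(candidate_classes(scores, threshold=0.1))  # [1, 3]
```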
- Said CNN can be of many types, including a conventional CNN (direct succession of CONV convolution layers, BN batch normalization layers, and NL non-linear layers).
- According to a preferred embodiment, said CNN is of the residual network type. It can be seen that such residual networks are very efficient for their unique task of classifying the input image, and much lighter than detection CNNs, whose blocks such as VGG-16 or VGG-19 of image processing are particularly massive.
- With reference to
FIG. 2, a residual network, or RESNET, is a CNN with at least one "residual connection" (also known as a "skip connection" or simply "short-cut"), i.e. a connection from which at least one layer is "short-circuited", by analogy with what is found in the pyramidal neurons of the brain.
- Indeed, when a model is made more complex by adding layers, some of these layers can have a negative impact on the model's performance. Residual connections ensure that if a useful transformation is not learned, a layer must at worst learn the identity, thus avoiding degrading the performance of the other layers. The operating principle behind residual networks is to add, point by point, the input and output of a convolution layer, allowing the signal to propagate from the superficial layers to the deeper ones. As explained, this type of network provides excellent results in the direct determination of the angular deviation of orientation, and in particular in pure classification.
- In the RESNET example in FIG. 2, in a classical way, the main branch (shorted by the residual connection) has a plurality of convolution layers, each followed by a batch normalization layer and/or a non-linear layer of the ReLU type. The output of this branch is typically added point by point with the input image (due to the residual connection), and goes through a final activation layer of the ReLU type, with a reduction in dimensionality (typically MaxPool).
- A fully connected layer (IP/FC) allows the generation of the orientation vector (i.e. the classification itself), and a Sigmoid or Softmax loss function can be used for training, see below.
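The point-by-point addition performed by the residual connection can be illustrated with a small numeric sketch (illustrative only; a Python list stands in for a feature map, and the function names are ours):

```python
def residual_block(x, layer):
    """Add, point by point, a block's input and the output of its
    short-circuited layer (the 'skip connection')."""
    return [xi + yi for xi, yi in zip(x, layer(x))]

# If the short-circuited layers learn the zero function, the block reduces to
# the identity, so adding it cannot degrade the signal.
zero_layer = lambda v: [0.0 for _ in v]
signal = [0.3, -1.2, 0.7]
print(residual_block(signal, zero_layer))  # [0.3, -1.2, 0.7]
```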
- Advantageously, the method begins by a training step (a0), by the data processing means 11 of the
server 1, of parameters of said CNN, from a database of labeled fingerprint images (i.e. for which the corresponding angular deviation class of orientation is known). - This training can be done in a traditional way, by minimizing the loss function.
- The main difficulty of this approach is to extract the features relevant to the determination of the expected angular deviation, while penalizing those features that do not allow class discrimination. To do this, a first possibility is to use cross entropy on a Softmax as a loss function, since it makes it possible to extract the features that maximize the probability of the expected angular deviation while penalizing the probability of the other angular deviations (i.e. of the other classes).
- The Softmax function favors only one class. Alternatively, if it is desired to increase the chances of having several identified classes, a Sigmoid can be used as a loss function, so as to privilege not only the expected angular deviation of orientation, but also, with a lower probability, those that are closest to it.
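The different behaviors of the two activations can be observed on a toy orientation vector (an illustrative sketch; the logit values are invented, and only the forward activations are shown, not the full loss functions):

```python
import math

def softmax(logits):
    """Normalized exponential: scores compete for one unit of probability mass."""
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(logits):
    """Independent per-class scores: several classes can stay high at once."""
    return [1.0 / (1.0 + math.exp(-v)) for v in logits]

# Two plausible neighboring classes with close logits: Softmax splits the mass
# between them (favoring a single winner), while Sigmoid scores both high.
logits = [3.0, 2.5, -2.0, -2.0]
print(softmax(logits))
print(sigmoid(logits))
```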
- Therefore, if the input image is partial and has a part (e.g. a loop) that can be located, for example, to the right or to the left of a fingerprint, the orientation vector will have two distinct peaks of scores, see for example
FIG. 3, so as to take into account the two potential cases, each of which remains probable. - Note that said database of already labeled fingerprint images can be obtained by data augmentation. More precisely, the starting point is a reference database in which all images are oriented according to the reference orientation (i.e. with zero angular deviation of orientation). Other versions of these reference prints are then artificially generated by applying various angles of rotation to them (and labeled accordingly), so as to create new training images with varying orientation deviations. For example, angles of ±[5°, 10°, 20°, 30°, 40°, 70°, 120°] are applied to them, creating a new database 14 times larger, which ensures the robustness of the CNN against common acquisition defects.
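The labeling of such an augmented database can be sketched as follows (illustrative only; the rotation magnitudes are those of the example above, and the function name is ours):

```python
def augmented_labels(magnitudes=(5, 10, 20, 30, 40, 70, 120)):
    """Angular deviation labels of the artificially rotated copies generated
    from one reference print (deviation 0): both signs of each magnitude."""
    return [sign * m for m in magnitudes for sign in (+1, -1)]

labels = augmented_labels()
print(len(labels))  # 14 rotated versions per reference image
```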
- The trained CNN can be stored as necessary on data storage means 22 of the
client 2 for use in orientation estimation. It should be noted that the same CNN can be embedded on numerous clients 2; only one training is necessary.
- In a main step (a), as explained, at least one candidate angular deviation of an orientation of said input image with respect to a reference orientation is estimated by the data processing means 21 of the client 2 using the embedded CNN.
- Then, in a step (b), said input image is recalibrated according to said estimated candidate angular deviation, so that the orientation of the recalibrated image matches said reference orientation. Note that a recalibrated image can be generated in step (b) for each estimated candidate angular deviation. The idea is that if several probability peaks emerge, it is not possible to know which one is the right one, so all of them need to be evaluated. Therefore, one of the recalibrated images will be "well recalibrated", i.e. oriented in accordance with the reference orientation.
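The recalibration of step (b) can be sketched geometrically: an actual implementation would resample the image pixels, but the underlying operation is a rotation by the opposite of the estimated deviation (an illustrative sketch only; the function name is ours, and a single point stands in for the image):

```python
import math

def recalibrate_point(x, y, deviation_deg, cx=0.0, cy=0.0):
    """Rotate a point by the opposite of the estimated deviation, around the
    image center (cx, cy), bringing the print back to the reference
    orientation (positive deviation = counter-clockwise)."""
    a = math.radians(-deviation_deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))

# A point at (1, 0) of an image deviated by +90 deg maps back to (0, -1)
x, y = recalibrate_point(1.0, 0.0, 90.0)
print(x, y)
```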
- Finally, in a step (c), said recalibrated image can be processed so as to extract said features of interest from the fingerprint represented by said input image, which can notably comprise the position and/or orientation of minutiae. If several recalibrated images have been generated, step (c) is implemented for each one. In particular, this step (c) can be implemented by means of a second dedicated CNN, taking as input each recalibrated image. The first CNN is then referred to as a recalibration CNN, and the second as a coding CNN. Any known coding CNN can be used here.
- Note that step (c) can also be systematically implemented on the input image “as is”, i.e. without recalibration (in other words, the input images are added with the recalibrated images). Indeed, if the input image is already well oriented, the CNN may tend to recalibrate it anyway, which may lead to a slight decrease in performance.
- Preferably, the method further comprises a step (d) of identifying or authenticating said individual by comparing the features of interest extracted from the fingerprint represented by said input image, with the fingerprint features of reference, which can be implemented in any known way by the person skilled in the art.
- For example, the
client 2 can store the features of the prints of one or more authorized users as reference prints, so as to manage the unlocking of the client equipment 2 (particularly in the case of an input image acquired directly by an integrated scanner 23); if the extracted features correspond to those expected from an authorized user, the data processing means 21 consider that the individual attempting to be authenticated is authorized, and they proceed with the unlocking. - Alternatively, the
client 2 can send the extracted features to a remote database of said reference fingerprint features, for identification of the individual. - Different tests of the present method have been carried out.
FIG. 4a shows the performance of the recalibration CNN in predicting angular deviation of orientation as a function of the average number of attempts, i.e. the average number of recalibrated images obtained. There is an increase in accuracy to nearly 90% when moving from one attempt (i.e. only the highest peak is considered), to an average of 2.8 attempts. -
FIG. 4b shows the matching performances, as a function of the real angular deviation of orientation: it can be noted that in the absence of recalibration (a single coding CNN used directly for the extraction of features), performances decrease due to even a few degrees of deviation. - On the other hand, the various tested configurations, all of which include a first recalibration CNN and a second coding CNN, and which differ in the average number of attempts, show that performance is maintained regardless of the angular deviation in orientation. The embodiment, which also includes the systematic use of the input image (not recalibrated), is a little less effective for large orientation deviations (since the input image is not relevant), but more effective if the angular orientation deviation is small, which remains a frequent case.
- According to a second and a third aspect, the invention relates to a computer program product comprising code instructions for the execution (in particular on data processing means 11, 21 of the server 1 and/or of the client 2) of a method of extracting features of interest from a fingerprint represented by an input image, as well as to storage means readable by computer equipment (a memory 12, 22 of the server 1 and/or of the client 2) on which said computer program product is located.
Claims (16)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR1859682 | 2018-10-19 | ||
| FR1859682A FR3087558B1 (en) | 2018-10-19 | 2018-10-19 | METHOD OF EXTRACTING CHARACTERISTICS FROM A FINGERPRINT REPRESENTED BY AN INPUT IMAGE |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200125824A1 true US20200125824A1 (en) | 2020-04-23 |
| US11232280B2 US11232280B2 (en) | 2022-01-25 |
Family
ID=66218140
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/658,384 Active 2040-01-24 US11232280B2 (en) | 2018-10-19 | 2019-10-21 | Method of extracting features from a fingerprint represented by an input image |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US11232280B2 (en) |
| EP (1) | EP3640843B1 (en) |
| ES (1) | ES2973497T3 (en) |
| FR (1) | FR3087558B1 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114386514B (en) * | 2022-01-13 | 2022-11-25 | 中国人民解放军国防科技大学 | Method and device for identifying unknown traffic data based on dynamic network environment |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10339362B2 (en) * | 2016-12-08 | 2019-07-02 | Veridium Ip Limited | Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices |
| JP7029321B2 (en) * | 2017-04-20 | 2022-03-03 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Information processing methods, information processing equipment and programs |
- 2018-10-19 FR FR1859682A patent/FR3087558B1/en active Active
- 2019-10-16 EP EP19306348.4A patent/EP3640843B1/en active Active
- 2019-10-16 ES ES19306348T patent/ES2973497T3/en active Active
- 2019-10-21 US US16/658,384 patent/US11232280B2/en active Active
Cited By (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12243256B2 (en) | 2017-08-07 | 2025-03-04 | Standard Cognition, Corp. | Systems and methods to check-in shoppers in a cashier-less store |
| US12190285B2 (en) | 2017-08-07 | 2025-01-07 | Standard Cognition, Corp. | Inventory tracking system and method that identifies gestures of subjects holding inventory items |
| US12056660B2 (en) | 2017-08-07 | 2024-08-06 | Standard Cognition, Corp. | Tracking inventory items in a store for identification of inventory items to be re-stocked and for identification of misplaced items |
| US11810317B2 (en) | 2017-08-07 | 2023-11-07 | Standard Cognition, Corp. | Systems and methods to check-in shoppers in a cashier-less store |
| US11361468B2 (en) * | 2020-06-26 | 2022-06-14 | Standard Cognition, Corp. | Systems and methods for automated recalibration of sensors for autonomous checkout |
| US11580766B2 (en) * | 2020-06-26 | 2023-02-14 | Idemia Identity & Security France | Method for detecting at least one biometric trait visible in an input image by means of a convolutional neural network |
| US11303853B2 (en) | 2020-06-26 | 2022-04-12 | Standard Cognition, Corp. | Systems and methods for automated design of camera placement and cameras arrangements for autonomous checkout |
| US11818508B2 (en) | 2020-06-26 | 2023-11-14 | Standard Cognition, Corp. | Systems and methods for automated design of camera placement and cameras arrangements for autonomous checkout |
| US20210406510A1 (en) * | 2020-06-26 | 2021-12-30 | Idemia Identity & Security France | Method for detecting at least one biometric trait visible in an input image by means of a convolutional neural network |
| US12079769B2 (en) | 2020-06-26 | 2024-09-03 | Standard Cognition, Corp. | Automated recalibration of sensors for autonomous checkout |
| US12288294B2 (en) | 2020-06-26 | 2025-04-29 | Standard Cognition, Corp. | Systems and methods for extrinsic calibration of sensors for autonomous checkout |
| US12231818B2 (en) | 2020-06-26 | 2025-02-18 | Standard Cognition, Corp. | Managing constraints for automated design of camera placement and cameras arrangements for autonomous checkout |
| CN112818175A (en) * | 2021-02-07 | 2021-05-18 | 中国矿业大学 | Factory worker searching method and training method of worker recognition model |
| CN113111725A (en) * | 2021-03-18 | 2021-07-13 | 浙江大学 | Vibration motor equipment fingerprint extraction identification system based on homologous signal |
| US12373971B2 (en) | 2021-09-08 | 2025-07-29 | Standard Cognition, Corp. | Systems and methods for trigger-based updates to camograms for autonomous checkout in a cashier-less shopping |
| US20230119918A1 (en) * | 2021-10-14 | 2023-04-20 | Thales Dis France Sas | Deep learning based fingerprint minutiae extraction |
| US12190629B2 (en) * | 2021-10-14 | 2025-01-07 | Thales Dis France Sas | Deep learning based fingerprint minutiae extraction |
| CN115205292A (en) * | 2022-09-15 | 2022-10-18 | 合肥中科类脑智能技术有限公司 | Distribution line tree obstacle detection method |
| CN117831082A (en) * | 2023-12-29 | 2024-04-05 | 广电运通集团股份有限公司 | Palm area detection method and device |
| CN118885402A (en) * | 2024-09-29 | 2024-11-01 | 中国工程物理研究院计算机应用研究所 | A deep learning target detection system stress testing method and device |
Also Published As
| Publication number | Publication date |
|---|---|
| EP3640843A1 (en) | 2020-04-22 |
| EP3640843B1 (en) | 2023-12-20 |
| ES2973497T3 (en) | 2024-06-20 |
| FR3087558B1 (en) | 2021-08-06 |
| US11232280B2 (en) | 2022-01-25 |
| FR3087558A1 (en) | 2020-04-24 |
| EP3640843C0 (en) | 2023-12-20 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11232280B2 (en) | Method of extracting features from a fingerprint represented by an input image | |
| Qin et al. | Deep representation for finger-vein image-quality assessment | |
| US20150178547A1 (en) | Apparatus and method for iris image analysis | |
| US20080166026A1 (en) | Method and apparatus for generating face descriptor using extended local binary patterns, and method and apparatus for face recognition using extended local binary patterns | |
| US11087106B2 (en) | Method of extracting features from a fingerprint represented by an input image | |
| Rajasekar et al. | Efficient multimodal biometric recognition for secure authentication based on deep learning approach | |
| Vignesh et al. | Land use and land cover classification using recurrent neural networks with shared layered architecture | |
| Zhang et al. | Advanced biometrics | |
| Al-Dabbas et al. | Two proposed models for face recognition: Achieving high accuracy and speed with artificial intelligence | |
| Kumar et al. | An efficient gravitational search decision forest approach for fingerprint recognition | |
| US11580774B2 (en) | Method for the classification of a biometric trait represented by an input image | |
| Prasanth et al. | Fusion of iris and periocular biometrics authentication using CNN | |
| Kumar et al. | A multimodal SVM approach for fused biometric recognition | |
| Kakulapati et al. | Fingerprint recognition using the HOG and LIME algorithm | |
| Agarwal et al. | Human identification and verification based on signature, fingerprint and iris integration | |
| Prakash et al. | Fusion of multimodal biometrics using feature and score level fusion | |
| Safavipour et al. | A hybrid approach for multimodal biometric recognition based on feature level fusion in reproducing kernel Hilbert space | |
| Subitha et al. | Artificial Intelligence in Biometric Systems | |
| Elbendary et al. | Palm-print recognition based on deep residual networks | |
| Kumar et al. | Fusing face and iris: a deep/machine learning approach for advanced biometric recognition | |
| Liu et al. | Palm-dorsa vein recognition based on independent principle component analysis | |
| Htwe et al. | Image Processing Techniques for Fingerprint Identification and Classification-A Review [J] | |
| CHLAOUA | Combination of Multiple Biometrics for Recognition of Persons | |
| Verma et al. | SCCM: A Novel Hybrid Framework for Dual Biometric Authentication for Identity Enhancement | |
| Dey et al. | Design and Implementation of Authentication System Using Deep Convoluted Siamese Network |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: IDEMIA IDENTITY & SECURITY FRANCE, FRANCE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MABYALAHT, GUY;KAZDAGHLI, LAURENT;REEL/FRAME:050775/0136. Effective date: 20190719 |
| | FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | PATENTED CASE |
| | MAFP | Maintenance fee payment | PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |
| | AS | Assignment | Owner name: IDEMIA PUBLIC SECURITY FRANCE, FRANCE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:IDEMIA IDENTITY & SECURITY FRANCE;REEL/FRAME:071930/0625. Effective date: 20241231 |