
CN112329528B - Fingerprint entry method and device, storage medium and electronic device - Google Patents

Fingerprint entry method and device, storage medium and electronic device

Info

Publication number
CN112329528B
Authority
CN
China
Prior art keywords
image
base
matching
input image
fingerprint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011054188.XA
Other languages
Chinese (zh)
Other versions
CN112329528A (en)
Inventor
邢源
汤瑶
梁嘉骏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Force Map New Chongqing Technology Co ltd
Original Assignee
Force Map New Chongqing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Force Map New Chongqing Technology Co ltd filed Critical Force Map New Chongqing Technology Co ltd
Priority to CN202011054188.XA
Publication of CN112329528A
Application granted
Publication of CN112329528B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G06V40/13 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G06V40/1347 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G06V40/1365 - Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to the field of fingerprint recognition technology and provides a fingerprint entry method and device, a storage medium and an electronic device. The fingerprint entry method includes: acquiring an input image, the input image being a fingerprint image to be entered; acquiring features of the input image and features of each base image in the fingerprint base, and matching the input image with each base image based on the acquired features, a base image that matches the input image being a matching base image; and, if a matching base image exists, stitching the input image and the matching base image based on their overlapping area to obtain a new base image. This method expands the area of the base images and improves their quality, which helps to improve fingerprint recognition accuracy.

Description

Fingerprint input method and device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of fingerprint identification technologies, and in particular, to a fingerprint input method and apparatus, a storage medium, and an electronic device.
Background
In recent years, fingerprint identification has been widely used in various fields as a biometric authentication technique. An existing fingerprint identification method generally comprises an enrollment stage and a comparison stage: in the enrollment stage, a plurality of fingerprint images are entered, a plurality of base templates are constructed, and fingerprint features are extracted for each template; in the comparison stage, features are extracted from the fingerprint image to be verified and compared with the base templates to determine whether the comparison passes. However, as applications such as under-screen fingerprint identification on mobile phones demand high integration, the area of the fingerprint image collected by the sensor each time keeps shrinking, so constructing base templates directly from the collected fingerprint images results in low fingerprint identification accuracy.
Disclosure of Invention
The embodiments of the present application aim to provide a fingerprint entry method and device, a storage medium and an electronic device, so as to address the above technical problem.
In order to achieve the above purpose, the present application provides the following technical solutions:
In a first aspect, an embodiment of the present application provides a fingerprint entry method, which comprises: acquiring an input image, the input image being a fingerprint image to be entered; acquiring features of the input image and features of each base image in a fingerprint base, and matching the input image with each base image based on the acquired features, where the input image matching a base image means that an overlapping area exists between the fingerprint area in the input image and the fingerprint area in the base image, and a base image matching the input image is a matching base image; and, if a matching base image exists, stitching the input image and the matching base image based on the overlapping area in the input image and the matching base image, so as to obtain a new base image.
The method has at least the following beneficial effects:
Firstly, the area of the existing base images is enlarged while the input image is being entered (the area of the new base image is not smaller than that of the base image or the input image before stitching), so that more fingerprint features can be extracted from the base image to meet high-precision verification requirements. The method can therefore effectively alleviate the low fingerprint identification accuracy caused by the small area of the fingerprint image collected by the sensor.
In addition, stitching fingerprint images based on their overlapping areas amounts to multi-image fusion and noise reduction within the overlapping areas, which improves the quality of the base images and in turn the fingerprint identification accuracy. Taking under-screen fingerprint identification as an example: interference caused by the screen mixes fingerprint signals with non-fingerprint signals, so the collected fingerprint image is severely noisy, which negatively affects fingerprint identification accuracy; this problem is alleviated after the method is applied.
In one implementation of the first aspect, stitching the input image and the matching base images based on the overlapping areas in the input image and the matching base images to obtain a new base image includes: mapping the input image and the matching base images to a reference coordinate system using the geometric transformation matrices between the input image and the matching base images, where the geometric transformation matrix between the input image and any matching base image is calculated from the matching point pairs between the input image and that matching base image, a matching point pair being a point pair located in the overlapping area of the two images; and stitching the mapped input image and the mapped matching base images to obtain the new base image.
In the above implementation, a matching point pair between the input image and a base image is formed by two pixel points in the two images that correspond to the same position in the fingerprint area, so the set of matching point pairs approximately delimits the overlapping area between the input image and the base image. Mapping the input image and the matching base images with a geometric transformation calculated from these matching point pairs is therefore equivalent to aligning the input image and the matching base images according to their overlapping areas.
In one implementation of the first aspect, mapping the input image and the matching base images to a reference coordinate system using the geometric transformation matrices between the input image and the matching base images includes: mapping the matching base images to the coordinate system in which the input image is located using the geometric transformation matrices between the input image and the matching base images; or mapping the input image and the non-target base images to the coordinate system in which a target base image is located using the geometric transformation matrices between the input image and the matching base images, where the target base image is a base image selected from the matching base images and a non-target base image is a matching base image other than the target base image.
Mapping the input image and the matching base images to the same coordinate system using the geometric transformation matrices between them is equivalent to aligning the input image and the matching base images, which makes accurate stitching possible.
In an implementation of the first aspect, the target base image is the base image with the largest area among the matching base images.
The target base image can in principle be any one of the matching base images. However, the inventors found through long study that an image is somewhat distorted after being mapped with a geometric transformation matrix, and the larger the image area before mapping, the more severe the distortion. If the matching base image with the largest area is used as the target base image, every other matching base image is necessarily smaller than it, so the distortion produced by mapping them is also smaller. Moreover, doing so also makes it easier for the subsequent stitching to produce a base image with a larger area.
In one implementation of the first aspect, mapping the input image and the non-target base images to the coordinate system in which the target base image is located using the geometric transformation matrices between the input image and the matching base images includes: mapping the input image to the coordinate system of the target base image using the geometric transformation matrix between the input image and the target base image; calculating the geometric transformation matrix between a non-target base image and the target base image from the geometric transformation matrix between the input image and that non-target base image and the geometric transformation matrix between the input image and the target base image; and mapping the non-target base image to the coordinate system of the target base image using the geometric transformation matrix between the non-target base image and the target base image.
The geometric transformation matrix between the non-target base image and the target base image is unknown, but can be obtained by conversion according to the geometric transformation matrix between the input image and the non-target base image and the geometric transformation matrix between the input image and the target base image.
In one implementation of the first aspect, acquiring the features of the input image and the features of each base image in the fingerprint base, and matching the input image with each base image based on the acquired features, includes: for each base image in the fingerprint base, acquiring at least one feature of the input image and at least one feature of the base image; determining the features of the input image that match features of the base image, where each feature corresponds to one pixel point in an image and the two pixel points corresponding to two matched features form one matching point pair between the input image and the base image; if the total number of matching point pairs is larger than a first threshold, calculating a geometric transformation matrix between the input image and the base image from the matching point pairs; judging whether each matching point pair conforms to the geometric transformation represented by the geometric transformation matrix; and, if the total number of matching point pairs conforming to the geometric transformation is larger than a second threshold, determining that the input image matches the base image.
This image matching process considers only one type of feature in the image (or does not distinguish between feature types), so its matching logic is relatively simple.
In one implementation of the first aspect, acquiring the features of the input image and the features of each base image in the fingerprint base, and matching the input image with each base image based on the acquired features, includes: for each base image in the fingerprint base, acquiring multiple types of features of the input image and multiple types of features of the base image, where each type of feature of each image includes at least one feature and each feature corresponds to one pixel point in the image; for each type of feature, determining the features of that type of the input image that match features of that type of the base image, the two pixel points corresponding to two matched features forming one matching point pair between the input image and the base image, calculating an initial geometric transformation matrix between the input image and the base image from the matching point pairs if the total number of matching point pairs is larger than a first threshold, judging whether each matching point pair conforms to the geometric transformation represented by the initial geometric transformation matrix, and determining that the input image matches the base image under that type of feature if the total number of matching point pairs conforming to the geometric transformation is larger than a second threshold; and, if the input image matches the base image under every type of feature and the initial geometric transformation matrices calculated under the various types of features are consistent, determining that the input image matches the base image.
This image matching process considers multiple types of features in the image; its matching logic is more complex, but the resulting matching conclusion is more reliable.
In one implementation of the first aspect, before stitching the mapped input image and the mapped matching base images, the method further includes: acquiring a weight matrix of the mapped input image and a weight matrix of each mapped matching base image, where a weight matrix contains the weights of the pixels in its corresponding image; and stitching the mapped input image and the mapped matching base images to obtain a new base image includes: performing pixel-by-pixel fusion on the mapped input image and the mapped matching base images based on the acquired weight matrices, so as to obtain the new base image.
The stitching of the mapped input image and the mapped matching base images can thus be realized by weighted fusion, and weighted fusion has a denoising effect in the area where multiple images overlap.
In one implementation of the first aspect, acquiring the weight matrix of the mapped input image and the weight matrix of each mapped matching base image includes: acquiring the weight matrix of the input image and the weight matrix of each matching base image, where a weight matrix contains the weights of the pixels in its corresponding image; and mapping the weight matrix of the input image and the weight matrices of the matching base images to the reference coordinate system using the geometric transformation matrices between the input image and the matching base images, so as to obtain the weight matrix of the mapped input image and the weight matrices of the mapped matching base images.
The weight matrices of the mapped input image and the mapped matching base images can thus be obtained by mapping the weight matrices of the input image and the matching base images, in the same way as the images themselves are mapped. Mapping these weight matrices to the same coordinate system is equivalent to aligning them, which facilitates weight fusion (e.g., weight accumulation) between the weight matrices.
In one implementation of the first aspect, the method further includes: acquiring a weight matrix of the new base image, and storing the weight matrix in association with the new base image.
The weight matrix of the new base image is saved each time an input image is entered, so that whenever the weight matrix of a base image is needed while entering a later input image, it can be read directly.
In one implementation of the first aspect, acquiring the weight matrix of the new base image includes: accumulating the weight matrix of the mapped input image and the weight matrices of the mapped matching base images element by element, so as to obtain the weight matrix of the new base image.
Element-by-element accumulation is one way of performing weight fusion; a larger accumulated weight also indicates that the denoising fusion effect at the corresponding pixel position is better, as in the sketch below.
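As an illustration only, a minimal sketch of this element-by-element accumulation, assuming all weight matrices have already been mapped to the reference coordinate system and padded to the same shape (these preconditions, and the use of numpy, are assumptions rather than requirements of the method):

```python
import numpy as np

def accumulate_weights(mapped_weight_matrices):
    """Element-wise accumulation of mapped weight matrices.

    A minimal sketch: every matrix is assumed to already live in the
    reference coordinate system and to have the same shape, with zeros
    at pixels its image does not cover.
    """
    total = np.zeros_like(mapped_weight_matrices[0], dtype=np.float32)
    for w in mapped_weight_matrices:
        total += w  # a larger accumulated weight => more images fused at that pixel
    return total
```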
In one implementation of the first aspect, the weights in a weight matrix represent the fingerprint sharpness of the corresponding image at the corresponding pixel points.
If the weights represent fingerprint sharpness, a blurred image, or a blurred region within an image, corresponds to smaller weights, while a sharper image or region corresponds to larger weights. After fusion, the pixel values of the new base image are therefore mainly derived from the pixel values of the sharper images or regions at the corresponding positions, which significantly improves the quality of the base image.
In one implementation manner of the first aspect, the acquiring the weight matrix of the input image includes inputting the input image to a pre-trained neural network model, and acquiring the weight matrix of the input image output by the neural network model.
For the blurred region in the image, the neural network model automatically distributes smaller weight to the blurred region, so that even if the input image is blurred, the quality influence on the new bottom library image in the fusion process is very limited due to the fact that the weight in the corresponding weight matrix is generally smaller. For clearer regions in the image, the neural network model automatically assigns larger weights to the clearer regions so as to strengthen the influence of the regions on the quality of the new bottom library image in the fusion process.
In one implementation of the first aspect, before the input image is input to the pre-trained neural network model to obtain the weight matrix of the input image output by the model, the method further includes: inputting a training image, which is a fingerprint image, to the neural network model to be trained, to obtain the weight matrix of the training image output by the model; calculating a prediction loss from the weight matrix of the training image and a label matrix of the training image, where each label in the label matrix takes one of several preset label values and each label value represents a fingerprint sharpness; updating the parameters of the neural network model with a back-propagation algorithm based on the prediction loss; and training the neural network model by continuing to input training images until a training end condition is met, so as to obtain the trained neural network model.
Because label values representing fingerprint sharpness are used in this training process, the weights predicted by the trained neural network model can also represent fingerprint sharpness.
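For illustration, a minimal training sketch of this idea in PyTorch: a small fully convolutional network predicts a per-pixel weight (sharpness) map and is supervised with a label matrix whose entries take a few preset sharpness values. The architecture, the MSE loss, and the label values 0.0/0.5/1.0 are illustrative assumptions; the patent does not prescribe them.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Toy fully convolutional model: fingerprint image in, weight matrix out."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # weights in (0, 1), an assumption
        )

    def forward(self, x):      # x: (N, 1, H, W) fingerprint images
        return self.body(x)    # (N, 1, H, W) predicted weight matrices

model = WeightNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()         # prediction loss between predicted weights and labels

def train_step(train_image, label_matrix):
    """One training step; label_matrix entries take preset sharpness values
    such as 0.0 / 0.5 / 1.0 (illustrative)."""
    pred = model(train_image)
    loss = loss_fn(pred, label_matrix)
    optimizer.zero_grad()
    loss.backward()            # back-propagation updates the model parameters
    optimizer.step()
    return loss.item()
```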
In an implementation manner of the first aspect, the tag matrix includes a plurality of labeling areas, and each tag in the same labeling area takes the same tag value.
Pixel-by-pixel labeling would place too great a burden on the labeling personnel, so region-by-region labeling can be adopted instead: the fingerprint sharpness of the training image is considered approximately the same within each labeled region, which helps reduce the labeling workload.
In one implementation of the first aspect, after acquiring the weight matrix of the input image and the weight matrices of the matching base images and before mapping them to the reference coordinate system using the geometric transformation matrices between the input image and the matching base images, the method further includes: attenuating the weights in the weight matrices of the matching base images.
Attenuating the weights helps the base to forget blurred images that were entered earlier within a reasonable time, preventing them from polluting the fingerprint base and thereby degrading fingerprint identification accuracy.
In one implementation of the first aspect, attenuating the weights in the weight matrix of a matching base image includes: multiplying the weight matrix of the matching base image by an attenuation coefficient; or multiplying only the weights in the weight matrix of the matching base image that are greater than a third threshold by an attenuation coefficient.
This implementation provides two attenuation strategies. The first has simpler logic but can weaken the denoising effect. The second attenuates the weights only in regions that have been entered many times (regions whose weights are greater than the third threshold) and not in regions entered few times (regions whose weights are not greater than the third threshold), so it can effectively resist pollution by blurred input images while avoiding a poor denoising effect in regions entered few times.
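As an illustration, a minimal sketch covering both attenuation strategies; the decay coefficient and threshold values are assumptions chosen for the example:

```python
import numpy as np

def attenuate_weights(base_weight, decay=0.9, third_threshold=None):
    """Attenuate the weight matrix of a matching base image.

    Strategy 1 (third_threshold is None): multiply the whole matrix by the
    attenuation coefficient.  Strategy 2: attenuate only the weights above
    the third threshold, so regions entered few times keep their weights.
    """
    w = base_weight.astype(np.float32).copy()
    if third_threshold is None:
        w *= decay
    else:
        mask = w > third_threshold
        w[mask] *= decay
    return w
```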
In one implementation of the first aspect, if a matching base image exists, the method further includes, after obtaining the new base image: removing the matching base image from the fingerprint base.
Removing the matching base images effectively reduces redundant storage of fingerprint information from the same position (i.e., the fingerprint information contained in the overlapping area is stored only once, in the new base image), which saves storage and computation resources.
In one implementation of the first aspect, the method further includes: if the fingerprint base is empty or no matching base image exists in it, determining the input image as a new base image, acquiring a weight matrix of the new base image, and storing the acquired weight matrix in association with the new base image.
An empty base indicates that the fingerprint entry process has just begun; image matching, image stitching and the like are not yet involved, so the input image can be used directly as the first base image in the fingerprint base. If no base image in the fingerprint base matches the input image, the fingerprint area in the input image has no overlapping area with the fingerprint area in any base image, so the input image cannot be stitched with a base image to enlarge its area and can only be added to the fingerprint base as a new base image. The weight matrix corresponding to a new base image can be stored so that, while entering later input images, it can be read directly whenever the weight matrix of the base image is needed.
In one implementation of the first aspect, after each new base image is generated, the method further includes: judging whether the total number of base images in the fingerprint base exceeds a preset number and, if so, removing the base image with the smallest area from the fingerprint base; or, after all input images have been processed, keeping the preset number of base images with the largest areas in the fingerprint base and removing the other base images from the fingerprint base.
In view of storage and computation performance, only a preset number of base images may be kept in the final fingerprint base. Two schemes are provided to achieve this (see the sketch below). In the first, base images with smaller areas are removed continuously during enrollment, so the number of base images in the fingerprint base always stays small and the enrollment process does not consume excessive storage and computation resources; the inventors found through experiments that, even though the number of base images stays small, the resulting fingerprint base is still highly usable. In the second, all base images are kept during enrollment; because the fingerprint base then contains more base images, more image matching is performed and a better result can be obtained (for example, the final base images have larger areas), but more storage and computation resources are consumed during enrollment.
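As an illustration only, a sketch of the first scheme, assuming each base image is stored as a 2-D numpy array so that its area is simply its pixel count:

```python
def prune_base_library(base_images, max_count):
    """Keep at most `max_count` base images, dropping the smallest-area ones.

    Intended to be called after every new base image is generated so the
    library size stays bounded during enrollment (scheme 1).
    """
    if len(base_images) <= max_count:
        return base_images
    # Sort by area (number of pixels), largest first, and keep the top ones.
    ranked = sorted(base_images,
                    key=lambda img: img.shape[0] * img.shape[1],
                    reverse=True)
    return ranked[:max_count]
```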
In one implementation of the first aspect, the method further includes: acquiring a comparison image, the comparison image being a fingerprint image to be verified; acquiring features of the comparison image and features of the base images in an overall fingerprint base; matching the comparison image with the base images based on the acquired features; and determining, according to the matching result, whether the comparison image passes verification. The overall fingerprint base comprises at least one constructed fingerprint base; the comparison image matching a base image means that an overlapping area exists between the fingerprint area in the comparison image and the fingerprint area in the base image; if the comparison image matches any base image, it is determined to pass verification, otherwise it is determined not to pass verification.
The fingerprint base constructed with the fingerprint entry method provided by the embodiments of the present application has advantages such as large base image area and high image quality, so fingerprint identification based on the overall fingerprint base (which comprises at least one constructed fingerprint base) is more accurate.
In a second aspect, an embodiment of the present application provides a fingerprint entry device, which comprises an image acquisition module, an image matching module and an image stitching module. The image acquisition module is configured to acquire an input image, the input image being a fingerprint image to be entered. The image matching module is configured to acquire features of the input image and features of each base image in a fingerprint base and to match the input image with each base image based on the acquired features, where the input image matching a base image means that an overlapping area exists between the fingerprint area in the input image and the fingerprint area in the base image, and a base image matching the input image is a matching base image. The image stitching module is configured to, when a matching base image exists, stitch the input image and the matching base image based on the overlapping area in the input image and the matching base image, so as to obtain a new base image.
In a third aspect, embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions which, when read and executed by a processor, perform the method provided by the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores computer program instructions that, when read and executed by the processor, perform the method provided by the first aspect or any one of the possible implementations of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 illustrates one possible flow of a fingerprint entry method provided by an embodiment of the present application;
FIG. 2 illustrates a possible structure of the fingerprint entry device provided by an embodiment of the present application;
fig. 3 shows a possible structure of the electronic device provided by the embodiment of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. It should be noted that like reference numerals and letters refer to like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The terms "first," "second," and the like, are used merely to distinguish one entity or action from another entity or action, and are not to be construed as indicating or implying any actual such relationship or order between such entities or actions.
Fig. 1 shows a possible flow of a fingerprint input method according to an embodiment of the present application. The method may be performed by, but is not limited to, the electronic device 300 shown in fig. 3, and reference may be made to the following explanation with respect to fig. 3 regarding the specific structure of the electronic device 300. Referring to fig. 1, the method includes:
Step S110, acquiring an input image.
The input image refers to a fingerprint image to be entered. In the scheme of the present application, a plurality of fingerprint images need to be entered for one finger of the same person, and a fingerprint base can be constructed based on the entered fingerprint images; the fingerprint base comprises a plurality of fingerprint images, which are called base images. In practice, several people may enter fingerprints on the same device, and each person may enter fingerprint images of several fingers, so multiple fingerprint bases may be generated; the collection of these fingerprint bases may be called the overall fingerprint base.
The way the input image is acquired is not limited. For example, input images can be collected in real time by a fingerprint acquisition module, with one input image collected and entered at a time; or a number of input images can be collected and stored in one batch and then read out in sequence as input images to execute the entry process; and so on.
During fingerprint entry, the processing procedure is similar for each input image, i.e., steps S110 to S170 are performed (although not every step is necessarily performed for every input image). For a particular entry process (a person entering the fingerprint of one finger), the corresponding fingerprint base is fixed, so the problem of crossing fingerprint bases does not need to be considered.
And step S120, judging whether the fingerprint database is empty.
If the fingerprint database is empty, step S170 is executed, otherwise step S130 is executed.
And step S130, acquiring the characteristics of the input image and the characteristics of each base image in the fingerprint base, and respectively matching the input image with each base image based on the acquired characteristics.
The matching of the input image with a certain base image means that there is an overlapping area between a fingerprint area in the input image and a fingerprint area in the base image, where the fingerprint area can be understood as an area where a fingerprint in the image is located, and the overlapping area can be understood as an overlapping portion in the fingerprint area.
The matching process of the input image and each of the bottom library images is similar, and one of the bottom library images is taken as an example for explanation. In some implementations, the matching process may be divided into two phases, the first phase being referred to as feature matching and the second phase being referred to as spatial matching, described below separately:
Feature matching:
For an input image, its features may be extracted when step S130 is performed, assuming that at least one feature is extracted, each feature corresponding to a pixel point in the input image, indicating the position of the feature in the input image. The features may be in the form of values, vectors, matrices, etc., and the specific features that may be employed are described later.
For the base image, its features may also be extracted when step S130 is performed; assume at least one feature is extracted, each feature corresponding to a pixel point in the base image and indicating the position of the feature in the base image. In some implementations, if the features of a base image were extracted while entering the k-th (k > 1) input image and the base image was not removed during that entry process (the removal operation is detailed in step S160), the features of the base image may be stored in association with the base image; when the (k+1)-th input image is entered, the previously stored features of the base image can be read directly without re-extraction.
After the features of the input image and the bottom library image are obtained, the matched features of the two images can be determined. For example, assuming that the feature is represented by a vector, for the feature at the pixel point X in the input image, a similar feature can be found near the corresponding pixel point X' in the bottom library image, and the similarity criterion can be determined by using the cosine similarity between the two feature vectors and combining with a threshold value.
As mentioned earlier, each feature corresponds to a pixel in the image, so that two pixels corresponding to two matching features may constitute a point pair between the input image and the bottom library image, referred to as a matching point pair. For example, the input image and the bottom library image respectively extract 100 features, of which 40 features are matched with each other, so that a total of 40 matching point pairs can be obtained.
Ideally, a matching point pair between the input image and the base image is formed by the input image and two pixels in the base image corresponding to the same location in the fingerprint area, such that the set of matching point pairs generally defines the overlapping area between the input image and the base image.
Then, judging whether the total number of the matching point pairs is larger than a first threshold value, if so, indicating that the characteristics of the input image and the bottom library image are matched, and continuing to perform space matching, otherwise, indicating that the characteristics of the input image and the bottom library image are not matched, and not needing to continue to perform space matching, and determining that the input image and the bottom library image are not matched. For example, if the first threshold is 30, 40 matching point pairs are currently obtained, then it may be determined that the features of the input image and the bottom library image are matched.
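For illustration, a minimal sketch of such a feature-matching stage using OpenCV ORB descriptors (one of the image-feature options listed further below); the library, descriptor choice, and threshold value are assumptions of the example, not requirements of the method:

```python
import cv2
import numpy as np

def feature_match(input_img, base_img, first_threshold=30):
    """Return the matching point pairs between the input image and a base
    image, or None when their total number does not exceed the first
    threshold (i.e., the features do not match)."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(input_img, None)
    kp2, des2 = orb.detectAndCompute(base_img, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) <= first_threshold:
        return None  # skip spatial matching
    # Each matched descriptor pair yields one matching point pair (A in input, B in base).
    pts_input = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts_base = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts_input, pts_base
```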
Spatial matching:
From the obtained matching point pairs, a geometric transformation matrix between the input image and the bottom library image can be calculated. For example, if the fingerprint is assumed to be a rigid body, an RT matrix (rotation-translation matrix) between the input image and the bottom library image may be calculated from the matching point pairs. If the number of matching point pairs is too small, the geometric transformation matrix may not be calculated, and at this time, it may be determined that the input image and the bottom library image are not matched. For example, for a two-dimensional rigid body transformation, at least two matching point pairs are required to calculate the RT matrix.
Although the geometric transformation matrix is calculated by using the matching point pairs, since feature extraction is not necessarily completely accurate, there may be erroneous matching in the matching point pairs, and thus not every matching point pair necessarily conforms to the geometric transformation represented by the geometric transformation matrix. Thus, after the geometric transformation matrix is obtained, it is necessary to further verify whether each matching point pair conforms to the geometric transformation characterized by the geometric transformation matrix. One possible way of determining is illustrated below:
Consider a matching point pair consisting of pixel point A in the input image and pixel point B in the base image. After a geometric transformation matrix is obtained, A can be mapped with this matrix to the coordinate system in which B is located (i.e., from the coordinate system of the input image to the coordinate system of the base image); denote the resulting pixel point as A'. The Euclidean distance between A' and B is then calculated; if the distance is smaller than a certain threshold, the current matching point pair conforms to the geometric transformation represented by the current geometric transformation matrix, otherwise it does not.
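A minimal sketch of this check, assuming the geometric transformation matrix is expressed as a 2x3 rotation-translation (or affine) matrix; the distance threshold is an illustrative assumption:

```python
import numpy as np

def pair_conforms(A, B, M, dist_threshold=3.0):
    """Check whether the matching point pair (A in the input image, B in the
    base image) conforms to the geometric transformation represented by the
    2x3 matrix M."""
    A_h = np.array([A[0], A[1], 1.0])          # homogeneous coordinates of A
    A_mapped = M @ A_h                         # A' in the base image coordinate system
    return np.linalg.norm(A_mapped - np.asarray(B, dtype=float)) < dist_threshold
```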
After all the matching point pairs are verified, judging whether the total number of the matching point pairs conforming to the geometric transformation is larger than a second threshold value, if so, indicating that the input image and the bottom library image have geometric consistency, or that the input image and the bottom library image are spatially matched, or else, indicating that the input image and the bottom library image do not have geometric consistency, or that the input image and the bottom library image are spatially unmatched. For example, if the second threshold is 20, and the obtained 40 matching point pairs are verified to have 32 matching point pairs conforming to the geometric transformation, it may be determined that the input image and the bottom library image have geometric consistency.
Since the matching process between the images ends after the spatial matching, if the input image and the bottom library image have geometric consistency, it indicates that the input image and the bottom library image are matched, and if the input image and the bottom library image do not have geometric consistency, it indicates that the input image and the bottom library image are not matched.
By performing the spatial matching process, not only the matching between the input image and the base image is confirmed, but also a geometric transformation matrix between the two, which can be used in step S150 later, is found. However, in some implementations, the above calculated geometric transformation matrix may not be directly used in step S150, for example, the above calculated geometric transformation matrix is denoted as T1, a geometric transformation matrix T2 may be recalculated based on the matching point pair conforming to the geometric transformation characterized by T1, and then T2 is used in step S150, and so on.
The above-mentioned spatial matching process may be implemented using a random sample consensus (RANSAC) algorithm, an iterative closest point (ICP) algorithm, an expectation conditional maximization (ECM) algorithm, or the like. It should be noted that some algorithms determine whether the input image and the base image have geometric consistency and calculate the geometric transformation matrix synchronously (instead of successively), but in the above description, for convenience of understanding, the geometric transformation matrix is described as being calculated first and geometric consistency as being judged afterwards.
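As an illustration only, a minimal sketch of the spatial-matching stage using OpenCV's RANSAC-based estimator; cv2.estimateAffinePartial2D also allows scaling, so it is an approximation of the pure RT-matrix estimation described above, and the reprojection and second-threshold values are assumptions:

```python
import cv2

def spatial_match(pts_input, pts_base, second_threshold=20):
    """Estimate a rotation-translation-like transform with RANSAC and check
    how many matching point pairs conform to it (geometric consistency)."""
    if len(pts_input) < 2:
        return None  # too few pairs to estimate the transform
    M, inlier_mask = cv2.estimateAffinePartial2D(
        pts_input, pts_base, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    if M is None:
        return None
    n_inliers = int(inlier_mask.sum())
    if n_inliers <= second_threshold:
        return None  # no geometric consistency; the images do not match
    return M  # 2x3 geometric transformation matrix from input image to base image
```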
The features used in step S130 may include at least three types:
Image features, such as Oriented FAST and Rotated BRIEF (ORB) features, Scale-Invariant Feature Transform (SIFT) features, etc.;
Fingerprint features, such as fingerprint minutiae features, Minutia Cylinder Code (MCC) features, triangle features, etc.;
Deep learning features, such as features extracted using a neural network model, and the like.
When matching the input image with the base image based on the acquired features, one or more of the types of features (e.g., ORB features and SIFT features are both types of features) may be employed, the process of which has been described above. However, in some implementations, considering that each class of features may have a large difference in form, feature matching does not occur across the feature classes, where the matching process of the input image and the bottom library image may proceed as follows:
Firstly, acquiring multiple types of features of an input image and multiple types of features of a bottom library image, wherein each type of feature of each image comprises at least one feature, and each feature corresponds to one pixel point in the image.
Then, for each type of feature, the following steps (1) to (3) are performed:
(1) And determining the matched characteristics of the input image and the characteristics of the bottom library image. Wherein, two pixel points corresponding to the two matched features form a matching point pair between the input image and the bottom library image.
(2) If the total number of the matching point pairs is larger than the first threshold value, calculating an initial geometric transformation matrix between the input image and the bottom library image according to the matching point pairs.
(3) Judging whether each matching point pair accords with the geometric transformation represented by the initial geometric transformation matrix, and if the total number of the matching point pairs which accord with the geometric transformation is larger than a second threshold value, determining that the input image is matched with the bottom library image under the characteristic.
It will be appreciated that the process of determining whether the input image matches the base image under a certain type of feature is similar to the determination method set forth above, so reference may be made directly to the foregoing description, and the details are not repeated here.
And finally, if the input image and the bottom library image are matched under each type of feature and the initial geometric transformation matrix calculated under each type of feature is consistent, determining that the input image and the bottom library image are matched, and if the input image and the bottom library image are not matched under any type of feature or the initial geometric transformation matrix calculated under each type of feature is not consistent, determining that the input image and the bottom library image are not matched. Wherein, the plurality of initial geometric transformation matrices are identical, that is, the geometric transformation matrices are equal or approximately equal, and as for the approximate equality, the judgment can be performed by calculating the deviation value (for example, mean square error) between the two matrices and combining the threshold value.
It can be seen that in these implementations, to determine whether the input image and the base image match, the required verification conditions are more complex, and thus the reliability of the obtained determination result is higher.
After confirming that the input image matches the base image, the geometric transformation matrix between the input image and the base image can be further calculated according to the initial geometric transformation matrix calculated under various characteristics for use in the subsequent step S150. For example, since the initial geometric transformation matrices calculated under various types of features are identical, one of them may be randomly selected as the geometric transformation matrix, or of course, one (for example, the first one) may be fixedly selected as the geometric transformation matrix, or the average matrix of these initial geometric transformation matrices may be calculated as the geometric transformation matrix.
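As an illustration, a minimal sketch of checking that the initial geometric transformation matrices obtained under different feature types are consistent and, if so, averaging them into the final matrix; using the mean squared error between matrices and the threshold value are assumptions of the example:

```python
import numpy as np

def fuse_initial_matrices(matrices, mse_threshold=1e-2):
    """Check pairwise consistency of the initial geometric transformation
    matrices (one per feature type) and return their average if consistent,
    otherwise None (the images are then considered not to match)."""
    for i in range(len(matrices)):
        for j in range(i + 1, len(matrices)):
            mse = np.mean((matrices[i] - matrices[j]) ** 2)
            if mse > mse_threshold:
                return None  # matrices not consistent
    return np.mean(matrices, axis=0)
```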
Step S140, judging whether a bottom library image matched with the input image exists.
If a base image matching the input image exists, step S150 is performed; otherwise step S170 is performed. It should be noted that, according to step S130, the input image is matched against every base image in the fingerprint base; matching does not stop and jump directly to step S150 as soon as one matching base image is found. In other words, when step S150 is performed, all base images matching the input image can be considered to have been determined.
For a base image that matches an input image, it may hereinafter also be referred to simply as a matching base image, and for a base image that does not match an input image, it may hereinafter also be referred to simply as a non-matching base image.
And step S150, based on the overlapping area in the input image and the matched bottom library image, splicing the input image and the matched bottom library image to obtain a new bottom library image.
The input image and the matching base images are aligned according to their overlapping areas and then superimposed in some fashion, so that several images are stitched into an image with a larger area (at the least, the stitched area does not shrink), and this image serves as the new base image. In addition, superimposing several images has a noise reduction effect, so the new base image is of higher quality.
If the input image overlaps a base image, the two are stitched together, and two matching base images that do not overlap each other can still be stitched together through the intermediation of the input image. In the scheme of the present application, the base images ideally contain no overlapping areas among themselves, because every time an input image is entered, the base images that overlap it (i.e., the matching base images) are stitched with it, and only the base images that do not overlap it (i.e., the non-matching base images) are left unstitched. In other words, the scheme of the present application stitches the input images together as much as possible during fingerprint entry.
In some implementations, the above-mentioned alignment and superposition of the input image and the matching base image according to their overlapping areas may be implemented using a geometric transformation matrix between the input image and the matching base image (as to the possible calculation of the geometric transformation matrix, already set forth in step S130), by:
Firstly, mapping the input image and the matching base image to a certain reference coordinate system by utilizing a geometric transformation matrix between the input image and the matching base image, namely aligning the input image and the matching base image according to the overlapping area. Then, the mapped input image and the mapped matching base image are spliced to obtain a new base image, namely, the aligned input image and the matching base image are overlapped.
The reference coordinate system may be a coordinate system in which the input image is located (for example, the upper left corner of the input image is taken as the origin of coordinates), or may be a coordinate system in which a certain matching base image (which may be called a target base image), although other coordinate systems are not excluded.
If the coordinate system where the input image is located is used as the reference coordinate system, the geometric transformation matrix between the input image and each matching base image is known, so that each matching base image is mapped to the coordinate system where the input image is located by directly using the geometric transformation matrix between the input image and each matching base image.
If the coordinate system in which the target base image is located is used as the reference coordinate system, the input image can be mapped to the coordinate system in which the target base image is located by using the geometric transformation matrix between the input image and the target base image. For the non-target base image (refer to the image except the target base image in the matched base image), as the geometric transformation matrix between the non-target base image and the target base image is unknown, the geometric transformation matrix between the non-target base image and the target base image is calculated according to the geometric transformation matrix between the input image and the non-target base image and the geometric transformation matrix between the input image and the target base image, and then the non-target base image is mapped to the coordinate system where the target base image is located by utilizing the geometric transformation matrix between the non-target base image and the target base image.
For example, assume that M_ia is the geometric transformation matrix from base image i to the input image a, and M_ja is the geometric transformation matrix from base image j (j ≠ i) to the input image a. Assume further that base image j is the target base image and base image i is a non-target base image. The geometric transformation matrix from any base image i to base image j can first be calculated according to the following formula:
M_ij = M_ja^(-1) * M_ia
where M_ja^(-1) is the inverse matrix of M_ja, i can take 1, 2, ..., n with i ≠ j, and n is the total number of matching base images.
Then base image i is mapped to the coordinate system in which base image j is located using M_ij, which can be expressed as:
I_ij = T(I_i, M_ij)
where I_i denotes base image i, T denotes the geometric transformation corresponding to M_ij, and I_ij denotes the mapped base image i. If i is also allowed to take the value a, the formula above also covers the case in which the input image a is mapped to the coordinate system of base image j; if j is also allowed to take the value a, it also covers the case in which base image i (where i can then take 1, 2, ..., n) is mapped to the coordinate system of the input image a.
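For illustration, a minimal sketch of composing the two known transforms and warping a base image into the target coordinate system, assuming 2x3 rigid/affine matrices; the canvas size is an assumption and in practice must be large enough to hold all mapped images:

```python
import cv2
import numpy as np

def to_h(M23):
    """Lift a 2x3 rigid/affine matrix to 3x3 homogeneous form."""
    return np.vstack([M23, [0.0, 0.0, 1.0]])

def map_base_to_target(base_img, M_ia, M_ja, canvas_size):
    """Compute M_ij = M_ja^(-1) * M_ia and warp base image i into the
    coordinate system of target base image j.

    canvas_size is (width, height) of the stitching canvas.
    """
    M_ij = np.linalg.inv(to_h(M_ja)) @ to_h(M_ia)
    return cv2.warpAffine(base_img, M_ij[:2, :], canvas_size)
```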
The target base image can in principle be any one of the matching base images. However, the inventors found through long study that, because there is an error between the geometric transformation represented by the calculated geometric transformation matrix and the actual geometric transformation, an image is somewhat distorted after being mapped with the matrix, and the larger the image area before mapping, the more severe the distortion. Mapping small images onto large images, rather than large images onto small images, therefore helps reduce the distortion in the mapped images. Based on this finding, the matching base image with the largest area can be used as the target base image: every other matching base image then has a smaller area, so the distortion produced by mapping it is also smaller. Moreover, doing so also makes it easier for the subsequent stitching to produce a base image with a larger area.
After the input image and the matching bottom library images have been mapped, the mapped images can be fused in different ways, thereby realizing the stitching of the mapped input image and the mapped matching bottom library images. For example, the pixel values of the images at corresponding pixel points may be averaged as the pixel value of the new bottom library image at that pixel point, or the pixel values may be weighted and averaged as the pixel value of the new bottom library image at that pixel point. In the overlapping area of the input image and the matching bottom library images, multi-image fusion is effectively performed, which has a certain noise reduction effect. The following description takes the fusion method of weighted averaging of pixel values as an example:
Before splicing the mapped input image and the mapped matching base image, firstly acquiring a weight matrix of the mapped input image and a weight matrix of the mapped matching base image. Wherein the size of the weight matrix is the same as its corresponding image, each element in the weight matrix is the weight of the co-located pixel in its corresponding image, in other words the weight in the weight matrix has a pixel-level accuracy.
It should be noted that, the obtaining of the weight matrix of the mapped input image and the weight matrix of the mapped matching bottom library image is not necessarily performed after the mapping of the input image and the matching bottom library image, "the weight matrix of the mapped input image" merely indicates that the weight matrix and the mapped input image have a corresponding relationship, and the "the weight matrix of the mapped matching bottom library image" merely indicates that the weight matrix and the mapped matching bottom library image have a corresponding relationship.
After the weight matrices are obtained, the mapped input image and the mapped matching bottom library images can be fused pixel by pixel based on the weight matrices to obtain a new bottom library image; the fusion method is to perform a weighted average of the pixel values according to the weights given in the weight matrices. The formula can be expressed as:
Inew = Σi(Wij*Iij) / ΣiWij
Where the multiplication and division are performed pixel by pixel, I new represents the new bottom library image, I ij represents a mapped input image or a mapped matching bottom library image, and W ij represents the weight matrix of I ij. In the overlapping area of the input image and the matching bottom library images, a plurality of I ij take part in the weighted summation (i.e. i can take a plurality of values); outside the overlapping area of the input image and the matching bottom library images, i takes only a single value k, and the above formula can be simplified as follows:
Inew=Ikj*Wkj/Wkj=Ikj
I.e. directly preserve the pixel value of I kj.
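As an illustration of this weighted-average fusion, here is a small sketch assuming all mapped images and weight matrices have already been placed on a common canvas as NumPy arrays of identical shape; the epsilon guard for pixels covered by no image is an implementation detail not stated in the original:

```python
import numpy as np

def fuse_weighted(images, weights, eps=1e-8):
    """Pixel-by-pixel weighted average: Inew = sum_i(Wi * Ii) / sum_i(Wi).

    `images` and `weights` are lists of float arrays of identical shape, already
    mapped to the reference coordinate system (pixels outside an image's footprint
    carry weight 0 in its weight matrix).
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, w in zip(images, weights):
        num += w * img
        den += w
    # Where only one image covers a pixel the formula reduces to that image's value;
    # where no image covers it, eps avoids division by zero and the pixel stays 0.
    return num / (den + eps)
```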
In some implementations, the mapped input image and the mapped matching base image may be filtered (e.g., median filtered, gaussian filtered) first, noise in the image reduced, and then weighted fused to improve the quality of the obtained base image. In other implementations, the filtering operation may also be performed on the input image and the matching bottom-library image (referred to as the pre-mapped image).
Similarly, in some implementations, the weight matrix of the mapped input image and the weight matrix of the mapped matching base image may be filtered (e.g., median filtered, gaussian filtered) first, noise in the weight matrix is reduced, and then weighted fusion is performed using these weight matrices to improve the quality of the obtained base image. In other implementations, the filtering operation may also be performed on a weight matrix of the input image and a weight matrix of the matching bottom-library image (referring to a weight matrix before mapping, see mode 3 below for details).
In some implementations, a certain weight in the weight matrix represents the fingerprint sharpness of the corresponding image of the weight matrix at the pixel point to which the weight corresponds. In other words, the higher the fingerprint definition, the larger the value of the weight, otherwise the smaller the value of the weight. If the weight matrix has such a property, the blurred image or the blurred region in the image will correspond to a smaller weight, and the clear image or the clear region in the image will correspond to a larger weight, so that after the fusion, the pixel value in the new bottom library image will mainly be derived from the pixel value of the clear image or the clear region in the image at the corresponding position, thereby significantly improving the quality of the bottom library image.
Taking fingerprint definition as an example, the following describes how to obtain a weight matrix of the mapped input image and a weight matrix of the mapped matching base image:
Mode 1: input the mapped input image into a pre-trained neural network model and obtain the weight matrix of the mapped input image output by the neural network model, the neural network model having been trained so that the weights it outputs can represent fingerprint sharpness. For a mapped matching bottom library image, its weight matrix is likewise obtained through the neural network model. The specific structure of the neural network model in mode 1 is not limited; for example, an FCN, a U-Net, or another architecture may be used, and a training method that may be used for the neural network model will be described later.
Importantly, for the more blurred regions in the image, the neural network model will automatically assign less weight to them, so that even if the input image is more blurred (which may be referred to as dirty data), the quality impact on the new bottom library image during the fusion process is very limited due to the generally smaller weight in its corresponding weight matrix.
Mode 2: the image sharpness at each pixel point in the mapped input image is calculated using some image sharpness calculation method (e.g., a gradient-operator-based method) and used as the weight of its weight matrix at the corresponding position. For a mapped matching bottom library image, its weight matrix is likewise obtained by the image sharpness calculation method. Mode 2 is very efficient in computing the weight matrix, but has the problem that fingerprint sharpness is not completely equivalent to image sharpness, so the computed weights may be inaccurate.
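A possible gradient-operator implementation of mode 2 is sketched below; the Sobel-magnitude choice, the smoothing kernel, and the normalization into [0, 1] are assumptions for illustration rather than details fixed by the original:

```python
import cv2
import numpy as np

def sharpness_weight_matrix(image: np.ndarray, blur_ksize: int = 9) -> np.ndarray:
    """Estimate a per-pixel sharpness weight from local gradient strength."""
    img = image.astype(np.float32)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    # Smooth the gradient magnitude so the weight reflects a neighborhood, not a single pixel.
    mag = cv2.GaussianBlur(np.sqrt(gx * gx + gy * gy), (blur_ksize, blur_ksize), 0)
    return mag / (mag.max() + 1e-8)  # normalize weights into [0, 1]
```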
Mode 3: first obtain the weight matrix of the input image and the weight matrices of the matching bottom library images, and then map the weight matrix of the input image and the weight matrices of the matching bottom library images into the reference coordinate system using the geometric transformation matrices between the input image and the matching bottom library images, obtaining the mapped weight matrix of the input image and the mapped weight matrices of the matching bottom library images; the mapping procedure may refer to the mapping of the images.
Two points should be noted. First, the reference coordinate system should be consistent with the one used when mapping the images; for example, if the coordinate system of bottom library image j is used as the reference coordinate system, then the coordinate system of weight matrix j (the weight matrix corresponding to bottom library image j) should be used as the reference coordinate system when mapping the weight matrices. Second, strictly speaking, what is obtained after mapping the weight matrix of the input image is "the mapped weight matrix of the input image", but since it is actually used as "the weight matrix of the mapped input image", these two concepts are not strictly distinguished; the same applies to the weight matrices of the bottom library images.
Referring to the formula for image mapping above, the mapping process of the weight matrix can be expressed by the following formula:
Wij=T(Wi,Mij)
Wherein W i represents a weight matrix i (weight matrix corresponding to the bottom library image i), M ij represents a geometric transformation matrix of the bottom library image i to the bottom library image j, T represents geometric transformation corresponding to M ij, and W ij represents the weight matrix i after mapping.
As to how to obtain the weight matrix of the input image and the weight matrix of the matching bottom library image in the mode 3, there are a plurality of modes:
Mode 3.1 input the input image to a pre-trained neural network model, obtain a weight matrix of the input image output by the neural network model, the neural network model trained to output weights capable of representing fingerprint sharpness. For matching bottom library images, the weight matrix of the bottom library images is obtained through the neural network model. Mode 3.1 is similar to mode 1 and will not be repeated.
Mode 3.2 image sharpness at each pixel point in the input image is calculated as the weight of its weight matrix at the corresponding position by some image sharpness calculation method. For matching bottom library images, the weight matrix is also obtained by an image definition calculation method. Mode 3.2 is similar to mode 2 and will not be repeated.
And 3.3, inputting the input image into the pre-trained neural network model to obtain a weight matrix of the input image output by the neural network model. And for the matching bottom library image, directly reading the weight matrix stored before. The following describes how the weight matrix of the bottom library image is calculated and saved:
For each bottom library image, after its weight matrix is calculated for the first time, the weight matrix can be stored in association with the bottom library image, so that it can be read directly whenever the weight matrix of that bottom library image needs to be acquired later. There are only two cases in which the weight matrix of a bottom library image needs to be calculated: the first is when a new bottom library image is generated in step S170 (see step S170 for details), and the second is when a new bottom library image is generated based on the mapped input image and the mapped matching bottom library images, where the calculation formula can be expressed as:
Wnew = ΣiWij
Where W new represents the weight matrix of the new bottom library image, and W ij represents the weight matrix of the mapped input image or the weight matrix of a mapped matching bottom library image. It can be seen that the weight matrix of the new bottom library image is obtained by element-by-element accumulation (i.e. summing the weights of the weight matrices at corresponding positions) of the weight matrix of the mapped input image and the weight matrices of the mapped matching bottom library images.
Of course, for the second case, the weight matrix of the new bottom-bank image may be calculated with reference to the mode 3.1 or the mode 3.2, but in the following, for simplicity, the weight matrix of the new bottom-bank image is mainly calculated by using the above accumulation formula as an example.
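A minimal sketch of this accumulation and of associating the result with the new bottom library image follows; the dictionary-based storage structure is a hypothetical choice for illustration, not something specified by the method:

```python
import numpy as np

def accumulate_weight_matrix(mapped_weight_matrices):
    """Wnew: element-wise sum of the mapped weight matrices
    (the mapped input image plus the mapped matching bottom library images)."""
    return np.sum(np.stack(mapped_weight_matrices, axis=0), axis=0)

# Hypothetical storage: keep each bottom library image together with its weight matrix
# so that mode 3.3 can read the matrix directly during later entries.
fingerprint_base = []  # list of dicts: {"image": ndarray, "weights": ndarray}
# fingerprint_base.append({"image": I_new, "weights": accumulate_weight_matrix(W_list)})
```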
How the neural network model in mode 1 or mode 3.1 is trained is described further below. In some implementations, the training process is as follows:
Firstly, a training image is input into a neural network model to be trained, and a weight matrix of the training image output by the neural network model is obtained. Wherein the training image is a fingerprint image.
Then, a prediction loss is calculated according to the weight matrix of the training image and the label matrix of the training image, and parameters of the neural network model are updated by using a back propagation algorithm based on the prediction loss. The size of the label matrix is the same as the size of the training image and the size of the weight matrix of the training image, namely the pixel-by-pixel label. Each label in the label matrix takes one of a plurality of preset label values, and each label value represents a fingerprint definition. For example, the following 4 tag values may be included:
Tag value | Fingerprint definition | Description
0 | Non-fingerprint | No fingerprint, or the fingerprint is not discernible
1 | Clear | Fingerprint ridges are clearly visible
2 | Relatively blurred | Fingerprint ridges are shallow and faint
3 | Very blurred | The fingerprint is dirty and the ridges are discontinuous
When the prediction loss is calculated, the label values can be converted into corresponding weights so that the loss can be computed against the weight matrix output by the neural network model. For example, if each weight in the weight matrix lies in the interval [0,1], the above four label values may be converted as follows (the value before each arrow is the label value, the value after it is the weight): 0→0, 1→0.33, 2→0.66, 3→1. Of course, it is not excluded that in some implementations the label values directly take preset weight values (e.g., 0, 0.33, 0.66, 1 here), in which case no conversion is required.
And iteratively executing the steps of inputting the training image into the neural network model and updating the model parameters until the training ending condition is met, and obtaining the trained neural network model. The training end condition may be one or more of a condition that the model converges, a sufficient period of time has been trained, a sufficient number of rounds has been trained, a performance of the model reaches a preset standard, and the like.
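To make the training procedure concrete, here is a minimal sketch assuming PyTorch, a tiny fully convolutional model standing in for the FCN/U-Net mentioned above, MSE as the prediction loss, and the example label-to-weight conversion given earlier; all of these concrete choices are assumptions, not the original implementation:

```python
import torch
import torch.nn as nn

# Hypothetical tiny FCN that outputs one weight per pixel in [0, 1].
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Label-value -> weight conversion used when computing the prediction loss
# (values follow the example conversion given above).
label_to_weight = torch.tensor([0.0, 0.33, 0.66, 1.0])

def train_step(image: torch.Tensor, label_matrix: torch.Tensor) -> float:
    """image: (B,1,H,W) float tensor; label_matrix: (B,H,W) long tensor with values in {0,1,2,3}."""
    target = label_to_weight[label_matrix].unsqueeze(1)  # (B,1,H,W) per-pixel target weights
    pred = model(image)
    loss = loss_fn(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

This step would be iterated over the training images until the training end condition described above is met.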
The label matrix can be generated by manual labeling. Since pixel-by-pixel labeling is difficult, in some implementations a region-by-region labeling approach can be adopted: for a training image, a number of labeling regions (for example, polygonal regions) are manually delimited, the fingerprint sharpness within each labeling region being considered approximately the same; when delimiting the labeling regions, the region edges should coincide with the fingerprint edges in the training image as much as possible, so as to avoid a single labeling region containing both fingerprint and non-fingerprint content. In the label matrix (since the training image and the label matrix have the same size, a labeling region delimited on the training image also corresponds to a region of the label matrix), every label within the same labeling region is set to the same label value, and that label value represents the fingerprint sharpness of the corresponding training image within the labeling region. A sketch of turning such region annotations into a label matrix is given below.
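The following sketch shows one way such polygonal region annotations could be rasterized into a label matrix; the use of OpenCV's fillPoly and the default label value 0 for unannotated pixels are assumptions for illustration:

```python
import cv2
import numpy as np

def build_label_matrix(image_shape, regions):
    """regions: list of (polygon_points, label_value) pairs; every pixel inside a labeled
    polygon receives that region's label value, everything else defaults to 0 (non-fingerprint)."""
    label_matrix = np.zeros(image_shape[:2], dtype=np.uint8)
    for polygon, label_value in regions:
        pts = np.asarray(polygon, dtype=np.int32).reshape(-1, 1, 2)
        cv2.fillPoly(label_matrix, [pts], int(label_value))
    return label_matrix
```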
After labeling is completed, a dedicated reviewer can check and accept the labeling results, and only label matrices that pass acceptance are put into the actual training process. For example, the acceptance criteria may include one or more of the labeling accuracy for clear fingerprints (e.g., 99% or more), the labeling accuracy for the other sharpness levels (e.g., 95% or more), and the fineness of the labeled region edges (e.g., whether they coincide with the fingerprint edges in the training image).
For mode 3.3, after the weight matrix of a matching bottom library image is read, it is on the one hand mapped and then used in the weighted calculation of the new bottom library image, and on the other hand mapped and then used in the accumulation that yields the weight matrix of the new bottom library image. In some alternatives, however, the weight matrix of the matching bottom library image may first be attenuated, and the attenuated weight matrix is then mapped and used to calculate the new bottom library image and its weight matrix; the attenuation may also instead be applied to the already-mapped weight matrix of the matching bottom library image. For simplicity, only attenuation of the (pre-mapping) weight matrix of the matching bottom library image is described below as an example.
Attenuating the weight matrix of a matching bottom library image means attenuating all or part of the weights in that weight matrix, and attenuating a weight means reducing its value in some way. For example, if a weight is a positive number w, the attenuation may be to multiply w by an attenuation coefficient taking a value in [0,1], or to subtract from w an attenuation value taking a value in (0,w], and so on. Two possible attenuation strategies are given below:
Strategy 1: multiply the weight matrix of the matching bottom library image by an attenuation coefficient. The expression is as follows:
Wi' = α*Wi
Where W i denotes the weight matrix of the matching bottom library image, α denotes the attenuation coefficient, and W i' denotes W i after the attenuation processing.
The meaning of the attenuation processing is explained below in conjunction with strategy 1, assuming α = 0.9. After fingerprint entry starts, the first input image a1 is entered with weight matrix W1; because it is the first image, a1 is used directly as bottom library image b1 (see step S170 for details). The second input image a2 is entered with weight matrix W2; a2 matches b1, and the resulting bottom library image b2 has weight matrix W2 + 0.9*W1, so b2 can be understood as the fusion of a2 with 0.9 times b1. The third input image a3 is entered with weight matrix W3; a3 matches b2, and the resulting bottom library image b3 has weight matrix W3 + 0.9*W2 + 0.9²*W1, so b3 can be understood as the fusion of a3 with 0.9 times a2 and 0.81 times b1. The fourth input image a4 is entered with weight matrix W4; a4 matches b3, and the resulting bottom library image b4 has weight matrix W4 + 0.9*W3 + 0.9²*W2 + 0.9³*W1, so b4 can be understood as the fusion of a4 with 0.9 times a3, 0.81 times a2 and 0.729 times b1. Subsequent input images are handled in the same way, and so on.
Clearly, once the attenuation coefficient is introduced, the proportion that an earlier-generated bottom library image contributes to a newly generated bottom library image decreases markedly, i.e. its influence on the weighted fusion result becomes smaller and smaller until it is essentially negligible. In other words, the pixel values in a new bottom library image depend essentially only on the most recently generated bottom library images, and earlier-generated bottom library images are gradually "forgotten" as the number of fusions they have participated in increases. Thus, even if a blurred input image is entered at some point, its negative effect on the bottom library image does not last too long, i.e. it does not cause overly serious contamination of the fingerprint base.
Note that, in theory, even without an attenuation coefficient (or with the attenuation coefficient taken as 1), the influence of each bottom library image on the weighted fusion result gradually decreases as the number of entered images increases; however, this decrease is very slow, so contamination of the fingerprint base by a blurred image would persist for a long time. Moreover, in an actual entry process it is unlikely that very many input images will be entered, as doing so would degrade the user experience.
Strategy 2: multiply the weights in the weight matrix of the matching bottom library image that are greater than a third threshold by an attenuation coefficient, and keep the weights not greater than the third threshold at their original values. The expression is as follows:
Wi'(p,q) = α*Wi(p,q), if Wi(p,q) > th
Wi'(p,q) = Wi(p,q), if Wi(p,q) ≤ th
Where (p,q) represents any position in the matrix, and th represents the third threshold. The reason for adopting strategy 2 is analyzed below by comparison with strategy 1:
On the one hand, to achieve a good denoising effect, multi-image fusion must be sufficiently thorough, but attenuating the weights weakens the degree of fusion. For example, if α = 0.1, bottom library image b2 is the fusion of input image a2 with 0.1 times bottom library image b1, which (in the overlapping region of the images) is basically not much different from not using b1 at all.
On the other hand, since W new in mode 3.3 is obtained by accumulation, if a certain fingerprint region is entered many times (i.e. the region is contained in a number of the input images), the weight matrix of the bottom library image will be relatively large in that region and fusion denoising there is sufficient; whereas if a fingerprint region is entered only a few times, the weight matrix of the bottom library image will be relatively small in that region and fusion denoising there is insufficient.
In strategy 1, every weight in the weight matrix of the bottom library image is attenuated without distinction, so in fingerprint regions entered fewer times the originally small weights become even smaller due to attenuation, which does not help the fusion denoising of the image and leads to poorer quality of the newly fused bottom library image. If strategy 2 is adopted, in fingerprint regions entered fewer times (regions whose weights are not greater than the third threshold) the weights are not attenuated, which helps improve the denoising effect in those regions, while in fingerprint regions entered more times (regions whose weights are greater than the third threshold) the weights are attenuated, which provides resistance against contamination by dirty data.
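Both strategies reduce to a few array operations; the sketch below assumes NumPy arrays and illustrative default values for α and the third threshold th, which the original leaves unspecified:

```python
import numpy as np

def attenuate_strategy_1(W: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    """Strategy 1: multiply every weight by the attenuation coefficient."""
    return alpha * W

def attenuate_strategy_2(W: np.ndarray, alpha: float = 0.9, th: float = 5.0) -> np.ndarray:
    """Strategy 2: attenuate only the weights greater than the third threshold `th`;
    weights not greater than `th` keep their original values."""
    return np.where(W > th, alpha * W, W)
```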
In addition, it should be noted that attenuation of the weight matrix of the bottom library image is optional. If the weight matrix of the input image could accurately reflect the fingerprint sharpness in the image, for example if the weights were automatically set small whenever the input image is blurred, the effect of performing weight attenuation would not be obvious. But this is only an ideal case; the actually calculated weights do not necessarily reflect fingerprint sharpness accurately, so the weight attenuation processing has considerable practical value.
For the attenuation coefficient in strategy 1 or strategy 2, a fixed value may be used, or the coefficient may be adjusted dynamically as entry proceeds. The larger the attenuation coefficient, the more the influence of earlier bottom library images is preserved; the smaller the coefficient, the less their influence is preserved. The attenuation coefficient can be determined according to actual requirements: for example, if the quality of the input images fluctuates noticeably, i.e. they alternate between blurred and clear, the attenuation coefficient can be set larger; otherwise, if the most recent several input images are all blurred, their weights would dominate the fusion result and a bottom library image of very poor quality could be generated.
Step S160, removing the matching bottom library image from the fingerprint bottom library.
For the matching base images involved in the present entry, since their information is already contained in the newly generated base images, continuing to retain them in the fingerprint base may result in information redundancy in the fingerprint base, and may also interfere with the subsequent fingerprint entry process, these matching base images are removed from the fingerprint base in step S160. The way of removal may be to delete these matching base images or to mark them as no longer belonging to the current fingerprint base (so that they will no longer participate in matching with the input image during subsequent entry).
After the execution of step S160 is completed, if the recording process has not been completed, the process may return to step S110 to begin recording a new input image. The condition for ending the entering process may have different setting manners, for example, ending when a certain number of input images (for example, 15 images, 20 images, etc.) are entered, ending when the average area of all the bottom library images exceeds a certain threshold value, ending when the detected bottom library images already contain more complete fingerprints, etc.
Step S160 is optional, after step S150 is performed, if the recording process has not been completed, it may also return directly to step S110 to begin recording a new input image, which is simpler in terms of processing logic.
Step S170, determining the input image as a new bottom library image.
According to the foregoing, step S170 is performed in two cases: step S120 judges that the fingerprint base is empty, or step S140 judges that no bottom library image in the fingerprint base matches the input image. In the latter case, the fact that no bottom library image matches the input image indicates that the fingerprint area in the input image has no overlapping area with the fingerprint area in any bottom library image, so the input image cannot be stitched with a bottom library image to enlarge the bottom library image's area, and the input image can only be added to the fingerprint base as a new bottom library image.
In addition, if the weight matrix of the bottom library image is obtained in the manner 3.3 in the step S150, the weight matrix of the new bottom library image (herein, only the new bottom library image generated in the step S170) needs to be calculated in the step S170, and is associated with the new bottom library image for storage, so that the weight matrix can be directly read and used later, and the manner of calculating the weight matrix can be referred to as the manner 3.1 or the manner 3.2, and will not be repeated.
After the execution of step S170 is completed, if the recording process has not been completed, the process may return to step S110 to begin recording a new input image.
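Putting steps S110 to S170 together, the overall entry flow can be sketched as follows; the helper callables (acquire, match, stitch, entry_finished) are placeholders standing in for the acquisition, matching, stitching and termination logic described above, and their exact signatures are assumptions for illustration:

```python
def run_fingerprint_entry(acquire, match, stitch, entry_finished, max_images=20):
    """High-level sketch of the entry flow (steps S110-S170)."""
    fingerprint_base = []
    for _ in range(max_images):
        input_image = acquire()                                  # step S110
        if not fingerprint_base:                                 # step S120: base is empty
            fingerprint_base.append(input_image)                 # step S170
        else:
            matched = match(input_image, fingerprint_base)       # steps S130/S140
            if matched:
                new_base_image = stitch(input_image, matched)    # step S150
                fingerprint_base = [b for b in fingerprint_base
                                    if not any(b is m for m in matched)]  # step S160 (optional)
                fingerprint_base.append(new_base_image)
            else:
                fingerprint_base.append(input_image)             # step S170
        if entry_finished(fingerprint_base):                     # end-of-entry condition
            break
    return fingerprint_base
```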
The beneficial effects of the fingerprint input method provided by the embodiment of the present application are briefly analyzed as follows:
Firstly, the area of the bottom library image is enlarged through image stitching in the process of inputting the input image (the area of the new bottom library image is not smaller than that of the bottom library image or the input image before stitching), so that more fingerprint features are extracted from the bottom library image to meet the high-precision verification requirement. For example, some features are located at the edge of the base image before stitching, resulting in difficulty in extraction, but after stitching, the features may be located inside the new base image and so easily extracted.
At present, along with the requirement of the mobile phone under-screen fingerprint identification on high integration, the area of the fingerprint image collected by the fingerprint collection module is continuously reduced, and effective fingerprint features are difficult to extract on the small-area fingerprint image, so that negative influence is caused on fingerprint identification precision.
And secondly, splicing fingerprint images based on the overlapped areas is equivalent to performing multi-image fusion noise reduction in the overlapped areas, so that the quality of the bottom library image is improved, and the fingerprint identification precision is improved. Still take the fingerprint identification under the screen as an example, because of the interference brought by the screen, fingerprint signals and non-fingerprint signals are mixed mutually, so that the collected fingerprint image has serious noise, and the noise can be removed to a certain extent after the fingerprint input method provided by the embodiment of the application is applied, thereby avoiding the negative influence of the noise on the fingerprint identification precision as much as possible.
In addition, in some implementations, after the new bottom library image is generated by stitching, the bottom library image before stitching may be removed from the fingerprint bottom library (step S160), which is advantageous to reduce redundant storage of the fingerprint information of the same location in the bottom library image (i.e., the fingerprint information contained in the overlapping area is only stored in one copy in the new bottom library image), thereby reducing storage resources and computing resources used in the fingerprint input process and the subsequent fingerprint identification process.
In some implementations, only a predetermined number of base images are stored in the final fingerprint base in view of storage performance, computing performance, and the like. Two possible schemes are listed below:
Scheme 1: after all input images have been entered, retain the preset number of bottom library images with the largest areas in the fingerprint base and remove the other bottom library images from the fingerprint base. For example, if the preset number is 3 and 10 bottom library images were obtained in total during entry, only the 3 bottom library images with the largest areas are retained and the remaining 7 are removed from the fingerprint base; the manner of removal has been described in step S160. The preset number of largest-area bottom library images in the fingerprint base can be determined by sorting; of course, the sorting need not be performed only after entry ends, it can also be performed each time an input image is entered.
In the scheme 2, after one input image is input (after step S160 or step S170 is executed, a new bottom library image is necessarily added in the fingerprint bottom library), whether the total number of the bottom library images in the fingerprint bottom library exceeds the preset number is judged, if the total number does not exceed the preset number, no processing is performed, the input is continuously executed, and if the total number exceeds the preset number, the bottom library image with the smallest area in the fingerprint bottom library is removed from the fingerprint bottom library, for example, the bottom library image with the smallest area in the fingerprint bottom library can be found in a sorting mode.
Further, since the matching bottom library image is removed when step S160 is performed, the number of bottom library images is not increased, or may be reduced (in the case of a plurality of matching bottom library images) when step S160 is performed, that is, the number of bottom library images is not caused to exceed the preset number when step S160 is performed, so in some implementations, it may be determined whether the total number of bottom library images in the fingerprint bottom library has exceeded the preset number only after step S170 is performed.
By comparison, since all bottom library images are retained during the entry process, scheme 1 may obtain better results (for example, the areas of the finally obtained bottom library images are larger), but it consumes more storage and computing resources during entry. In scheme 2, bottom library images with smaller areas are continually removed during entry, so the number of bottom library images in the fingerprint base is always kept at a small level and entry does not consume excessive storage or computing resources; the inventor has found that although the number of bottom library images in scheme 2 is always kept at a small level, the finally obtained fingerprint base still has high usability.
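Scheme 2 amounts to a simple pruning rule applied after each entry; a minimal sketch, assuming each bottom library image exposes a NumPy-style .shape so its area can be compared, and that the check is run after every new bottom library image is added:

```python
def prune_fingerprint_base(fingerprint_base, preset_number: int):
    """Scheme 2: if the base holds more than `preset_number` images, remove the smallest-area one."""
    if len(fingerprint_base) <= preset_number:
        return fingerprint_base
    area = lambda img: img.shape[0] * img.shape[1]
    smallest = min(fingerprint_base, key=area)
    return [img for img in fingerprint_base if img is not smallest]
```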
Finally, it is briefly explained how to perform fingerprint identification based on the obtained fingerprint database, and the fingerprint identification may occur in the scenes of equipment login, screen unlocking, access identity verification and the like. It is assumed that an overall fingerprint base has been constructed prior to identification, the overall fingerprint base including at least one constructed fingerprint base. One possible identification procedure is as follows:
First, a comparison image is acquired. The comparison image refers to a fingerprint image to be verified, and for example, the comparison image can be acquired in real time through a fingerprint acquisition module.
And then, acquiring the characteristics of the comparison image and the characteristics of the bottom library images in the overall fingerprint bottom library, matching the comparison image with the bottom library images based on the acquired characteristics, and determining whether the comparison image passes verification according to a matching result. The matching of the comparison image with a certain base image means that an overlapping area exists between a fingerprint area in the comparison image and a fingerprint area in the base image, if the comparison image is matched with any base image, the comparison image is confirmed to pass verification, otherwise, the comparison image is confirmed to not pass verification.
Regarding the extraction of the image features, reference may be made to step S130. If the characteristics of each base image are already stored in the process of constructing each fingerprint base, the base image can be directly read and used in fingerprint identification without extraction. Assuming that no features of any bottom-library image are currently extracted, at least two approaches can be taken:
First, the features of every bottom library image in the overall fingerprint base are extracted, and the bottom library images and the comparison image are matched based on the extracted features. These features can also be saved so that they need not be extracted again when other comparison images are identified later.
Second, each time the features of one bottom library image in the overall fingerprint base are extracted, that bottom library image is immediately matched against the comparison image; if the match succeeds, there is no need to continue extracting features of the next bottom library image, and if it fails, the extracted features can be saved so that, when other comparison images are identified later, features need not be extracted again for the bottom library images whose features have already been extracted.
Regarding the matching process of the comparison image and the base image, the relevant contents of the feature matching and the spatial matching in step S130 may be referred to, and are not repeated. The fingerprint base constructed by the fingerprint input method provided by the embodiment of the application has the advantages of large base image area, high image quality and the like, so that the accuracy of fingerprint identification based on the overall fingerprint base (comprising at least one constructed fingerprint base) is higher.
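The second identification approach above can be sketched as follows; the dictionary layout of the overall base and the `extract_features` / `match` callables are placeholders for the feature extraction and the feature/spatial matching of step S130, not fixed by the original:

```python
def verify_fingerprint(comparison_image, overall_base, extract_features, match):
    """Lazy identification: extract features of one bottom library image at a time,
    stop as soon as a match is found, and cache features for later comparisons."""
    probe_features = extract_features(comparison_image)
    for entry in overall_base:                       # entry: dict with "image" / "features"
        if entry.get("features") is None:
            entry["features"] = extract_features(entry["image"])  # cache for reuse
        if match(probe_features, entry["features"]):
            return True                              # comparison image passes verification
    return False
```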
Fig. 2 shows a functional block diagram of a fingerprint input device 200 according to an embodiment of the present application. Referring to fig. 2, the fingerprint-entry device 200 includes:
an image acquisition module 210, configured to acquire an input image, where the input image is a fingerprint image to be recorded;
The image matching module 220 is configured to obtain characteristics of the input image and characteristics of each base image in the fingerprint base, and match the input image with each base image based on the obtained characteristics, where the matching of the input image with the base image means that there is an overlapping area between a fingerprint area in the input image and a fingerprint area in the base image, and the base image matched with the input image is a matched base image;
and the image stitching module 230 is configured to stitch the input image and the matching bottom library image based on an overlapping area in the input image and the matching bottom library image when the matching bottom library image exists, so as to obtain a new bottom library image.
In one implementation of the fingerprint input device 200, the image stitching module 230 stitches the input image and the matching base image based on the overlapping area in the input image and the matching base image to obtain a new base image, which includes mapping the input image and the matching base image to a reference coordinate system by using a geometric transformation matrix between the input image and the matching base image, wherein the geometric transformation matrix between the input image and any matching base image is calculated according to a matching point pair between the input image and the matching base image, the matching point pair between the input image and the matching base image is a point pair in the overlapping area in the two images, and stitching the mapped input image and the mapped matching base image to obtain the new base image.
In one implementation of the fingerprint input device 200, the image stitching module 230 maps the input image and the matching base image to a reference coordinate system using a geometric transformation matrix between the input image and the matching base image, including mapping the matching base image to a coordinate system in which the input image is located using a geometric transformation matrix between the input image and the matching base image, or mapping the input image and a non-target base image to a coordinate system in which a target base image is located using a geometric transformation matrix between the input image and the matching base image, wherein the target base image is a base image selected from the matching base images, and the non-target base image is an image of the matching base image other than the target base image.
In one implementation of the fingerprint-entry device 200, the target base image is the largest-area base image selected from the matching base images.
In one implementation of the fingerprint input device 200, the image stitching module 230 maps the input image and the non-target base image to a coordinate system where the target base image is located using a geometric transformation matrix between the input image and the matching base image, including mapping the input image to the coordinate system where the target base image is located using a geometric transformation matrix between the input image and the target base image, calculating a geometric transformation matrix between the non-target base image and the target base image based on a geometric transformation matrix between the input image and the non-target base image and a geometric transformation matrix between the input image and the target base image, and mapping the non-target base image to the coordinate system where the target base image is located using a geometric transformation matrix between the non-target base image and the target base image.
In one implementation of the fingerprint input device 200, the image matching module 220 obtains the features of the input image and the features of each base image in the fingerprint base and matches the input image against each base image based on the obtained features, including: for each base image in the fingerprint base, obtaining at least one feature of the input image and at least one feature of the base image, and determining the matched features between the features of the input image and the features of the base image, wherein each feature corresponds to one pixel point in an image, and the two pixel points corresponding to two matched features form one matching point pair between the input image and the base image; if the total number of matching point pairs is greater than a first threshold, calculating a geometric transformation matrix between the input image and the base image according to the matching point pairs; and judging whether each matching point pair conforms to the geometric transformation represented by the geometric transformation matrix, and if the total number of matching point pairs conforming to the geometric transformation is greater than a second threshold, determining that the input image matches the base image.
In one implementation of the fingerprint input device 200, the image matching module 220 obtains the features of the input image and the features of each base image in the fingerprint base and matches the input image against each base image based on the obtained features, including: for each base image in the fingerprint base, obtaining multiple classes of features of the input image and multiple classes of features of the base image; for each class of features, determining the matched features between that class of features of the input image and that class of features of the base image, wherein each class of features of each image includes at least one feature, each feature corresponds to one pixel point in the image, and the two pixel points corresponding to two matched features form one matching point pair between the input image and the base image; if the total number of matching point pairs is greater than a first threshold, calculating an initial geometric transformation matrix between the input image and the base image according to the matching point pairs; judging whether each matching point pair conforms to the geometric transformation represented by the initial geometric transformation matrix, and if the total number of matching point pairs conforming to the geometric transformation is greater than a second threshold, determining that the input image matches the base image under that class of features; and if the input image matches the base image under every class of features and the initial geometric transformation matrices calculated under every class of features are consistent, determining that the input image matches the base image, and calculating the geometric transformation matrix from the initial geometric transformation matrices calculated under the various classes of features.
In one implementation of the fingerprint input device 200, the image stitching module 230 is further configured to obtain a weight matrix of the mapped input image and a weight matrix of the mapped matching base image before stitching the mapped input image and the mapped matching base image, where the weight matrix includes weights of pixels in corresponding images;
The image stitching module 230 stitches the mapped input image and the mapped matching base image to obtain a new base image, including fusing the mapped input image and the mapped matching base image pixel by pixel based on the obtained weight matrix to obtain a new base image.
In one implementation of the fingerprint input device 200, the image stitching module 230 obtains a weight matrix of the mapped input image and a weight matrix of the mapped matching base image, where the weight matrix includes weights of pixels in corresponding images, and maps the weight matrix of the input image and the weight matrix of the matching base image to the reference coordinate system by using a geometric transformation matrix between the input image and the matching base image to obtain a weight matrix of the mapped input image and a weight matrix of the matching base image.
In one implementation of the fingerprint-entry device 200, the image stitching module 230 is further configured to obtain a weight matrix of the new bottom-stock image, and store the weight matrix in association with the new bottom-stock image.
In one implementation of the fingerprint input device 200, the image stitching module 230 obtains a weight matrix of a new bottom-stock image, including accumulating the mapped weight matrix of the input image and the mapped weight matrix of the matching bottom-stock image element by element, to obtain a weight matrix of the new bottom-stock image.
In one implementation of the fingerprint-entry device 200, the weights in the weight matrix represent the fingerprint sharpness of the corresponding image of the weight matrix at the corresponding pixel points.
In one implementation of the fingerprint input device 200, the image stitching module 230 obtains a weight matrix of the input image, including inputting the input image to a pre-trained neural network model, obtaining a weight matrix of the input image output by the neural network model.
In one implementation of the fingerprint-entry device 200, the device further comprises:
The model training module is used for inputting a training image into a neural network model to be trained before the image stitching module 230 inputs the input image into a pre-trained neural network model to obtain a weight matrix of the input image output by the neural network model, obtaining a weight matrix of the training image output by the neural network model, wherein the training image is a fingerprint image, calculating a prediction loss according to the weight matrix of the training image and a label matrix of the training image, and updating parameters of the neural network model by using a back propagation algorithm based on the prediction loss, wherein each label in the label matrix takes one of a plurality of preset label values, each label value represents fingerprint definition, and continuously inputting the training image to train the neural network model until a training end condition is met, so as to obtain a trained neural network model.
In one implementation of the fingerprint input device 200, the tag matrix includes a plurality of tag areas, and each tag in the same tag area takes the same tag value.
In one implementation of the fingerprint input device 200, the image stitching module 230 is further configured to attenuate weights in the weight matrix of the matching base image after acquiring the weight matrix of the input image and the weight matrix of the matching base image, and before mapping the weight matrix of the input image and the weight matrix of the matching base image under the reference coordinate system using the geometric transformation matrix between the input image and the matching base image.
In one implementation of the fingerprint entry device 200, the image stitching module 230 performs an attenuation process on weights in the weight matrix of the matching base image, including multiplying the weight matrix of the matching base image by an attenuation coefficient, or multiplying weights in the weight matrix of the matching base image that are greater than a third threshold by an attenuation coefficient.
In one implementation of the fingerprint-entry device 200, the device further comprises:
the image removing module is configured to remove the matching base image from the fingerprint base after the image stitching module 230 obtains a new base image when the matching base image exists.
In one implementation of the fingerprint-entry device 200, the device further comprises:
And the direct input module is used for determining the input image as a new bottom library image when the fingerprint bottom library is empty or the matched bottom library image does not exist in the fingerprint bottom library, acquiring a weight matrix of the new bottom library image, and associating and storing the acquired weight matrix with the new bottom library image.
In one implementation of the fingerprint input device 200, the image removing module is further configured to determine, after each new base image is generated, whether the total number of base images in the fingerprint base has exceeded a preset number, and if the total number has exceeded the preset number, remove the base image with the smallest area in the fingerprint base from the fingerprint base, or after all the input images are processed, reserve the preset number of base images with the largest area in the fingerprint base, and remove other base images from the fingerprint base.
In one implementation of the fingerprint input device 200, the image acquisition module 210 is further configured to acquire a comparison image, where the comparison image is a fingerprint image to be verified;
The image matching module 220 is further configured to obtain features of the comparison image and features of a base image in an overall fingerprint base, match the comparison image with the base image based on the obtained features, and determine whether the comparison image passes verification according to a matching result, where the overall fingerprint base includes at least one constructed fingerprint base, and the matching of the comparison image with the base image means that there is an overlapping area between a fingerprint area in the comparison image and a fingerprint area in the base image, and if the comparison image matches any base image, it is determined that the comparison image passes verification, and if not, it is determined that the comparison image does not pass verification.
The fingerprint input device 200 according to the embodiment of the present application has been described in the foregoing method embodiment; for brevity, where the device embodiment does not mention something, reference may be made to the corresponding content of the method embodiment.
Fig. 3 shows a possible structure of an electronic device 300 according to an embodiment of the application. Referring to fig. 3, an electronic device 300 includes a processor 310, a memory 320, and a fingerprint acquisition module 330 that are interconnected and communicate with each other by a communication bus 340 and/or other forms of connection mechanisms (not shown).
The memory 320 includes one or more memories (only one is shown in the figure), which may be, but are not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like. The processor 310, as well as other possible components, may access the memory 320 and read and/or write data therein.
The processor 310 includes one or more processors (only one is shown), which may be an integrated circuit chip having signal processing capability. The processor 310 may be a general-purpose processor, including a central processing unit (CPU), a micro controller unit (MCU), a network processor (NP) or another conventional processor, or a special-purpose processor, including a graphics processing unit (GPU), a neural-network processing unit (NPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. When there are a plurality of processors 310, some of them may be general-purpose processors and others may be special-purpose processors.
The fingerprint acquisition module 330 is used for acquiring a fingerprint image as an input image in the fingerprint input method provided by the embodiment of the application, and can be an off-screen fingerprint acquisition module or a non-off-screen fingerprint acquisition module.
One or more computer program instructions may be stored in memory 320 that may be read and executed by processor 310 to implement the fingerprint entry method provided by embodiments of the present application, as well as other desired functions.
It is to be understood that the configuration shown in fig. 3 is illustrative only, and that electronic device 300 may also include more or fewer components than shown in fig. 3, or have a different configuration than shown in fig. 3. For example, in some implementations of the electronic device 300, the fingerprint acquisition module 330 may not be included, and the electronic device 300 acquires a set of input images through a network, and then performs the fingerprint input method provided by the embodiment of the present application to input the set of input images into the fingerprint base.
Furthermore, the components shown in FIG. 3 may be implemented in hardware, software, or a combination thereof. The electronic device 300 may be a physical device such as a PC, a notebook, a tablet, a cell phone, a server, an embedded device, etc., or may be a virtual device such as a virtual machine, a virtualized container, etc. The electronic device 300 is not limited to a single device, and may be a combination of a plurality of devices or a cluster of a large number of devices.
The embodiment of the application also provides a computer readable storage medium, and the computer readable storage medium is stored with computer program instructions which execute the fingerprint input method provided by the embodiment of the application when being read and run by a processor of a computer. For example, the computer-readable storage medium may be implemented as memory 320 in electronic device 300 in FIG. 3.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (21)

1.一种指纹录入方法,其特征在于,包括:1. A fingerprint entry method, comprising: 获取输入图像,所述输入图像为待录入的指纹图像;Acquire an input image, where the input image is a fingerprint image to be recorded; 获取所述输入图像的特征以及指纹底库中的每张底库图像的特征,并基于获取到的特征将所述输入图像分别与每张底库图像进行匹配;其中,所述输入图像与所述底库图像匹配,是指所述输入图像中的指纹区域和所述底库图像中的指纹区域存在重叠区域,与所述输入图像匹配的底库图像为匹配底库图像;Acquire the features of the input image and the features of each base image in the fingerprint base database, and match the input image with each base image based on the acquired features; wherein the input image matches the base image, which means that there is an overlapping area between the fingerprint area in the input image and the fingerprint area in the base image, and the base image that matches the input image is the matching base image; 若存在所述匹配底库图像,则基于所述输入图像和所述匹配底库图像中的重叠区域,将所述输入图像和所述匹配底库图像进行拼接,获得新的底库图像;If the matching base image exists, based on the overlapping area between the input image and the matching base image, the input image and the matching base image are spliced to obtain a new base image; 所述基于所述输入图像和所述匹配底库图像中的重叠区域,将所述输入图像和所述匹配底库图像进行拼接,获得新的底库图像,包括:The step of splicing the input image and the matching base image based on the overlapping area between the input image and the matching base image to obtain a new base image includes: 利用所述输入图像和所述匹配底库图像之间的几何变换矩阵,将所述输入图像和所述匹配底库图像映射至一基准坐标系下;其中,所述输入图像和任一匹配底库图像之间的几何变换矩阵是根据所述输入图像和该底库图像之间的匹配点对计算出的,所述输入图像和该匹配底库图像之间的匹配点对是两张图像中的重叠区域内的点对;Mapping the input image and the matching base image to a reference coordinate system using a geometric transformation matrix between the input image and the matching base image; wherein the geometric transformation matrix between the input image and any matching base image is calculated based on matching point pairs between the input image and the base image, and the matching point pairs between the input image and the matching base image are point pairs within an overlapping area of the two images; 将映射后的所述输入图像和映射后的所述匹配底库图像进行拼接,获得新的底库图像;Splicing the mapped input image and the mapped matching base image to obtain a new base image; 在所述将映射后的所述输入图像和映射后的所述匹配底库图像进行拼接之前,所述方法还包括:Before stitching the mapped input image and the mapped matching base image, the method further includes: 获取映射后的所述输入图像的权重矩阵和映射后的所述匹配底库图像的权重矩阵,所述权重矩阵包括其对应图像中各个像素的权重;Obtaining a weight matrix of the mapped input image and a weight matrix of the mapped matching base image, wherein the weight matrix includes a weight of each pixel in the corresponding image; 所述将映射后的所述输入图像和映射后的所述匹配底库图像进行拼接,获得新的底库图像,包括:The step of splicing the mapped input image and the mapped matching base image to obtain a new base image includes: 基于获取到的权重矩阵将映射后的所述输入图像和映射后的所述匹配底库图像进行逐像素融合,获得新的底库图像;Based on the acquired weight matrix, the mapped input image and the mapped matching base image are fused pixel by pixel to obtain a new base image; 所述获取映射后的所述输入图像的权重矩阵和映射后的所述匹配底库图像的权重矩阵,包括:The step of obtaining the weight matrix of the mapped input image and the weight matrix of the mapped matching base image comprises: 获取所述输入图像的权重矩阵和所述匹配底库图像的权重矩阵,所述权重矩阵包括其对应图像中的各个像素的权重;Obtaining a weight matrix of the input image and a weight matrix of the matching base library image, wherein the weight matrix includes the weight of each pixel in the corresponding image; 利用所述输入图像和所述匹配底库图像之间的几何变换矩阵,将所述输入图像的权重矩阵和所述匹配底库图像的权重矩阵映射至所述基准坐标系下,获得映射后的所述输入图像的权重矩阵和映射后的所述匹配底库图像的权重矩阵。The weight matrix of the input image and the weight matrix of the matching base image are mapped to the reference coordinate system by utilizing the geometric transformation matrix between the input image and the matching base image 
2. The fingerprint entry method according to claim 1, characterized in that mapping the input image and the matching base image to a reference coordinate system using the geometric transformation matrix between the input image and the matching base image comprises:
mapping the matching base image to the coordinate system of the input image using the geometric transformation matrix between the input image and the matching base image; or,
mapping the input image and the non-target base images to the coordinate system of a target base image using the geometric transformation matrices between the input image and the matching base images; wherein the target base image is one base image selected from the matching base images, and the non-target base images are the matching base images other than the target base image.
3. The fingerprint entry method according to claim 2, characterized in that the target base image is the base image with the largest area selected from the matching base images.
4. The fingerprint entry method according to claim 2, characterized in that mapping the input image and the non-target base images to the coordinate system of the target base image using the geometric transformation matrices between the input image and the matching base images comprises:
mapping the input image to the coordinate system of the target base image using the geometric transformation matrix between the input image and the target base image;
calculating the geometric transformation matrix between a non-target base image and the target base image from the geometric transformation matrix between the input image and that non-target base image and the geometric transformation matrix between the input image and the target base image;
mapping the non-target base image to the coordinate system of the target base image using the geometric transformation matrix between the non-target base image and the target base image.
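A minimal sketch of the coordinate-frame bookkeeping in claims 2 to 4, assuming 3×3 homographies with a source-to-destination convention and a list of matching base images carrying a precomputed area; the dictionary layout and function names are assumptions for illustration.

```python
import numpy as np


def pick_target(matching_bases):
    """Claim 3: choose, as the target, the matching base image with the largest area."""
    return max(matching_bases, key=lambda b: b["area"])


def nontarget_to_target(H_input_to_target, H_input_to_nontarget):
    """Claim 4: the transform that maps a non-target base image directly into the
    target base image's coordinate system, composed through the input image
    (non-target -> input -> target)."""
    return H_input_to_target @ np.linalg.inv(H_input_to_nontarget)
```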
5. The fingerprint entry method according to any one of claims 1 to 4, characterized in that acquiring the features of the input image and the features of each base image in the fingerprint base database, and matching the input image against each base image based on the acquired features, comprises:
for each base image in the fingerprint base database, acquiring at least one feature of the input image and at least one feature of that base image, and determining the matching features between the features of the input image and the features of that base image; wherein each feature corresponds to one pixel in its image, and the two pixels corresponding to two matching features form one matching point pair between the input image and that base image;
if the total number of matching point pairs is greater than a first threshold, calculating the geometric transformation matrix between the input image and that base image from the matching point pairs;
determining whether each matching point pair conforms to the geometric transformation represented by the geometric transformation matrix, and if the total number of matching point pairs conforming to the geometric transformation is greater than a second threshold, determining that the input image matches that base image.
6. The fingerprint entry method according to any one of claims 1 to 4, characterized in that acquiring the features of the input image and the features of each base image in the fingerprint base database, and matching the input image against each base image based on the acquired features, comprises:
for each base image in the fingerprint base database, acquiring multiple categories of features of the input image and multiple categories of features of that base image;
for each category of features, determining the matching features between that category of features of the input image and that category of features of that base image; wherein each category of features of each image includes at least one feature, each feature corresponds to one pixel in its image, and the two pixels corresponding to two matching features form one matching point pair between the input image and that base image;
if the total number of matching point pairs is greater than a first threshold, calculating an initial geometric transformation matrix between the input image and that base image from the matching point pairs;
determining whether each matching point pair conforms to the geometric transformation represented by the initial geometric transformation matrix, and if the total number of matching point pairs conforming to the geometric transformation is greater than a second threshold, determining that the input image matches that base image under that category of features;
if the input image matches that base image under every category of features and the initial geometric transformation matrices calculated under the different categories of features are consistent, determining that the input image matches that base image, and calculating the geometric transformation matrix from the initial geometric transformation matrices calculated under the categories of features.
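A minimal sketch of the per-base-image matching test in claim 5. The patent does not name a feature type, so ORB descriptors and a RANSAC-estimated homography stand in for "features that each correspond to one pixel" and for the geometric-consistency check; both threshold values are assumed.

```python
import numpy as np
import cv2

FIRST_THRESHOLD = 12   # minimum number of matching point pairs before estimating a transform (assumed)
SECOND_THRESHOLD = 8   # minimum number of pairs that must conform to the transform (assumed)


def match_images(input_img, base_img):
    """Return (matched, H) where H maps input-image coordinates to base-image coordinates."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(input_img, None)
    kp2, des2 = orb.detectAndCompute(base_img, None)
    if des1 is None or des2 is None:
        return False, None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) <= FIRST_THRESHOLD:        # first threshold on the number of matching point pairs
        return False, None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return False, None
    inliers = int(inlier_mask.sum())           # pairs that conform to the estimated geometric transform
    return inliers > SECOND_THRESHOLD, H       # second threshold on conforming pairs
```

Claim 6's multi-category variant would run the same test once per feature category and additionally require the per-category transforms to agree before declaring a match.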
7. The fingerprint entry method according to claim 1, characterized in that the method further comprises:
obtaining a weight matrix of the new base image, and saving that weight matrix in association with the new base image.
8. The fingerprint entry method according to claim 7, characterized in that obtaining the weight matrix of the new base image comprises:
accumulating, element by element, the weight matrix of the mapped input image and the weight matrix of the mapped matching base image to obtain the weight matrix of the new base image.
9. The fingerprint entry method according to claim 1, characterized in that the weights in a weight matrix represent the fingerprint clarity of the corresponding image at the corresponding pixels.
10. The fingerprint entry method according to claim 1, characterized in that obtaining the weight matrix of the input image comprises:
inputting the input image into a pre-trained neural network model to obtain the weight matrix of the input image output by the neural network model.
11. The fingerprint entry method according to claim 10, characterized in that, before inputting the input image into the pre-trained neural network model to obtain the weight matrix of the input image output by the neural network model, the method further comprises:
inputting a training image into the neural network model to be trained to obtain the weight matrix of the training image output by the neural network model; wherein the training image is a fingerprint image;
calculating a prediction loss from the weight matrix of the training image and a label matrix of the training image, and updating the parameters of the neural network model with a back-propagation algorithm based on the prediction loss; wherein each label in the label matrix takes one of a plurality of preset label values, and each label value represents a level of fingerprint clarity;
continuing to input training images to train the neural network model until a training termination condition is met, obtaining the trained neural network model.
12. The fingerprint entry method according to claim 11, characterized in that the label matrix comprises a plurality of annotated regions, and every label within the same annotated region takes the same label value.
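A minimal sketch of the weight-matrix prediction in claims 10 to 12, written with PyTorch as an assumed framework: a small fully convolutional network classifies each pixel into one of a few preset clarity levels, matching a label matrix whose annotated regions share a single label value; the architecture, the number of levels, and the weight convention at inference are illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_LEVELS = 4  # assumed number of preset clarity label values


class ClarityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, NUM_LEVELS, 1),   # per-pixel logits over the clarity levels
        )

    def forward(self, x):
        return self.body(x)


def train_step(model, optimizer, image, label_matrix):
    """One update of claim 11: image is (N, 1, H, W) float, label_matrix is
    (N, H, W) long with values in [0, NUM_LEVELS)."""
    logits = model(image)
    loss = nn.functional.cross_entropy(logits, label_matrix)  # prediction loss against the label matrix
    optimizer.zero_grad()
    loss.backward()                                           # back-propagation
    optimizer.step()
    return loss.item()


def weight_matrix(model, image):
    """Claim 10 at enrollment time: turn the predicted clarity level of each pixel
    into its weight (here simply the argmax level, an assumed convention)."""
    with torch.no_grad():
        return model(image).argmax(dim=1).float()
```

An optimizer such as torch.optim.Adam(model.parameters(), lr=1e-3) would drive train_step on successive training images until the training termination condition of claim 11 is met.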
13. The fingerprint entry method according to claim 8, characterized in that, after obtaining the weight matrix of the input image and the weight matrix of the matching base image, and before mapping the weight matrix of the input image and the weight matrix of the matching base image to the reference coordinate system using the geometric transformation matrix between the input image and the matching base image, the method further comprises:
attenuating the weights in the weight matrix of the matching base image.
14. The fingerprint entry method according to claim 13, characterized in that attenuating the weights in the weight matrix of the matching base image comprises:
multiplying the weight matrix of the matching base image by an attenuation coefficient; or,
multiplying the weights in the weight matrix of the matching base image that are greater than a third threshold by an attenuation coefficient.
15. The fingerprint entry method according to claim 1, characterized in that, if the matching base image exists, after obtaining the new base image the method further comprises:
removing the matching base image from the fingerprint base database.
16. The fingerprint entry method according to claim 1, characterized in that the method further comprises:
if the fingerprint base database is empty or contains no matching base image, determining the input image to be a new base image, obtaining the weight matrix of that new base image, and saving the obtained weight matrix in association with that new base image.
17. The fingerprint entry method according to claim 1, characterized in that, after each new base image is generated, the method further comprises: determining whether the total number of base images in the fingerprint base database exceeds a preset number, and if so, removing the base image with the smallest area from the fingerprint base database; or,
after all input images have been processed, the method further comprises: retaining the preset number of base images with the largest areas in the fingerprint base database, and removing the other base images from the fingerprint base database.
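A minimal sketch of the weight attenuation in claims 13 and 14 and the capacity control of claim 17; the attenuation coefficient, the third threshold, the preset number, and the dictionary layout are assumptions.

```python
def attenuate(base_weights, coeff=0.9, third_threshold=None):
    """Claim 14: scale the whole weight matrix of the matching base image, or only
    the weights above a threshold, by an attenuation coefficient."""
    if third_threshold is None:
        return base_weights * coeff
    out = base_weights.copy()
    out[out > third_threshold] *= coeff
    return out


def prune_base(base_images, preset_number=20):
    """Claim 17 (second variant): keep only the preset number of base images with
    the largest areas; each entry is assumed to carry an 'area' field."""
    return sorted(base_images, key=lambda b: b["area"], reverse=True)[:preset_number]
```

Attenuating the stored weights before fusion gives relatively more influence to the newest capture in the overlapping region.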
18. The fingerprint entry method according to claim 1, characterized in that the method further comprises:
acquiring a comparison image, the comparison image being a fingerprint image to be verified;
acquiring features of the comparison image and features of the base images in an overall fingerprint base database, matching the comparison image against the base images based on the acquired features, and determining from the matching result whether the comparison image passes verification;
wherein the overall fingerprint base database comprises at least one constructed fingerprint base database, and the comparison image matching a base image means that the fingerprint region in the comparison image and the fingerprint region in the base image have an overlapping region; if the comparison image matches any base image, the comparison image is determined to pass verification, otherwise the comparison image is determined not to pass verification.
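A minimal sketch of the verification flow in claim 18, reusing the match_images helper sketched after claims 5 and 6; the overall fingerprint base database is assumed to be a flat iterable of base images drawn from one or more constructed fingerprint base databases.

```python
def verify(comparison_img, overall_base):
    """The comparison image passes verification if its fingerprint region overlaps
    that of any base image in the overall fingerprint base database."""
    for base_img in overall_base:
        matched, _ = match_images(comparison_img, base_img)
        if matched:
            return True
    return False
```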
19. A fingerprint entry device, characterized in that it comprises:
an image acquisition module configured to acquire an input image, the input image being a fingerprint image to be enrolled;
an image matching module configured to acquire features of the input image and features of each base image in a fingerprint base database, and to match the input image against each base image based on the acquired features; wherein the input image matching a base image means that the fingerprint region in the input image and the fingerprint region in the base image have an overlapping region, and a base image that matches the input image is a matching base image;
an image stitching module configured to, when a matching base image exists, stitch the input image and the matching base image based on the overlapping region between the input image and the matching base image to obtain a new base image;
wherein the image stitching module stitching the input image and the matching base image based on the overlapping region between them to obtain a new base image comprises: mapping the input image and the matching base image to a reference coordinate system using the geometric transformation matrix between the input image and the matching base image, wherein the geometric transformation matrix between the input image and any matching base image is calculated from the matching point pairs between the input image and that base image, the matching point pairs being point pairs within the overlapping region of the two images; and stitching the mapped input image and the mapped matching base image to obtain the new base image;
the image stitching module is further configured to, before stitching the mapped input image and the mapped matching base image, obtain a weight matrix of the mapped input image and a weight matrix of the mapped matching base image, each weight matrix comprising the weight of each pixel of its corresponding image; and the image stitching module stitching the mapped input image and the mapped matching base image to obtain the new base image comprises: fusing the mapped input image and the mapped matching base image pixel by pixel based on the obtained weight matrices to obtain the new base image;
the image stitching module obtaining the weight matrix of the mapped input image and the weight matrix of the mapped matching base image comprises: obtaining the weight matrix of the input image and the weight matrix of the matching base image, each weight matrix comprising the weight of each pixel of its corresponding image; and mapping the weight matrix of the input image and the weight matrix of the matching base image to the reference coordinate system using the geometric transformation matrix between the input image and the matching base image, to obtain the weight matrix of the mapped input image and the weight matrix of the mapped matching base image.
20. A computer-readable storage medium, characterized in that computer program instructions are stored on the computer-readable storage medium, and when the computer program instructions are read and executed by a processor, the method according to any one of claims 1 to 18 is performed.
21. An electronic device, characterized in that it comprises a memory and a processor, the memory storing computer program instructions which, when read and executed by the processor, perform the method according to any one of claims 1 to 18.
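A minimal sketch of the module layout in claim 19, wiring together the helpers sketched above (match_images, attenuate, fuse_into_new_base); the three modules mirror the claim, but the storage layout, the single-match simplification, and the externally supplied input weight matrix are assumptions for illustration.

```python
class FingerprintEnroller:
    """Acquisition, matching and stitching modules of the fingerprint entry device."""

    def __init__(self):
        self.bases = []  # the fingerprint base database: dicts of {"image", "weights"}

    def acquire(self, frame):
        # Image acquisition module: the fingerprint image to be enrolled.
        return frame

    def match(self, input_img):
        # Image matching module: every base image whose fingerprint region overlaps the input.
        hits = []
        for base in self.bases:
            matched, H = match_images(input_img, base["image"])
            if matched:
                hits.append((base, H))
        return hits

    def enroll(self, frame, input_weights):
        input_img = self.acquire(frame)
        hits = self.match(input_img)
        if not hits:                                  # empty base or no match: start a new base image
            self.bases.append({"image": input_img, "weights": input_weights})
            return
        base, H = hits[0]                             # single-match case, for brevity
        h, w = base["image"].shape[:2]
        base["weights"] = attenuate(base["weights"])  # decay the stored weights before fusion
        new_img, new_w = fuse_into_new_base(
            input_img, base["image"], input_weights, base["weights"], H, (w, h))
        base["image"], base["weights"] = new_img, new_w  # stitching module: fused result replaces the old base
```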
CN202011054188.XA 2020-09-29 2020-09-29 Fingerprint entry method and device, storage medium and electronic device Active CN112329528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011054188.XA CN112329528B (en) 2020-09-29 2020-09-29 Fingerprint entry method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011054188.XA CN112329528B (en) 2020-09-29 2020-09-29 Fingerprint entry method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN112329528A CN112329528A (en) 2021-02-05
CN112329528B true CN112329528B (en) 2025-03-21

Family

ID=74314416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011054188.XA Active CN112329528B (en) 2020-09-29 2020-09-29 Fingerprint entry method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN112329528B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004744B (en) * 2021-10-15 2023-04-28 深圳市亚略特科技股份有限公司 Fingerprint splicing method and device, electronic equipment and medium
CN119274242A (en) * 2024-01-29 2025-01-07 荣耀终端有限公司 Fingerprint image processing method and terminal device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650569A (en) * 2016-09-05 2017-05-10 北京小米移动软件有限公司 Fingerprint entering method and device
CN107958443A (en) * 2017-12-28 2018-04-24 西安电子科技大学 A kind of fingerprint image joining method based on crestal line feature and TPS deformation models

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9251396B2 (en) * 2013-01-29 2016-02-02 Diamond Fortress Technologies, Inc. Touchless fingerprinting acquisition and processing application for mobile devices
CN105205439B (en) * 2015-02-13 2017-05-03 比亚迪股份有限公司 Method for calculating area of fingerprint overlapping region and electronic device
CN105069750B (en) * 2015-08-11 2019-02-22 电子科技大学 A Method for Determining Optimal Projection Cylinder Radius Based on Image Feature Points
CN105354463B (en) * 2015-09-30 2018-06-15 宇龙计算机通信科技(深圳)有限公司 A kind of fingerprint identification method and mobile terminal
CN106548129A (en) * 2016-09-30 2017-03-29 无锡小天鹅股份有限公司 Fingerprint identification method, device and household electrical appliance
CN110866418B (en) * 2018-08-27 2023-05-09 阿里巴巴集团控股有限公司 Image base generation method, device, equipment, system and storage medium
CN109858464B (en) * 2019-02-26 2021-03-23 北京旷视科技有限公司 Base database data processing method, face recognition method, device and electronic equipment
CN110717429B (en) * 2019-09-27 2023-02-21 联想(北京)有限公司 Information processing method, electronic equipment and computer readable storage medium
CN110807423B (en) * 2019-10-31 2022-04-22 北京迈格威科技有限公司 Method and device for processing fingerprint image under screen and electronic equipment
CN111241314B (en) * 2020-01-13 2024-09-17 天津极豪科技有限公司 Fingerprint base input method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112329528A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
TWI737040B (en) Fingerprint recognition method, chip and electronic device
US8064685B2 (en) 3D object recognition
US9141871B2 (en) Systems, methods, and software implementing affine-invariant feature detection implementing iterative searching of an affine space
JP6393230B2 (en) Object detection method and image search system
US20150371077A1 (en) Fingerprint recognition for low computing power applications
CN112651380B (en) Face recognition method, face recognition device, terminal equipment and storage medium
CN110751024A (en) User identity identification method and device based on handwritten signature and terminal equipment
CN112329528B (en) Fingerprint entry method and device, storage medium and electronic device
CN110738222B (en) Image matching method and device, computer equipment and storage medium
US10127681B2 (en) Systems and methods for point-based image alignment
CN111507288A (en) Image detection method, image detection device, computer equipment and storage medium
JP2009129237A (en) Image processing apparatus and its method
CN113992812A (en) Method and apparatus for activity detection
CN107545215A (en) A kind of fingerprint identification method and device
CN108647640A (en) The method and electronic equipment of recognition of face
Xu et al. Multiscale contour extraction using a level set method in optical satellite images
El-Abed et al. Quality assessment of image-based biometric information
CN114120377B (en) A fingerprint control instrument application method and system for accurate identification of spatial pyramid fingerprints
CN113673477B (en) Palm vein non-contact three-dimensional modeling method, device and authentication method
CN113051901B (en) Identification card text recognition method, system, medium and electronic terminal
CN112446428B (en) An image data processing method and device
Cai et al. An adaptive symmetry detection algorithm based on local features
Svärm et al. Improving robustness for inter-subject medical image registration using a feature-based approach
Arseneau et al. An improved representation of junctions through asymmetric tensor diffusion
Kusban et al. Image enhancement in palmprint recognition: a novel approach for improved biometric authentication

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20250205

Address after: No. 257, 2nd Floor, Building 9, No. 2 Huizhu Road, Liangjiang New District, Yubei District, Chongqing, China 401123

Applicant after: Force Map New (Chongqing) Technology Co.,Ltd.

Country or region after: China

Address before: 316-318, block a, Rongke Information Center, No.2, South Road, Academy of Sciences, Haidian District, Beijing 100090

Applicant before: MEGVII (BEIJING) TECHNOLOGY Co.,Ltd.

Country or region before: China

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant