CN114202778A - Method and system for estimating three-dimensional gesture of finger by planar fingerprint - Google Patents
- Publication number
- CN114202778A CN114202778A CN202111301866.2A CN202111301866A CN114202778A CN 114202778 A CN114202778 A CN 114202778A CN 202111301866 A CN202111301866 A CN 202111301866A CN 114202778 A CN114202778 A CN 114202778A
- Authority
- CN
- China
- Legal status: Granted
Classifications
- G06N3/045 — Combinations of networks (G: Physics › G06: Computing or calculating; counting › G06N: Computing arrangements based on specific computational models › G06N3/00: Computing arrangements based on biological models › G06N3/02: Neural networks › G06N3/04: Architecture, e.g. interconnection topology)
- G06N3/08 — Learning methods (G: Physics › G06: Computing or calculating; counting › G06N: Computing arrangements based on specific computational models › G06N3/00: Computing arrangements based on biological models › G06N3/02: Neural networks)
Abstract
The present application proposes a method and system for estimating the three-dimensional pose of a finger from a planar fingerprint, belonging to the technical field of human-computer interaction. The method collects a planar fingerprint image of the object to be measured; determines, using a pre-trained plane pose estimation model, the planar fingerprint pose corresponding to the planar fingerprint image; and, based on the planar fingerprint pose, determines the complete three-dimensional finger pose matching the object to be measured through parameter learning or statistical modeling. By pre-training an accurate plane pose estimation model and using it as the basis for determining, via parameter learning or statistical modeling, the complete three-dimensional finger pose matching the planar fingerprint pose of the object to be measured, the present application solves the technical problem that existing fingerprint-based three-dimensional finger pose estimation techniques are unsatisfactory in both functionality and convenience.
Description
Technical Field

The present application relates to the technical field of human-computer interaction, and in particular to a method and system for estimating the three-dimensional pose of a finger from a planar fingerprint.
Background

In the field of fingerprint-based three-dimensional finger pose estimation, existing techniques can be divided into the following types according to the modality (acquisition technology) of the input fingerprint:

1) Pose estimation based on capacitive fingerprints. Capacitive fingerprint sensing images the difference in sensor capacitance between contact and non-contact regions of the screen. Because its principle is relatively simple and it is widely used in touch-screen devices, Xiao et al. extracted 42 features from capacitive fingerprints and trained a Gaussian regression model to estimate the pitch and yaw angles, achieving good results on smartphones and smartwatches. However, they could not predict the roll angle, so the resulting fingerprint pose is incomplete; moreover, because such sensors have low resolution and cannot distinguish fingerprint ridges from valleys, the accuracy of the pose estimation is limited.

2) Pose estimation fusing other modal information. One approach obtains the depth information of the fingerprint with an externally mounted depth camera and incorporates prior knowledge to constrain the pitch angle to the range of 0 to 90 degrees; another binds a camera to the subject's fingertip and computes the pitch and yaw angles by detecting changes in the light intensity on the fingernail. These schemes obtain the complete three-dimensional pose of the fingerprint by exploiting modal information beyond the planar fingerprint, but they require additional hardware, which hinders practical application.

3) Pose estimation based on plane-pressed fingerprints. So far, only one scheme estimates the three-dimensional finger pose directly from plane-pressed fingerprints. Holz and Baudisch proposed registering, in a database, planar fingerprint images of a specific finger at various angles together with their corresponding true angles; at test time, the three-dimensional pose of the input fingerprint, including the pitch, roll, and yaw angles, is inferred by finding the database fingerprint most similar to the input. However, this scheme requires pre-building a multi-angle fingerprint database with ground-truth angles for every finger, which is a tedious process and requires technologies for obtaining finger ground truth (such as optical tracking or inertial navigation), making the scheme difficult to deploy in practice.

In summary, existing fingerprint-based three-dimensional finger pose estimation schemes either cannot accurately estimate the complete fingerprint pose, require additional sensor equipment, or require the user to go through a complex fingerprint registration procedure; they are unsatisfactory in both functionality and convenience. Therefore, there is an urgent need for a technique that estimates the three-dimensional fingerprint pose directly from a planar fingerprint image, which would greatly advance interactive applications of fingerprints.
Summary of the Invention

The present application aims to solve, at least to some extent, one of the technical problems in the related art.

To this end, the first objective of the present application is to propose a method for estimating the three-dimensional pose of a finger from a planar fingerprint, so as to solve the technical problem that existing fingerprint-based three-dimensional finger pose estimation techniques are unsatisfactory in both functionality and convenience.

The second objective of the present application is to propose a system for estimating the three-dimensional pose of a finger from a planar fingerprint.
To achieve the above objectives, an embodiment of the first aspect of the present application proposes a method for estimating the three-dimensional pose of a finger from a planar fingerprint, including:

collecting a planar fingerprint image of the object to be measured;

determining, using a pre-trained plane pose estimation model, the planar fingerprint pose corresponding to the planar fingerprint image;

determining, based on the planar fingerprint pose, the complete three-dimensional finger pose matching the object to be measured through parameter learning or statistical modeling.
Optionally, in an embodiment of the present application, the planar fingerprint pose includes the position and the yaw angle of the planar fingerprint image;

the determining of the complete three-dimensional finger pose matching the object to be measured through parameter learning or statistical modeling includes:

determining, through parameter learning or statistical modeling, the three-dimensional pose mapping function corresponding to the planar fingerprint image;

determining the roll angle and the pitch angle of the planar fingerprint image according to the three-dimensional pose mapping function and the position of the planar fingerprint image;

determining the complete three-dimensional pose matching the object to be measured according to the yaw, roll, and pitch angles of the planar fingerprint image.
Optionally, in an embodiment of the present application, the determining, through parameter learning or statistical modeling, of the three-dimensional pose mapping function corresponding to the planar fingerprint image includes:

building a first planar fingerprint image database by machine learning, wherein the first planar fingerprint image database includes feature descriptors;

determining, from the first planar fingerprint image database, a plurality of planar fingerprint images matching the feature descriptor, and determining a first mapping parameter corresponding to each of the plurality of planar fingerprint images;

fusing the plurality of first mapping parameters to obtain a second mapping parameter, and determining the three-dimensional pose mapping function corresponding to the planar fingerprint image according to the second mapping parameter, wherein the second mapping parameter is obtained by computing the mean of the plurality of first mapping parameters, or their weighted mean, or the mean of a preset number of the best-matching first mapping parameters;

or, determining at least one probability model describing the finger through a simulation system and an actually collected data set, and determining the three-dimensional pose mapping function corresponding to the current planar fingerprint image according to the at least one probability model describing the finger.
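The fusion step above names three options for obtaining the second mapping parameter: the plain mean, a similarity-weighted mean, or the mean of a preset number of best-matching first mapping parameters. The sketch below illustrates those three options; it is an illustrative assumption, and `fuse_mapping_params` is a hypothetical helper, not part of the patent.

```python
import numpy as np

def fuse_mapping_params(params, similarities, mode="mean", k=3):
    """Fuse the first mapping parameters of n matched database images
    (shape [n, 7]) into one second-mapping-parameter vector.

    params       : first mapping parameters of the matched images
    similarities : match scores between the query descriptor and each image
    mode         : "mean" | "weighted" | "topk"
    """
    params = np.asarray(params, dtype=float)
    similarities = np.asarray(similarities, dtype=float)
    if mode == "mean":                      # plain average
        return params.mean(axis=0)
    if mode == "weighted":                  # similarity-weighted average
        w = similarities / similarities.sum()
        return (w[:, None] * params).sum(axis=0)
    if mode == "topk":                      # average of the k best matches
        idx = np.argsort(similarities)[::-1][:k]
        return params[idx].mean(axis=0)
    raise ValueError(mode)
```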
Optionally, in an embodiment of the present application, before the planar fingerprint pose corresponding to the planar fingerprint image is determined using the pre-trained plane pose estimation model, the method further includes:

collecting training planar fingerprint images and training three-dimensional angle data corresponding to the training planar fingerprint images;

training a plane pose estimation model according to the training planar fingerprint images and the training three-dimensional angle data, to obtain the pre-trained plane pose estimation model.
Optionally, in an embodiment of the present application, the collecting of training planar fingerprint images and corresponding training three-dimensional angle data includes:

acquiring the training planar fingerprint images with a planar fingerprint scanner;

acquiring the training three-dimensional angle data corresponding to the training planar fingerprint images using two three-axis gyroscopes, or using an optical tracking measurement system, or using the simulation system together with a data generator; wherein

the training three-dimensional angle data includes ground-truth pitch, yaw, and roll angles; the ground-truth pitch angle is no greater than 20° and no less than -80°, the ground-truth yaw angle no greater than 90° and no less than -90°, and the ground-truth roll angle no greater than 75° and no less than -75°.
Optionally, in an embodiment of the present application, after the collecting of training planar fingerprint images and corresponding training three-dimensional angle data, the method further includes:

storing the training planar fingerprint images in a second planar fingerprint image database;

obtaining, according to preset conditions, a plurality of training planar fingerprint images from the second planar fingerprint image database, together with the feature descriptor and the first mapping parameter of each of the plurality of training planar fingerprint images;

storing the plurality of training planar fingerprint images, the first mapping parameters, and the feature descriptors of the plurality of training planar fingerprint images in the first planar fingerprint image database.
Optionally, in an embodiment of the present application, the training of the plane pose estimation model according to the training planar fingerprint images and the training three-dimensional angle data includes:

obtaining, by function fitting, the first mapping parameters of each training planar fingerprint image in the first planar fingerprint image database, where the first mapping parameters include pitch-angle mapping parameters and roll-angle mapping parameters.
Optionally, in an embodiment of the present application,

the pitch-angle mapping parameters of each training planar fingerprint image in the first planar fingerprint image database are determined by the following formula:

f_pitch(x, y) = b₁x² + b₂y² + b₃x + b₄y + b₅ln(x) + b₆ln(y) + b₇

where (x, y) is the position of the fingerprint, f_pitch(x, y) is the ground-truth pitch angle of the fingerprint, and b₁, …, b₇ are the pitch-angle mapping parameters corresponding to the planar fingerprint image;

the roll-angle mapping parameters of each training planar fingerprint image in the first planar fingerprint image database are determined by the following formula:

f_roll(x, y) = a₁x² + a₂y² + a₃x + a₄y + a₅ln(x) + a₆ln(y) + a₇

where (x, y) is the position of the fingerprint, f_roll(x, y) is the ground-truth roll angle of the fingerprint, and a₁, …, a₇ are the roll-angle mapping parameters corresponding to the planar fingerprint image.
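Both mappings above are linear in their seven parameters, so each image's mapping parameters can be recovered from that image's (x, y, angle) samples by ordinary least squares. The text only specifies "function fitting"; the least-squares sketch below is one plausible reading of that step (note that the ln(x) and ln(y) terms require positive coordinates):

```python
import numpy as np

def design_matrix(x, y):
    # Columns match the mapping's terms: x^2, y^2, x, y, ln(x), ln(y), 1
    return np.column_stack([x**2, y**2, x, y, np.log(x), np.log(y),
                            np.ones_like(x)])

def fit_pitch_mapping(x, y, pitch_true):
    """Least-squares estimate of b1..b7 from observed fingerprint positions
    and ground-truth pitch angles (x and y must be positive for ln).
    The same routine fits a1..a7 when given roll angles instead."""
    A = design_matrix(x, y)
    b, *_ = np.linalg.lstsq(A, pitch_true, rcond=None)
    return b

def eval_pitch_mapping(b, x, y):
    # f_pitch(x, y) evaluated with fitted parameters b
    return design_matrix(x, y) @ b
```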
Optionally, in an embodiment of the present application, the plane pose estimation model is trained with the following loss:

L_all = L_pos + λ·L_yaw

L_yaw = ‖θ − θ′‖²

where L_all is the total loss value, L_pos is the loss for position prediction (computed from the ground-truth fingerprint position and the position predicted by the plane pose estimation model), L_yaw is the yaw-angle loss, λ is a weighting coefficient, θ is the ground-truth yaw angle of the fingerprint, and θ′ is the yaw angle predicted by the plane pose estimation model.
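A minimal sketch of the combined loss above; the exact form of L_pos is not given in the text, so the squared-distance choice here is an assumption:

```python
import numpy as np

def plane_pose_loss(pos_true, pos_pred, yaw_true, yaw_pred, lam=1.0):
    """L_all = L_pos + lambda * L_yaw, with L_yaw = ||theta - theta'||^2.
    The squared-distance form of L_pos is an illustrative assumption."""
    l_pos = np.sum((np.asarray(pos_true, float)
                    - np.asarray(pos_pred, float)) ** 2)
    l_yaw = (yaw_true - yaw_pred) ** 2
    return l_pos + lam * l_yaw
```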
In summary, the method proposed in the embodiment of the first aspect of the present application collects a planar fingerprint image of the object to be measured; determines, using a pre-trained plane pose estimation model, the planar fingerprint pose corresponding to the planar fingerprint image; and, based on the planar fingerprint pose, determines the complete three-dimensional finger pose matching the object to be measured through parameter learning or statistical modeling. By pre-training an accurate plane pose estimation model and, on that basis, determining by mapping or fitting the complete three-dimensional finger pose matching the planar fingerprint pose of the object to be measured, the present application solves the technical problem that existing fingerprint-based three-dimensional finger pose estimation techniques are unsatisfactory in both functionality and convenience.
To achieve the above objectives, an embodiment of the second aspect of the present application proposes a system for estimating the three-dimensional pose of a finger from a planar fingerprint, including:

an acquisition module, configured to collect a planar fingerprint image of the object to be measured;

a plane pose estimation module, configured to determine, using a pre-trained plane pose estimation model, the planar fingerprint pose corresponding to the planar fingerprint image;

a determination module, configured to determine, based on the planar fingerprint pose, the complete three-dimensional finger pose matching the object to be measured through parameter learning or statistical modeling.
In summary, in the system proposed in the embodiment of the second aspect of the present application, the acquisition module collects a planar fingerprint image of the object to be measured; the plane pose estimation module determines, using a pre-trained plane pose estimation model, the planar fingerprint pose corresponding to the planar fingerprint image; and the determination module determines, based on the planar fingerprint pose, the complete three-dimensional finger pose matching the object to be measured through parameter learning or statistical modeling. By pre-training an accurate plane pose estimation model and, on that basis, determining by mapping or fitting the complete three-dimensional finger pose matching the planar fingerprint pose of the object to be measured, the present application solves the technical problem that existing fingerprint-based three-dimensional finger pose estimation techniques are unsatisfactory in both functionality and convenience.
Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of the present application.
Brief Description of the Drawings

The above and/or additional aspects and advantages of the present application will become apparent and easy to understand from the following description of embodiments taken in conjunction with the accompanying drawings, in which:

Fig. 1 is a schematic diagram of the three pose angles of a finger pressing a screen, according to an embodiment of the present application;

Fig. 2 is a flowchart of a method for estimating the three-dimensional pose of a finger from a planar fingerprint, according to an embodiment of the present application;

Fig. 3 is a schematic diagram of data collection, according to an embodiment of the present application;

Fig. 4 is a schematic diagram of three-dimensional finger pose fitting, according to an embodiment of the present application;

Fig. 5 is a schematic diagram of the deep neural network structure, according to an embodiment of the present application;

Fig. 6 is a schematic flowchart of a method for estimating the three-dimensional pose of a finger from a planar fingerprint, according to an embodiment of the present application;

Fig. 7 is a schematic structural diagram of a system for estimating the three-dimensional pose of a finger from a planar fingerprint, according to an embodiment of the present application.
Detailed Description

The embodiments of the present application are described in detail below; examples of the embodiments are illustrated in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements, or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended only to explain the present application; they should not be construed as limiting it. On the contrary, the embodiments of the present application cover all changes, modifications, and equivalents falling within the spirit and scope of the appended claims.
In the field of human-computer interaction, touch-screen input, as a simple and fast interaction method, is favored by a large number of smart mobile devices. It is no exaggeration to say that the popularization of smart devices is inseparable not only from advances in chip manufacturing and mobile operating systems, but also from the iterative improvement of touch-screen interaction. Conventional smart devices can sense the region pressed by the user's finger and give corresponding feedback; for example, to enlarge, rotate, or zoom an image, the user needs at least two fingers to complete the operation. If the angles of the finger touching the screen could be accurately estimated as additional input, the user experience would be greatly improved, the logic of some operations would be simplified, the application scenarios of fingerprint input would be broadened, and more interesting forms of interaction would become possible. Moreover, if the three-dimensional pose could be estimated directly from the planar fingerprint, no additional hardware upgrade burden would be placed on smart devices such as mobile phones.
The three pose angles of a finger pressing a screen are shown in Fig. 1. Based on the roll, pitch, and yaw angles, the user can be asked to tilt the finger toward a specific pose, and whether the current fingerprint is a forgery can be judged by checking whether the user's response meets a certain angle threshold; on this basis, smart-device system developers can enhance the security and reliability of fingerprint recognition systems. In addition, the three finger pose angles provide extra information on top of the existing interaction input, enabling more combined gestures, enriching touch-screen input, making interaction more engaging, and narrowing the gap between touch-screen devices and traditional physical keyboards while remaining simple and easy to use.
Embodiment 1
Fig. 2 is a flowchart of a method for estimating the three-dimensional pose of a finger from a planar fingerprint, according to an embodiment of the present application.

As shown in Fig. 2, the method for estimating the three-dimensional pose of a finger from a planar fingerprint provided by an embodiment of the present application includes the following steps:

Step 210: collect a planar fingerprint image of the object to be measured;

Step 220: determine, using a pre-trained plane pose estimation model, the planar fingerprint pose corresponding to the planar fingerprint image;

Step 230: determine, based on the planar fingerprint pose, the complete three-dimensional finger pose matching the object to be measured through parameter learning or statistical modeling.
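Steps 210 to 230 can be sketched as a small pipeline. The callables below are hypothetical stand-ins for the pre-trained plane pose estimation model and the learned three-dimensional pose mapping function, not the patent's actual components:

```python
def estimate_finger_pose_3d(image, plane_model, pose_mapping):
    """Steps 210-230: planar fingerprint image -> (yaw, roll, pitch).

    plane_model  : callable image -> ((x, y), yaw)
                   (stand-in for the pre-trained plane pose model)
    pose_mapping : callable (x, y) -> (roll, pitch)
                   (stand-in for the 3-D pose mapping function)
    """
    (x, y), yaw = plane_model(image)      # step 220: planar pose
    roll, pitch = pose_mapping(x, y)      # step 230: complete the 3-D pose
    return {"yaw": yaw, "roll": roll, "pitch": pitch}
```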
In the embodiment of the present application, the planar fingerprint pose includes the position and the yaw angle of the planar fingerprint image.

Determining the complete three-dimensional finger pose matching the object to be measured through parameter learning or statistical modeling includes:

determining, through parameter learning or statistical modeling, the three-dimensional pose mapping function corresponding to the planar fingerprint image;

determining the roll angle and the pitch angle of the planar fingerprint image according to the three-dimensional pose mapping function and the position of the planar fingerprint image;

determining the complete three-dimensional pose matching the object to be measured according to the yaw, roll, and pitch angles of the planar fingerprint image.
In the embodiment of the present application, determining the three-dimensional pose mapping function corresponding to the planar fingerprint image through parameter learning or statistical modeling includes:

building a first planar fingerprint image database by machine learning, wherein the first planar fingerprint image database includes feature descriptors;

determining, from the first planar fingerprint image database, a plurality of planar fingerprint images matching the feature descriptor, and determining a first mapping parameter corresponding to each of the plurality of planar fingerprint images;

fusing the plurality of first mapping parameters to obtain a second mapping parameter, and determining the three-dimensional pose mapping function corresponding to the planar fingerprint image according to the second mapping parameter, wherein the second mapping parameter is obtained by computing the mean of the plurality of first mapping parameters, or their weighted mean, or the mean of a preset number of the best-matching first mapping parameters;

or, determining at least one probability model describing the finger through a simulation system and an actually collected data set, and determining the three-dimensional pose mapping function corresponding to the current planar fingerprint image according to the at least one probability model describing the finger.
Specifically, when the three-dimensional pose mapping function corresponding to the planar fingerprint image is determined by statistical modeling, at least one probability model describing the finger is determined through the simulation system and the actually collected data set; several candidate spatial transformations are applied to the probability model describing the finger; the position and pose of the projected planar fingerprint image are compared with the observed ones; the spatial transformation corresponding to the closest probability model describing the finger is taken; and the three-dimensional pose mapping function corresponding to the current planar fingerprint image is computed from that closest model.
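On one reading, the statistical-modeling variant described above amounts to a search over candidate spatial transformations of a probabilistic finger model, keeping the transformation whose planar projection best matches the observation. A toy sketch under that reading (the `project` callable and the candidate set are invented placeholders, not the patent's actual model):

```python
import numpy as np

def closest_transform(observed_pos, observed_yaw, candidates, project):
    """Pick the candidate spatial transformation whose projected planar
    pose (position + yaw) is closest to the observed one.

    candidates : iterable of transformation parameters
    project    : callable transformation -> ((x, y), yaw), a stand-in
                 for projecting the probabilistic finger model
    """
    best, best_err = None, np.inf
    for t in candidates:
        (x, y), yaw = project(t)
        err = ((x - observed_pos[0]) ** 2
               + (y - observed_pos[1]) ** 2
               + (yaw - observed_yaw) ** 2)
        if err < best_err:
            best, best_err = t, err
    return best
```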
In the embodiment of the present application, before the planar fingerprint pose corresponding to the planar fingerprint image is determined using the pre-trained plane pose estimation model, the method further includes:

collecting training planar fingerprint images and training three-dimensional angle data corresponding to the training planar fingerprint images;

training a plane pose estimation model according to the training planar fingerprint images and the training three-dimensional angle data, to obtain the pre-trained plane pose estimation model.
In the embodiment of the present application, collecting training planar fingerprint images and corresponding training three-dimensional angle data includes:

acquiring the training planar fingerprint images with a planar fingerprint scanner;

acquiring the training three-dimensional angle data corresponding to the training planar fingerprint images using two three-axis gyroscopes, or using an optical tracking measurement system, or using the simulation system and a data generator; wherein

the training three-dimensional angle data includes ground-truth pitch, yaw, and roll angles; the ground-truth pitch angle is no greater than 20° and no less than -80°, the ground-truth yaw angle no greater than 90° and no less than -90°, and the ground-truth roll angle no greater than 75° and no less than -75°.
Specifically, the accuracy of the planar pose estimation model depends on the richness of the collected samples, so data collection is a crucial part of this application. The following three data collection schemes are adopted:
In the first scheme, a planar fingerprint scanner captures planar fingerprint images while two three-axis gyroscopes read the three-dimensional angles, as shown in Figure 3. During collection, one gyroscope is fixed to the fingerprint sensor and the other to the finger being captured. The operator rolls the gyroscope-mounted finger across the fingerprint sensor from one side to the other while a control program drives the scanner to capture planar fingerprint images at 50 Hz; the gyroscopes are read out synchronously, and the three-dimensional angle data for each planar fingerprint image are obtained as the difference between the two gyroscope readings.
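The per-axis differencing of the two gyroscope readings can be sketched as follows (a minimal illustration; the function name and the wrap-around convention are assumptions, not from the patent):

```python
import numpy as np

def relative_angles(finger_deg, sensor_deg):
    """Three-dimensional angle of the finger relative to the fingerprint
    sensor, computed as the per-axis difference of the two three-axis
    gyroscope readings (degrees), wrapped to [-180, 180)."""
    diff = np.asarray(finger_deg, dtype=float) - np.asarray(sensor_deg, dtype=float)
    return (diff + 180.0) % 360.0 - 180.0
```

Subtracting the sensor-mounted gyroscope cancels any motion of the scanner itself, so only the finger's orientation relative to the sensing plane remains.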
In the second scheme, an optical tracking measurement system is combined with the fingerprint scanner to acquire the three-dimensional angle data and the corresponding planar fingerprint images synchronously.

In the third scheme, a three-dimensional fingerprint database is constructed in advance and used to synthesize planar fingerprint images under arbitrary three-dimensional angles.

Further, to ensure the samples are sufficiently rich, no fewer than 500 fingerprint sequences and no fewer than 40,000 planar fingerprint images are collected, and the three-dimensional angle data corresponding to each planar fingerprint image are recorded.
Specifically, data collection only needs to be performed once: the method for estimating the three-dimensional finger pose from a planar fingerprint provided in this embodiment then applies to any finger of any person. The present application therefore requires no finger enrollment before estimating the three-dimensional finger pose from a planar fingerprint, which is the essential difference from the method proposed by (Holz and Baudisch).
In an embodiment of the present application, after collecting the training planar fingerprint images and the corresponding training three-dimensional angle data, the method further includes:

storing the training planar fingerprint images in a second planar fingerprint image database;

obtaining, according to preset conditions, a plurality of training planar fingerprint images from the second planar fingerprint image database, together with the feature descriptor and first mapping parameters of each of the plurality of training planar fingerprint images;

storing the plurality of training planar fingerprint images, the first mapping parameters, and the feature descriptors of the plurality of training planar fingerprint images in a first planar fingerprint image database.
In an embodiment of the present application, training the planar pose estimation model on the training planar fingerprint images and the training three-dimensional angle data includes:

obtaining, by function fitting, the first mapping parameters of each training planar fingerprint image in the first planar fingerprint image database, the first mapping parameters including pitch angle mapping parameters and roll angle mapping parameters.
Specifically, the three-dimensional finger pose fitting is shown in Figure 4: the contact fingerprint image first obtains its relative position within the rolled fingerprint through the fingerprint registration algorithm, i.e., the planar pose estimation model; a three-dimensional finger surface is then fitted from the fingerprint images with known poses in the second planar fingerprint image database, i.e., the training database; and finally the three-dimensional finger angles are obtained.
In an embodiment of the present application, the pitch angle mapping parameters of each training planar fingerprint image in the first planar fingerprint image database are determined by the following formula:

f_pitch(x, y) = b₁x² + b₂y² + b₃x + b₄y + b₅ ln(x) + b₆ ln(y) + b₇

where (x, y) is the fingerprint position, f_pitch(x, y) is the ground-truth pitch angle of the fingerprint, and b₁, …, b₇ are the pitch angle mapping parameters corresponding to the planar fingerprint image;
The roll angle mapping parameters of each training planar fingerprint image in the first planar fingerprint image database are determined by the following formula:

f_roll(x, y) = a₁x² + a₂y² + a₃x + a₄y + a₅ ln(x) + a₆ ln(y) + a₇

where (x, y) is the fingerprint position, f_roll(x, y) is the ground-truth roll angle of the fingerprint, and a₁, …, a₇ are the roll angle mapping parameters corresponding to the planar fingerprint image.
Specifically, the three-dimensional finger angles directly affect the horizontal and vertical position of the planar fingerprint image; that is, the first planar fingerprint image database constructed in this application makes it possible to infer the mapping from planar fingerprint position to three-dimensional finger pose. Different finger surface models will change the specific form of the mapping function, but the overall idea and implementation process remain the same.
In an embodiment of the present application, the method further includes training the planar pose estimation model with the following loss:

L_all = L_pos + λ·L_yaw

L_yaw = ‖θ − θ′‖²

where L_all is the total loss value, L_pos is the position prediction loss, L_yaw is the yaw angle loss, and λ is a weighting coefficient; L_pos compares the ground-truth fingerprint position with the position predicted by the planar pose estimation model, and in L_yaw, θ is the ground-truth yaw (deflection) angle of the fingerprint and θ′ is the yaw angle predicted by the model.
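A minimal numerical sketch of this loss (the squared-error form of L_pos is an assumption, since the patent only specifies L_yaw explicitly):

```python
import numpy as np

def pose_loss(pos_true, pos_pred, yaw_true, yaw_pred, lam=1.0):
    """L_all = L_pos + lam * L_yaw, with L_yaw = ||theta - theta'||^2.
    L_pos is taken here as squared error on the predicted position."""
    l_pos = float(np.sum((np.asarray(pos_true) - np.asarray(pos_pred)) ** 2))
    l_yaw = float(np.sum((np.asarray(yaw_true) - np.asarray(yaw_pred)) ** 2))
    return l_pos + lam * l_yaw
```

The weighting coefficient λ (`lam`) balances how strongly yaw errors count against position errors during training; its value is a tuning choice not fixed by the patent.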
Specifically, the planar pose estimation model is a deep neural network whose structure is shown in Figure 5. To make full use of the information in the planar fingerprint image and improve the accuracy of pose prediction, the network is divided by function into the following three modules:

a feature extraction backbone, based on the backbone network of (Yin et al.), currently among the most expressive backbone frameworks, with some modifications; it makes full use of multi-scale fusion techniques and outputs a fixed-dimensional feature descriptor;

an attention module, which combines and modifies the latest attention mechanism of (Yin et al.); its output serves as a mask that enhances the foreground regions in the output of the feature extraction module, so that the network attends to the foreground of the fingerprint image and ignores uninformative background;

a three-dimensional angle prediction module, which outputs the network's predicted fingerprint position and predicted yaw (deflection) angle of the fingerprint.
Further, to improve the accuracy and generalization of the deep neural network, representative planar fingerprint images are selected from the first planar fingerprint image database and placed in the second planar fingerprint image database. No fewer than 400 fingerprint sequences and no fewer than 25,000 planar fingerprint images are selected from the second database to train the network, the training data amounting to no less than 70% of the data in the second database. In addition, no fewer than 4,000 fingerprint images are set aside as a validation set to tune the training hyperparameters, further improving the network's generalization and robustness.
Specifically, Figure 6 is a schematic flowchart of a method for estimating the three-dimensional finger pose from a planar fingerprint according to an embodiment of the present application, comprising the following four steps:

Step 210: collect fingerprints and store the sampled data in the first planar fingerprint image database;

Step 220: select representative samples from the first planar fingerprint image database and place them in the second planar fingerprint image database; train the deep neural network on the samples in the second database, using the loss function;

Step 230: fit the planar fingerprint image positions produced by the deep neural network together with the sampled roll and pitch angle data to obtain the fitting parameters that map a planar fingerprint image to the three-dimensional finger pose, and store these fitting parameters in the first planar fingerprint image database;

Step 240 (test phase): use the deep neural network to obtain the depth descriptor, yaw angle, and position data of the test image; perform a nearest-neighbor search over the depth descriptors to obtain the mapping parameters corresponding to the test image; and compute the roll and pitch angles of the test image from these mapping parameters and the position data.
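The test phase of Step 240 can be sketched as a nearest-neighbor lookup over the stored descriptors followed by evaluation of the retrieved mapping functions at the predicted position (an illustration only; the Euclidean descriptor metric and all names are assumptions):

```python
import numpy as np

def estimate_pitch_roll(test_desc, test_xy, db_descs, db_pitch_params, db_roll_params):
    """Return (pitch, roll) for a test image: find the database entry whose
    descriptor is nearest to `test_desc`, then evaluate its fitted pitch and
    roll mapping functions at the network-predicted position (x, y)."""
    nn = int(np.argmin(np.linalg.norm(db_descs - test_desc, axis=1)))
    x, y = float(test_xy[0]), float(test_xy[1])
    # Basis of the quadratic-plus-log mapping functions defined above
    basis = np.array([x**2, y**2, x, y, np.log(x), np.log(y), 1.0])
    return float(basis @ db_pitch_params[nn]), float(basis @ db_roll_params[nn])
```

The yaw angle comes directly from the network's prediction, so only pitch and roll are reconstructed through the stored mapping parameters.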
In summary, the method proposed in the embodiments of the present application collects a planar fingerprint image of the object to be measured; determines the planar fingerprint pose corresponding to the planar fingerprint image using a pre-trained planar pose estimation model; and, based on the planar fingerprint pose, determines the complete three-dimensional finger pose matching the object to be measured by parameter learning or statistical modeling. By pre-training an accurate planar pose estimation model and, on that basis, determining by mapping or fitting the complete three-dimensional finger pose matching the planar fingerprint pose of the object to be measured, the present application solves the technical problem that current fingerprint-based three-dimensional finger pose estimation techniques are unsatisfactory in both functionality and convenience.
To implement the above embodiments, the present application further proposes a system for estimating the three-dimensional finger pose from a planar fingerprint.

Figure 7 is a schematic structural diagram of a system for estimating the three-dimensional finger pose from a planar fingerprint provided by an embodiment of the present application.

As shown in Figure 7, a system for estimating the three-dimensional finger pose from a planar fingerprint includes:
an acquisition module 710, configured to collect a planar fingerprint image of the object to be measured;

a planar pose estimation module 720, configured to determine the planar fingerprint pose corresponding to the planar fingerprint image using a pre-trained planar pose estimation model;

a determination module 740, configured to determine, based on the planar fingerprint pose, the complete three-dimensional finger pose matching the object to be measured by parameter learning or statistical modeling.
In summary, in the system proposed in the embodiments of the present application, the acquisition module collects a planar fingerprint image of the object to be measured; the planar pose estimation module determines the planar fingerprint pose corresponding to the planar fingerprint image using a pre-trained planar pose estimation model; and the determination module, based on the planar fingerprint pose, determines the complete three-dimensional finger pose matching the object to be measured by parameter learning or statistical modeling. By pre-training an accurate planar pose estimation model and determining, by mapping or fitting on that basis, the complete three-dimensional finger pose matching the planar fingerprint pose of the object to be measured, the present application solves the technical problem that current fingerprint-based three-dimensional finger pose estimation techniques are unsatisfactory in both functionality and convenience.
It should be noted that, in the description of the present application, the terms "first", "second", and the like are used for descriptive purposes only and should not be construed as indicating or implying relative importance. In addition, in the description of the present application, unless otherwise specified, "a plurality of" means two or more.
Any description of a process or method in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing particular logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes alternative implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
It should be understood that the various parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following techniques known in the art, or a combination thereof, may be used: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and so on.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be carried out by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.

In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist physically on its own, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111301866.2A CN114202778B (en) | 2021-11-04 | 2021-11-04 | Method and system for estimating three-dimensional gesture of finger by planar fingerprint |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114202778A true CN114202778A (en) | 2022-03-18 |
| CN114202778B CN114202778B (en) | 2024-07-02 |
Family
ID=80646831
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111301866.2A Active CN114202778B (en) | 2021-11-04 | 2021-11-04 | Method and system for estimating three-dimensional gesture of finger by planar fingerprint |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114202778B (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116229571A (en) * | 2023-02-22 | 2023-06-06 | 清华大学 | Finger pose estimation method and device based on touch screen |
| CN119418373A (en) * | 2024-09-24 | 2025-02-11 | 清华大学 | A method for synchronously collecting contact finger images and finger state true values |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103902970A (en) * | 2014-03-03 | 2014-07-02 | 清华大学 | Automatic fingerprint gesture estimation method and system |
| CN104361315A (en) * | 2014-10-27 | 2015-02-18 | 浙江工业大学 | 3D (three-dimensional) fingerprint recognition device based on monocular and multi-view stereoscopic machine vision |
| US10102629B1 (en) * | 2015-09-10 | 2018-10-16 | X Development Llc | Defining and/or applying a planar model for object detection and/or pose estimation |
| CN109934847A (en) * | 2019-03-06 | 2019-06-25 | 视辰信息科技(上海)有限公司 | The method and apparatus of weak texture three-dimension object Attitude estimation |
| CN112232155A (en) * | 2020-09-30 | 2021-01-15 | 墨奇科技(北京)有限公司 | Non-contact fingerprint identification method and device, terminal and storage medium |
| CN113569638A (en) * | 2021-06-24 | 2021-10-29 | 清华大学 | Method and device for estimating three-dimensional gesture of finger by planar fingerprint |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114202778B (en) | 2024-07-02 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant |





















