Handwritten Signature Verification
Electrical Engineering
September 2017
Shashidhar Sanda
Sravya Amirisetti
Contact Information:
Author(s):
Shashidhar Sanda
E-mail: shsa16@student.bth.se
Sravya Amirisetti
E-mail: sram16@student.bth.se
Thesis Supervisor:
Dr. Josef Ström Bartunek
Dept. Applied Signal Processing
E-mail: josef.strombartunek@bth.se
University Examiner:
Dr. Sven Johansson
Dept. Applied Signal Processing
E-mail: sven.johansson@bth.se
Last but not least, we would like to thank our friends and our family: our
parents, brothers, and sisters, for supporting us throughout the writing of
this thesis and in our lives in general.
Contents

Abstract
Acknowledgments
1 Introduction
  1.1 Motivation
  1.2 Problem Statement
  1.3 Problem Solution
  1.4 Aim and Objectives
  1.5 Research Questions
  1.6 Outline of Thesis
2 Literature Study
3 Background
  3.1 Introduction
  3.2 Signature Database
  3.3 Preprocessing
  3.4 Feature Extraction
  3.5 Verification
4 Proposed Method
  4.1 Pre-processing
  4.2 Feature Extraction
    4.2.1 First order difference of basic features
    4.2.2 Second order difference of spatial co-ordinates
    4.2.3 Sine and Cosine Measures
    4.2.4 Length based features
  4.3 GMM
    4.3.1 Maximum Likelihood Parameter Estimation
    4.3.2 EM Algorithm
  4.4 LCSS
    4.4.1 Basic Similarity Measure for Time Series
    4.4.2 Multivariate Time Series
  4.5 DTW
  4.6 LCSS vs DTW
5 Performance Evaluation
  5.1 Setup of the Evaluation
  5.2 Performance Evaluation Factors
6 Results and Discussion
  6.1 Results
  6.2 Discussion
7 Conclusion and Future Work
  7.1 Conclusion
  7.2 Future Work
References
List of Figures

6.1 Plot showing FAR curves for both combinations GMM-LCSS and GMM-DTW
6.2 Plot showing FRR curves for both combinations GMM-LCSS and GMM-DTW
6.3 Plot showing FAR and FRR curves for combination GMM-LCSS
6.4 Plot showing FAR and FRR curves for combination GMM-DTW
6.5 Plot showing ROC curve for combination GMM-LCSS
6.6 Plot showing ROC curve for combination GMM-DTW
6.7 Plot showing the comparison of ROC curves for both combinations GMM-LCSS and GMM-DTW
List of Tables

6.1 FAR values for different thresholds for both combinations GMM-LCSS and GMM-DTW
6.2 FRR values for different thresholds for both combinations GMM-LCSS and GMM-DTW
List of Abbreviations

DTW Dynamic Time Warping
EER Equal Error Rate
EM Expectation Maximization
FAR False Acceptance Rate
FN False Negative
FP False Positive
FRR False Rejection Rate
GMM Gaussian Mixture Model
HSV Handwritten Signature Verification
LCSS Longest Common Sub-Sequence
MAP Maximum A Posteriori
NN Neural Networks
ROC Receiver Operating Characteristic
SVM Support Vector Machine
TN True Negative
TP True Positive
UBM Universal Background Model
Chapter 1
Introduction
1.1 Motivation
The signature is the most socially and legally accepted means of person
authentication and is therefore a modality confronted with high-level attacks.
Signature verification plays an important role in the detection of forged
signatures and in biometric applications. Biometrics measures an individual's
unique physical or behavioral characteristics with the aim of recognizing or
authenticating identity. Physical biometric attributes include the iris, hand
geometry, the face and fingerprints. Among these, the iris and fingerprints do
not change over time and thus have very small intra-class variation, but they
require special and relatively expensive hardware to capture the biometric
image. Behavioral biometric attributes include the signature, voice, keystroke
pattern, and gait [1]. The most developed among these are the signature and
voice technologies.
The handwritten signature is a well-known biometric attribute. An important
advantage of the handwritten signature over other identity verification
technologies is that it can only be applied when the person is conscious and
willing to sign, unlike fingerprints, which can be taken even when the person
is unconscious [2]. HSV is classified into two types: online and offline. In
offline signature verification, a document bearing the signature is scanned to
obtain a digitized image representation. Online signature verification uses
special hardware, such as a digitizing tablet or a pressure-sensitive pen, so
that both the shape and the dynamics of writing are captured [3].
Dynamic properties, such as the overall speed of the signature and the pen
pressure at each point, make the signature more unique and more difficult to
forge [4].
In an online signature verification system (figure 1.1), users are first
enrolled by providing signature samples (reference signatures). When a user
presents a signature (test signature) claiming to be a particular individual,
this test signature is compared with the reference signatures of that
individual. If the dissimilarity is above a certain threshold, the user is
rejected. During verification, the test signature is compared to all the
signatures in the reference set, resulting in several distance values [5]. One
must choose a method to combine these distance values into a single value
representing the dissimilarity of the test signature to the reference set, and
compare it to a threshold to decide. The single dissimilarity value can be
obtained from the minimum, maximum or average of all the distance values;
typically, a verification system chooses one of these and discards the others.
In evaluating the performance of a signature verification system, there are two
important factors: the FRR of genuine signatures and the FAR of forgery
signatures. As these two errors are inversely related, the EER, where FAR
equals FRR, is often reported [6].
The maximum likelihood estimation technique finds the parameters that maximize
the joint likelihood of the data, which are assumed to be independent and
identically distributed. The Gaussian mixture captures the underlying
statistical variability of the point-based features used to describe the online
trace of the signatures. The LCSS detection algorithm then measures the
similarity of the signature time series. A threshold value is set, and a
decision on whether the signature is genuine or a forgery is made by comparing
the test signature values with the database signature values. To evaluate the
performance of LCSS, it is compared with the most widely used technique, DTW.
4. Determining FAR, FRR, EER and ROC curves to assess the performance of
signature verification.
1. What are the former methods used for online handwritten signature verifi-
cation?
Chapter 6: In this chapter the results obtained and answers to the research
questions are discussed in detail.
Chapter 2
Literature Study
Abhishek Sharma and Suresh Sundaram have proposed a new model-based approach,
integrating a GMM into the DTW framework to verify online signatures [10].
First, they extracted writer-dependent statistical characteristics for
signature matching. Then the characteristics of a warping path are analyzed to
derive a warping-path-based feature that is useful for verification. Finally,
the proposed warping-path-based feature is fused with the normalized DTW score
to enhance the verification performance of the DTW-based system. This new
model-based method has been demonstrated successfully on signature data from
the available MCYT database and is the first to use features derived from a
GMM in a DTW matching algorithm for improved verification of online signatures
[11] [12].
Zapata et al. [13] addressed the problem of training online signature
verification systems when the number of training samples is small, i.e., when
the number of available signatures per user is limited. Nine different
classification strategies based on the GMM and the UBM are evaluated. These
models are designed to work under small-sample-size conditions and are tested
in three different experiments. The performance of these methods degraded
faster when the training set included less than 50% of the samples (around 12
signatures per user). The decision was made by estimating the likelihood ratio
and comparing it with the EER decision threshold. The accuracy obtained by the
GMM-SVM models is considerably better than that of the GMM-UBM models when the
available training subset is at least 50% of the whole database.
Chapter 3
Background

3.1 Introduction
Online handwritten signature verification is the process of testing whether a
signature is genuine or a forgery. A signature can easily be forged. Forgeries
of signatures are classified into three types: simple, random and skilled
forgery [15] [16].
Simple forgery: the forger has no idea what the signature to be forged looks
like. This is the easiest type of forgery to detect because it is usually not
close to the appearance of a genuine signature. This type of forgery will
sometimes allow an examiner to identify who made the forgery, based on the
handwriting habits present in the forged signature.
Skilled forgery: the forger has a sample of the signature to be forged. The
quality of a simulation depends on how much the forger practices before
attempting the actual forgery, the ability of the forger, and the forger's
attention to detail in simulating the signature. A skilled forgery looks more
like the genuine signature, and the problem of signature verification becomes
more and more difficult when passing from simple to skilled forgery.
Currently, there is a growing demand for individual identification to be
faster and more accurate; therefore, the design of a signature verification
system becomes an important challenge.
3.3 Preprocessing
Preprocessing of online signatures is commonly done to remove variations that
are thought to be irrelevant to verification performance. Re-sampling and size
and rotation normalization are among the common preprocessing steps. In the
preprocessing phase, the signature undergoes some enhancement before features
are extracted. The signature images require some manipulation before the
application of any recognition technique: this process prepares the image,
improves its quality, eliminates irrelevant information, enhances the
selection of the important features for recognition, and improves the
robustness of the features to be extracted. Moreover, preprocessing steps are
performed to reduce noise in the input images and to remove most of the
variability of the handwriting [15].
For online signatures, some important preprocessing algorithms are filtering,
noise reduction, and smoothing. There are also other preprocessing steps, such
as handling pen-up durations, drift and mean removal, time normalization and
stroke concatenation before feature extraction.
To compare the spatial properties of a signature, time dependencies must be
eliminated from the representation. Certain points in the signature, such as
the start and end points of a stroke and the points where the trajectory
changes, carry important information. These points are the critical points;
they are extracted and retained throughout the process [16].
3.4 Feature Extraction

Among the basic features extracted at each sample point are the velocity in x,
νx(t), and the velocity in y, νy(t).
Global features are features related to the whole signature, for instance the
average signing speed, the signature bounding box, and Fourier descriptors of
the signature's trajectory. Local features correspond to a specific sample
point along the trajectory of the signature; examples include the distance and
curvature change between successive points on the signature trajectory. The
most commonly used online signature acquisition devices are pressure-sensitive
tablets capable of measuring the forces exerted at the pen tip, in addition to
the coordinates of the pen. The pressure information at each point along the
signature trajectory is another example of a commonly used local feature. In
some works these features are compared to find the more robust ones for
signature verification purposes; other systems have used genetic algorithms to
find the most useful features [2][15][16].
3.5 Verification
After the feature extraction process, the test signature is compared with the
reference signatures, yielding the minimum, the average and the maximum of the
dissimilarity values. Choosing any of these dissimilarity values, a decision
is made whether the signature is genuine or a forgery. This comparison is done
using a threshold value for all the reference and test signatures. If the
value is approximately equal to the reference value, the signature is assumed
to be genuine; if the dissimilarity is above the threshold value, the
signature is rejected. This threshold value can be identical for all
signatures, or it can be different for each of them [15][16].
Common Threshold
A common threshold is advantageous because a single optimal threshold serves
all writers. This value is selected after computing the dissimilarities of the
database signatures, and the common threshold is chosen based on the
minimum-error criterion.
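As an illustration, the decision rule described above can be sketched as
follows; the aggregation choice (minimum or maximum) and the numeric values
are hypothetical, not the tuned settings of the system.

```python
def verify(dissimilarities, threshold, combine=min):
    """Accept a test signature if the combined dissimilarity to the
    reference set is at or below the threshold."""
    return combine(dissimilarities) <= threshold

# Hypothetical distances from one test signature to five reference signatures
distances = [0.31, 0.44, 0.28, 0.52, 0.39]
print(verify(distances, threshold=0.35, combine=min))  # minimum rule: accepted
print(verify(distances, threshold=0.35, combine=max))  # maximum rule: rejected
```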
Chapter 4
Proposed Method

This chapter presents the proposed method for signature verification. The
procedure for implementing online handwritten signature verification based on
GMM, LCSS and DTW is shown in figure 4.1.
4.1 Pre-processing
The data must be pre-processed before it is analyzed. In data pre-processing
we normalize the data. Why normalize? We often want to compare scores, or sets
of scores, obtained on different scales. For example, how do we compare a
score of 85 in a cooking contest with a score of 100 on an I.Q. test? To do
so, we need to "eliminate" the unit of measurement; this operation is called
normalizing the data.
Min-Max Normalization
In our case, min-max normalization is used to normalize each of the basic
feature data to the range [0, 1]. Basic features before and after
normalization are shown in figure 4.2. The formula used for min-max
normalization is

    z = (x − min(x)) / (max(x) − min(x)),    (4.1)

where x is the data vector, and min(x) and max(x) are the minimum and maximum
of x.
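Equation 4.1 can be sketched in code as below; the guard for a constant vector
is an illustrative assumption, since the formula is undefined when min(x)
equals max(x).

```python
def min_max_normalize(x):
    """Scale a feature vector to the range [0, 1] (equation 4.1)."""
    lo, hi = min(x), max(x)
    if hi == lo:  # constant vector: avoid division by zero
        return [0.0 for _ in x]
    return [(v - lo) / (hi - lo) for v in x]

print(min_max_normalize([2.0, 5.0, 8.0]))  # -> [0.0, 0.5, 1.0]
```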
Figure 4.2: Basic features (a) Before Normalization, (b) After Normalization
4.2.3 Sine and Cosine Measures

Sine and cosine measures of the angle α are computed with respect to the
horizontal axis, defined for t = 1, 2, ..., n−1.

4.2.4 Length based features

The length-based features are also defined for t = 1, 2, ..., n−1. The
feature ∆l(t) relates to the change in length between successive pen
positions.
4.3 GMM
A GMM [18][19] is a probabilistic model that assumes all the data points are
generated from a mixture of a finite number of Gaussian distributions with
unknown parameters. GMMs are used as a parametric model of the probability
distribution of continuous measurements or features in a biometric system,
such as vocal tract spectral features in a speaker recognition system. GMM
parameters are estimated from training data using the iterative EM algorithm
or by maximum a posteriori estimation from a well-trained prior model [15].
A GMM is a weighted sum of M component Gaussian densities,

    p(x|λ) = Σ_{i=1}^{M} ω_i g(x|µ_i, Σ_i),    (4.6)

where g(x|µ_i, Σ_i) is a component Gaussian density with mean vector µ_i and
covariance matrix Σ_i. The mixture weights satisfy the constraint
Σ_{i=1}^{M} ω_i = 1. The complete GMM is parametrized by the mean vectors,
covariance matrices and mixture weights of all component densities. These
parameters are collectively represented by the notation
λ = {ω_i, µ_i, Σ_i}, i = 1, ..., M.
4.3.2 EM Algorithm
Unfortunately, the expression (equation 4.9) is a non-linear function of the
parameters λ, and direct maximization is not possible. However, the EM
algorithm [20][18] is an iterative method for finding maximum likelihood or
maximum a posteriori estimates of parameters in statistical models where the
model depends on unobserved latent variables. The basic idea of the EM
algorithm is, beginning with an initial model λ, to estimate a new model λ̄
such that p(X|λ̄) ≥ p(X|λ). The new model then becomes the initial model for
the next iteration, and the process is repeated until some convergence
threshold is reached. The steps involved in estimating the parameters of the
GMM are as follows.
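As an illustrative sketch of one such iteration (one-dimensional features, a
hypothetical toy data set, and a fixed number of iterations instead of a
convergence test), the E-step and M-step can be written as:

```python
import math

def em_step(data, weights, means, variances):
    """One EM iteration for a 1-D Gaussian mixture.
    E-step: compute responsibilities; M-step: re-estimate parameters."""
    M, N = len(weights), len(data)
    # E-step: responsibility of each component for each sample
    resp = []
    for x in data:
        dens = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                for w, m, v in zip(weights, means, variances)]
        total = sum(dens)
        resp.append([d / total for d in dens])
    # M-step: update weights, means and variances from the responsibilities
    Ni = [sum(resp[n][i] for n in range(N)) for i in range(M)]
    weights = [Ni[i] / N for i in range(M)]
    means = [sum(resp[n][i] * data[n] for n in range(N)) / Ni[i] for i in range(M)]
    variances = [sum(resp[n][i] * (data[n] - means[i]) ** 2 for n in range(N)) / Ni[i]
                 for i in range(M)]
    return weights, means, variances

# Hypothetical data drawn from two clusters near 0 and 5
data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
w, m, v = [0.5, 0.5], [0.0, 4.0], [1.0, 1.0]
for _ in range(20):
    w, m, v = em_step(data, w, m, v)
print(m)  # the means move toward the cluster centers near 0 and 5
```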
4.4 LCSS
LCSS [8][9] is an algorithm for finding the longest subsequence common to all
sequences in a set (often just two sequences), or for measuring the similarity
of two sequences.
In the example of figure 4.3, the parameter value ε = 0.2 is used. The length
of the common subsequence is seven and the length of the shorter sequence is
ten, so from equation 4.16 the similarity score is Sim0.2 (S, T ) = 2/3.
Figure 4.3: Example of how two sequences are compared using LCSS
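A dynamic-programming sketch of the LCSS similarity for two one-dimensional
sequences is given below. Matching samples within a tolerance ε and
normalizing by the shorter sequence length are common conventions assumed here
for illustration, rather than taken from equation 4.16.

```python
def lcss_length(s, t, eps):
    """Length of the longest common subsequence, where two samples
    match if they differ by at most eps."""
    n, m = len(s), len(t)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(s[i - 1] - t[j - 1]) <= eps:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]

def lcss_similarity(s, t, eps):
    """Similarity in [0, 1]: LCSS length over the shorter sequence length."""
    return lcss_length(s, t, eps) / min(len(s), len(t))

print(lcss_similarity([1, 2, 3, 4], [1, 2, 9, 3], eps=0.0))  # 3 of 4 -> 0.75
```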
4.5 DTW
DTW [11][12] is an algorithm for measuring the similarity between two temporal
sequences, which may vary in speed. For instance, similarities in walking
could be detected using DTW even if one person was walking faster than the
other, or if there were accelerations and decelerations during the course of
an observation. DTW applications include speech, speaker and online signature
recognition, and also shape matching.
Suppose there are two sequences S and T of lengths ns−2 and nt−2. A cost
matrix C is constructed whose (r, s)-th element corresponds to the
dissimilarity value d(r, s) between the r-th point of T and the s-th point of
S. Using the elements of matrix C, the following recursion computes the DTW
distance between T and S [10]:

    ψ(r, s) = d(r, s) + min{ψ(r, s−1), ψ(r−1, s−1), ψ(r−1, s)}.    (4.18)

The number of aligned pairs along the warping path W*p is denoted by lW*p. The
notation (ai, bi) indicates that the feature vector corresponding to the ai-th
sample point of T is aligned to that of the bi-th sample point of Sp. Owing to
the boundary conditions, we have (a1, b1) = (1, 1) and
(a_{lW*p}, b_{lW*p}) = (nt−2, ns−2). The continuity and monotonicity
conditions require that a_{i−1} ≤ ai ≤ a_{i−1}+1, b_{i−1} ≤ bi ≤ b_{i−1}+1,
1 ≤ ai ≤ nt−2 and 1 ≤ bi ≤ ns−2. The warping path W*p can at times give rise
to one-to-many or many-to-one alignments [10][21][22]. The DTW score, or
similarity score, is denoted Dsim and is calculated as

    Dsim = ψ(nt−2, ns−2) / lW*p = (1/lW*p) Σ_{i=1}^{lW*p} d(ai, bi).    (4.20)
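The recursion of equation 4.18 and the path-length normalization of equation
4.20 can be sketched as follows; the absolute-difference cost and the
shortest-path tie-breaking are illustrative choices.

```python
def dtw_score(t, s, dist=lambda a, b: abs(a - b)):
    """Normalized DTW score between sequences t and s: the accumulated
    cost psi divided by the number of aligned pairs on the warping path."""
    nt, ns = len(t), len(s)
    INF = float("inf")
    psi = [[INF] * (ns + 1) for _ in range(nt + 1)]
    steps = [[0] * (ns + 1) for _ in range(nt + 1)]
    psi[0][0] = 0.0
    for r in range(1, nt + 1):
        for c in range(1, ns + 1):
            d = dist(t[r - 1], s[c - 1])
            # predecessors: (r, c-1), (r-1, c-1), (r-1, c), as in the recursion
            best_cost, best_steps = min(
                (psi[r][c - 1], steps[r][c - 1]),
                (psi[r - 1][c - 1], steps[r - 1][c - 1]),
                (psi[r - 1][c], steps[r - 1][c]),
            )
            psi[r][c] = d + best_cost
            steps[r][c] = best_steps + 1
    return psi[nt][ns] / steps[nt][ns]

print(dtw_score([1, 2, 3], [1, 2, 2, 3]))  # -> 0.0: a perfect warped match
```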
Disadvantages of DTW
Advantages of LCSS
Figure 4.4: Example of how comparison takes place in DTW and LCSS.
Chapter 5
Performance Evaluation
This chapter deals with the setup of the evaluation and the factors used for
evaluating the performance of the signature verification system.
5.2 Performance Evaluation Factors
• FAR
• FRR
• EER
• ROC Curve
FAR
The false acceptance rate is the percentage of invalid inputs that are
incorrectly accepted (a match between the input and a non-matching template).
FRR
The false rejection rate is the percentage of valid inputs that are
incorrectly rejected (no match between the input and a matching template).
EER
The EER indicates the accuracy of the system. The FAR and FRR curves intersect
at a certain point, which is called the EER (the point at which the FAR and
FRR have the same value).
In theory, the correct users should always score higher than the impostors; a
single threshold could then be used to separate the correct user from the
impostors. In general, the matching algorithm makes a decision based on a
threshold that determines how close to a template the input needs to be for it
to be considered a match. If the threshold is reduced, there will be fewer
false non-matches but more false accepts; correspondingly, a higher threshold
will reduce the false acceptance rate but increase the false rejection rate.
In some cases impostor patterns generate scores that are higher than the
patterns from the genuine user, so however the threshold is chosen, some
classification errors occur.
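Given dissimilarity scores for genuine and forgery attempts, the FAR and FRR
at one threshold can be computed as below; the accept-when-at-or-below rule
and the score values are illustrative assumptions.

```python
def far_frr(genuine_scores, forgery_scores, threshold):
    """FAR: fraction of forgeries accepted; FRR: fraction of genuines
    rejected. Scores are dissimilarities, so accept means score <= threshold."""
    far = sum(s <= threshold for s in forgery_scores) / len(forgery_scores)
    frr = sum(s > threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

print(far_frr([0.2, 0.3, 0.4, 0.6], [0.5, 0.7, 0.8, 0.9], threshold=0.45))
# -> (0.0, 0.25): no forgeries accepted, one genuine signature of four rejected
```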
ROC Curve
In statistics, a receiver operating characteristic curve, i.e. ROC curve, is a graph-
ical plot that illustrates the diagnostic ability of a binary classifier system as its
discrimination threshold is varied.
The ROC curve is created by plotting the TPR against the FPR at various
threshold settings. Consider a two-class prediction problem (binary
classification), in which the outcomes are labeled either positive (p) or
negative (n). There are four possible outcomes from a binary classifier: if
the outcome of a prediction is p and the actual value is also p, it is called
a TP; however, if the actual value is n, it is a FP. Conversely, a TN occurs
when both the prediction outcome and the actual value are n, and a FN occurs
when the prediction outcome is n while the actual value is p.
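These four outcomes, and the resulting (TPR, FPR) pair for one threshold
setting, can be sketched as follows; the label encoding and sample data are
illustrative.

```python
def roc_point(predictions, actuals):
    """Count TP/FP/TN/FN and return (TPR, FPR) for one threshold setting.
    Labels are 'p' (positive) or 'n' (negative)."""
    tp = sum(p == 'p' and a == 'p' for p, a in zip(predictions, actuals))
    fp = sum(p == 'p' and a == 'n' for p, a in zip(predictions, actuals))
    tn = sum(p == 'n' and a == 'n' for p, a in zip(predictions, actuals))
    fn = sum(p == 'n' and a == 'p' for p, a in zip(predictions, actuals))
    return tp / (tp + fn), fp / (fp + tn)

tpr, fpr = roc_point(['p', 'p', 'n', 'p'], ['p', 'n', 'n', 'p'])
print(tpr, fpr)  # 1.0 0.5: all positives found, one of two negatives accepted
```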
ROC analysis provides tools to select possibly optimal models and to discard
sub-optimal ones independently from (and prior to specifying) the cost context
or the class distribution. ROC analysis is related in a direct and natural way to
cost/benefit analysis of diagnostic decision making.
Figure 5.3: ROC Curve created by plotting the TPR against the FPR at various
threshold settings
Chapter 6
Results and Discussion
This chapter presents the results and their implications, along with the
answers to the research questions and the problems faced during the
implementation.
6.1 Results
The experiment is conducted on 25 genuine signatures and 25 forgery
signatures, with 5 genuine signatures taken as reference signatures. The
performance metric curves FAR, FRR and ROC are shown in the figures below.
Tables give the FAR and FRR values at different thresholds for both
combinations, GMM-LCSS and GMM-DTW.
Table 6.1: FAR values for different thresholds for both combinations GMM-LCSS
and GMM-DTW

Threshold   FAR (GMM-LCSS)   FAR (GMM-DTW)
0.1         0.96             1
0.2         0.96             1
0.3         0.96             1
0.4         0.96             0.92
0.5         0.96             0.84
0.6         0.68             0.68
0.7         0.4              0.36
0.8         0.32             0.2
0.9         0.04             0.08
Table 6.2: FRR values for different thresholds for both combinations GMM-LCSS
and GMM-DTW

Threshold   FRR (GMM-LCSS)   FRR (GMM-DTW)
0.1         0.04             0.16
0.2         0.04             0.16
0.3         0.04             0.2
0.4         0.04             0.2
0.5         0.08             0.28
0.6         0.2              0.6
0.7         0.4              0.76
0.8         0.52             0.92
0.9         0.94             1
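From such a threshold sweep, the EER can be estimated by locating the crossing
of the FAR and FRR curves. The linear interpolation below is a sketch, applied
to the GMM-LCSS values from tables 6.1 and 6.2.

```python
def eer_from_sweep(thresholds, far, frr):
    """Estimate (threshold, EER) by linearly interpolating the first
    sign change of FAR - FRR across the threshold sweep."""
    for i in range(1, len(thresholds)):
        d0 = far[i - 1] - frr[i - 1]
        d1 = far[i] - frr[i]
        if d0 >= 0 >= d1:  # the curves cross inside this interval
            w = d0 / (d0 - d1) if d0 != d1 else 0.0
            t = thresholds[i - 1] + w * (thresholds[i] - thresholds[i - 1])
            e = far[i - 1] + w * (far[i] - far[i - 1])
            return t, e
    return None

# GMM-LCSS values from tables 6.1 and 6.2
thr = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
far = [0.96, 0.96, 0.96, 0.96, 0.96, 0.68, 0.4, 0.32, 0.04]
frr = [0.04, 0.04, 0.04, 0.04, 0.08, 0.2, 0.4, 0.52, 0.94]
print(eer_from_sweep(thr, far, frr))  # crossing near threshold 0.7, EER 0.4
```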
Figures 6.1 and 6.2 show the FAR and FRR curves plotted for different
thresholds. Figure 6.1 shows the FAR curves for both combinations, GMM-DTW
and GMM-LCSS; the two FAR curves are nearly the same, with small differences.
Figure 6.2 shows the FRR curves for both combinations. For the GMM-LCSS
combination, the system obtained a much better curve than for GMM-DTW.
Figure 6.1: Plot showing FAR curves for both combinations GMM-LCSS and
GMM-DTW
Figure 6.2: Plot showing FRR curves for both combinations GMM-LCSS and
GMM-DTW
Figure 6.3: Plot showing FAR and FRR curves for combination GMM-LCSS
Figure 6.3 shows the FAR and FRR curves for the GMM-LCSS combination. From
the plot, the two curves intersect at threshold 0.69, where the equal error
rate is 0.4.

Figure 6.4: Plot showing FAR and FRR curves for combination GMM-DTW

Figure 6.4 shows the FAR and FRR curves for the GMM-DTW combination. From the
plot, the two curves intersect at 0.61, where the equal error rate is 0.6.
Comparing the two, GMM-LCSS has a lower EER than GMM-DTW.
If the input is a genuine signature and the output obtained is positive, i.e.,
the signature is declared genuine, then it is a TP. If the input is a forgery
signature and the output obtained is negative, i.e., the signature is declared
not genuine, then it is a TN. An example of a ROC curve is shown in figure
5.3.
Figures 6.5 and 6.6 are the ROC curves obtained at various threshold levels.
To judge whether the classifier model is good or bad, the ROC curve should be
close to a straight line, without large variations.
Figure 6.7: Plot showing the comparison of ROC curves for both combinations
GMM-LCSS and GMM-DTW
From figure 6.7, the GMM-LCSS model has a better curve than the GMM-DTW model,
because the GMM-LCSS curve is straighter and also rises earlier than that of
GMM-DTW.
6.2 Discussion
6.2.1 Answers to Research Questions
Research Question 1: What are the former methods used for Online
Signature Verification?
Answer: Many methods have been proposed for signature verification over the
past three decades. Global-feature-based and local-feature-based strategies
are the two approaches used to extract the relevant information from the
signatures; these features are discussed in section 3.4. Model-based and
distance-based approaches are the classifier methodologies used for signature
verification. In the model-based approach, HMM, MLP, SVM and other NN models
are used to build a statistical profile of the signature and evaluate the
relations among the features used in making the decision. In the
distance-based approach, the most widely used technique is DTW, which aligns
the sample points of two signatures of different lengths. Other matching
algorithms include LCSS and Edit Distance, as well as classical distance
computation techniques such as Euclidean and City Block.
Another problem faced during implementation was the time taken to execute the
MATLAB script: since the signature data is large, testing it against the
reference signatures became time-consuming.
Chapter 7
Conclusion and Future Work
7.1 Conclusion
A system for online HSV has been implemented, and its verification performance
has been evaluated at various threshold values. The aim of the work was to
propose a new approach for online signature verification and to evaluate its
performance by comparing it with the most widely used technique for comparing
two sequences in biometric verification, DTW. The performance of the system is
evaluated by calculating FAR, FRR, EER and ROC curves. GMM-LCSS is able to
authenticate persons very reliably even if only five genuine signatures are
used for training. It turned out that the performance of the LCSS-based
similarity assessment of online signature data matches that of the DTW-based
technique. GMM-LCSS provides more security because its FAR is low compared to
GMM-DTW, and the equal error rate of the GMM-LCSS model (0.4) is lower than
that of the GMM-DTW model (0.6). From the ROC curves, GMM-LCSS is a more
efficient classifier than GMM-DTW. One main difference between LCSS and DTW is
that the LCSS distance is less distorted, because outlying values are ignored,
whereas the DTW distance is distorted because outlying values are matched.
Finally, our experiments showed that GMM with LCSS authenticates persons very
reliably, with performance better than or matching the best comparison
technique, DTW, with a small equal-error-rate difference.
Future work could explore other model-based approaches, such as the HMM, MLP,
SVM and other NN models. These classifier models can also be used in
combination with other distance-based approaches, like the Edit Distance,
Euclidean and City Block distance computation techniques.
References

[9] C. Gruber, T. Gruber, and B. Sick, Online Signature Verification with New Time Series Kernels for Support Vector Machines. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 500–508.
[10] A. Sharma and S. Sundaram, "A novel online signature verification system based on GMM features in a DTW framework," IEEE Transactions on Information Forensics and Security, vol. 12, no. 3, pp. 705–718, March 2017.
[11] M. Faundez-Zanuy, "On-line signature recognition based on VQ-DTW," Pattern Recognition, vol. 40, no. 3, pp. 981–992, Mar. 2007.
[12] B. Kar, P. K. Dutta, T. K. Basu, C. Vielhauer, and J. Dittmann, "DTW based verification scheme of biometric signatures," in 2006 IEEE International Conference on Industrial Technology, Dec 2006, pp. 381–386.
[13] G. Zapata, J. D. Arias-Londoño, J. Vargas-Bonilla, and J. R. Orozco, "On-line signature verification using Gaussian mixture models and small-sample learning strategies," Revista Facultad de Ingeniería, vol. 2016, Jun 2016.
[14] B. Drott and T. Hassan-Reza, "On-line handwritten signature verification using machine learning techniques with a deep learning approach," Student Paper, 2015.
[15] S. Z. Li, Encyclopedia of Biometrics, 1st ed. Springer Publishing Company, Incorporated, 2009.
[16] A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4–20, Jan. 2004.
[17] J. Ortega-Garcia, J. Fierrez-Aguilar, D. Simon, J. Gonzalez, M. Faundez-Zanuy, V. Espinosa, A. Satue, I. Hernaez, J. J. Igarza, C. Vivaracho, D. Escudero, and Q. I. Moro, "MCYT baseline corpus: a bimodal biometric database," IEE Proceedings - Vision, Image and Signal Processing, vol. 150, no. 6, pp. 395–401, Dec 2003.
[18] D. A. Reynolds and R. C. Rose, "Robust text-independent speaker identification using Gaussian mixture speaker models," IEEE Transactions on Speech and Audio Processing, vol. 3, no. 1, pp. 72–83, Jan 1995.
[19] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10, no. 1, pp. 19–41, Jan. 2000.
[20] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.