CN101366060B - Media validation - Google Patents

Media validation

Info

Publication number
CN101366060B
CN101366060B CN2006800472788A CN200680047278A
Authority
CN
China
Prior art keywords
image
media
media item
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2006800472788A
Other languages
Chinese (zh)
Other versions
CN101366060A (en)
Inventor
Chao He
Gary Ross
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NCR Voyix Corp
Original Assignee
NCR Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/366,147 external-priority patent/US20070140551A1/en
Application filed by NCR Corp
Publication of CN101366060A
Application granted
Publication of CN101366060B

Landscapes

  • Inspection Of Paper Currency And Valuable Securities (AREA)

Abstract

A method of creating a classifier for media validation is described. Information from all of a set of training images of genuine media items is used to form a segmentation map, which is then used to segment each of the training-set images. Features are extracted from the segments and used to form a classifier, which is preferably a one-class statistical classifier. In this way classifiers can be formed quickly and simply, for example when the media items are banknotes of different currencies and denominations, and without the need for examples of counterfeit banknotes. A media validator using such a classifier is described, as well as a method of validating a banknote using such a classifier. In a preferred embodiment a plurality of segmentation maps are formed, having different numbers of segments. If higher-quality counterfeit media items enter the population of media items, the media validator is able to react immediately by switching to a segmentation map having a higher number of segments, without the need for re-training.

Description

Media validation
Cross Reference to Related Applications
This application is a continuation-in-part of U.S. patent application No. 11/366,147, filed on 2 March 2006, which is itself a continuation-in-part of U.S. patent application No. 11/305,537, filed on 16 December 2005. Both applications are hereby incorporated by reference.
Technical Field
The present invention relates to a method and apparatus for media validation, and in particular, but not exclusively, to the validation of media such as banknotes, passports, checks, bonds, stocks and the like.
Background
There is an increasing need to automatically check and validate banknotes of different currencies and denominations in a simple, reliable and cost-effective manner. This is necessary, for example, in self-service devices that accept banknotes (e.g., self-service kiosks, ticket vending machines, automated teller machines arranged to take deposits, self-service currency exchanges, etc.).
Previously, manual methods of banknote validation have involved inspection of the printed image, of see-through features such as watermarks and registration marks, and of the feel and even the smell of banknotes. Other known methods rely on semi-overt features that require machine interrogation, using, for example, magnetic devices, ultraviolet sensors, fluorescence, infrared detectors, capacitance sensors, metal strips and image patterns. By themselves, however, these methods are manual or semi-manual and are not suitable for the many applications where prolonged manual intervention is impossible, such as self-service devices.
There are significant problems to overcome in creating an automatic banknote validator. For example, there are many different types of currency with different security features and even substrate types. It is also common to have different levels of security features at different denominations. It is therefore desirable to provide a generic method for banknote validation that is easy and convenient to perform for those different currencies and denominations.
Previous automatic validation methods typically require a relatively large number of known counterfeit banknote samples to train the classifier. In addition, those previous classifiers were trained to detect only known counterfeits. This is problematic because little or no information is usually available about possible counterfeits, particularly for newly introduced denominations or currencies.
In an earlier paper by Chao He, Mark Girolami and Gary Ross (two of whom are inventors of the present application), entitled "Employing optimized combinations of one-class classifiers for automated currency validation", published in Pattern Recognition 37 (2004), pages 1085-1096, an automatic banknote validation method for classifying banknotes as genuine or counterfeit is described (patent nos. EP1484719, US 20042447169). It involves using a grid structure to segment the image of the whole note into regions. A separate "one-class" classifier is constructed for each region, and a small subset of the region-specific classifiers is combined to provide a comprehensive decision (the term "one-class" is explained in more detail below). The segmentation and the combination of region-specific classifiers are achieved by employing a genetic algorithm to obtain good performance. This method requires a small number of counterfeit samples at the genetic-algorithm stage and is therefore not applicable when no counterfeit data are available.
Previously, currency validation typically involved classifying banknotes as genuine or counterfeit. Recently, however, there is a need to sort banknotes into more than these two categories. For example, an additional class indicates whether a note is "suspect," i.e., falls between the genuine class and the counterfeit class. Regulatory provisions in different jurisdictions typically specify the classes to be used in a banknote validation system, for example in cash-accepting or cash-recycling automated teller machines and other self-service devices such as vending machines and automated kiosks.
The sorting of "suspect" banknotes as opposed to genuine or counterfeit has financial implications for users of automatic banknote validation apparatus. In addition, regulatory and commercial requirements have increased the need to distinguish between suspect notes and genuine or counterfeit notes.
There is also a need to perform automatic currency validation in a manner that is computationally inexpensive to perform in real-time. Many of the problems mentioned above also apply to the validation of other types of media such as passports and checks.
Disclosure of Invention
Media validators that classify media into three or more classes are described. Information from all images in a training set of images of genuine media items is used to form one or more segmentation maps, and each training-set image is then segmented using the segmentation maps. Features are extracted from the segments and one or more classifiers are formed using the features. In this way classifiers can be quickly and easily formed for different types of media items, such as banknotes of different currencies and denominations, without the need for counterfeit samples of the media items. In some examples, the classifier(s) are arranged to operate at a plurality of pre-specified significance levels. In other examples, multiple classifiers are formed from feature information obtained from different segments. In other examples, the segmentation maps relate to different regions of the image of the media item. The media validator may be incorporated into a self-service device such as an automated teller machine.
The method may be performed by software in machine-readable form on a storage medium. The steps of the methods may be performed in any suitable order and/or in parallel as will be clear to a person skilled in the art.
This acknowledges that the software can be a valuable, separately tradable commodity. The term "software" is intended to encompass software that runs on, or controls, "dumb" or standard hardware to carry out the desired functions (and therefore the software essentially defines the functions of the validator, and can be termed a validator even before it is combined with its standard hardware). For similar reasons, it is also intended to encompass software that "describes" or defines the configuration of hardware, such as HDL (hardware description language) software used for designing silicon chips or for configuring universal programmable chips, to carry out the desired functions.
As will be clear to the skilled person, the preferred features may be combined as appropriate and with any of the various aspects of the invention.
Drawings
Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
FIG. 1 is a flow chart of a method of creating a classifier for banknote validation;
FIG. 2 is a flow chart of a method of creating a banknote validator for classifying banknotes into three or more classes;
FIG. 3 is a flow chart of a method of classifying a banknote into three or more classes using a plurality of classifiers, each of which is associated with a segment of a segmentation map;
FIG. 4 is a schematic illustration of the classification of banknotes using the same classifier at different significance levels;
FIG. 5 is a flow chart of a method of classifying banknotes into three or more classes at each of two significance levels using the same classifier;
FIG. 6 is a schematic illustration of a banknote divided into regions;
FIG. 7 is a flow chart of a method of classifying a banknote into three or more classes using a plurality of classifiers, each of which is associated with a different region of the banknote;
FIG. 8 is a flow chart of a method of classifying notes into three or more classes using a combination of local segmentation maps and different levels of significance of the classifier;
FIG. 9 is a flow chart of a method of classifying notes into three or more classes using a segmentation-based classifier and a combination of different significance levels of the classifier;
FIG. 10 is a flow chart of a method of classifying notes into three or more classes using a segment and note region based classifier and a combination of different significance levels of the classifier;
FIG. 11 is a schematic diagram of an apparatus for creating a classifier for banknote validation;
FIG. 12 is a schematic diagram of a banknote validator;
FIG. 13 is a flow chart of a method of validating a banknote;
FIG. 14 is a schematic diagram of a self-service device incorporating a banknote validator.
Detailed Description
Embodiments of the present invention are described below, by way of example only. These examples represent the best modes of practicing the invention presently known to the applicant, and are not the only modes in which the invention can be practiced.
Although the present examples are described and illustrated herein as being implemented in a banknote validation system, the described system is provided herein as an example and not a limitation. Those skilled in the art will appreciate that the present examples are suitable for application in a variety of different types of media validation systems, including but not limited to passport validation systems, check validation systems, bond validation systems, and stock validation systems.
The term "single class classifier" is used to denote a classifier that is formed or constructed using information about samples from only a single class, but which is used to assign newly presented samples to, or not to, that single class. This is different from a conventional binary classifier, which is created using information about samples of two classes and is used to assign new samples to one or other of the two classes. A single class classifier may be considered to define a boundary around a known class, such that samples that fall outside the boundary are considered not to belong to the known class.
As mentioned above, there is a need to classify banknotes into more than just the genuine and counterfeit classes. For example, an additional class indicates whether a note is "suspect," i.e., falls between the genuine and counterfeit classes. An example with four classes is given in the following table. In this example a banknote is classified as not recognized (class 1), counterfeit (class 2), suspect (class 3) or genuine (class 4).
Class | Classification | Properties
1 | Not a banknote / not recognized | No banknote detected because of: wrong image or format; transport error (e.g. double feed); tears, folded corners or missing parts; handwritten bonds, broken cards, etc.; wrong currency
2 | Identified as counterfeit | Image and format are approved, but one or more authentication features are missing or clearly out of specification
3 | Not unambiguously authenticated (suspect banknote) | Image and format are approved, but not all authentication features pass because of quality and/or tolerances; in most cases damaged or soiled banknotes
4 | Certified as genuine | All authentication checks give positive results
FIG. 1 is a schematic flow chart of a method of creating a classifier for banknote validation.
First, we obtain a training set of images of genuine banknotes (see block 10 of fig. 1). These are the same type of images taken of banknotes of the same currency and denomination. The type of image relates to how the image is obtained, which may be in any manner known in the art. For example, a reflectance image, a transmission image, an image on any channel of red, blue or green, a thermal image, an infrared image, an ultraviolet image, an x-ray image or other image types. The images in the training set are aligned and are the same size. As is known in the art, preprocessing may be performed to align the images and scale the images, if necessary.
Next, we create a segmentation map by using information from the training-set images (see block 12 of FIG. 1). The segmentation map includes information on how to divide the image into a plurality of segments. The segments may be non-contiguous, i.e. a given segment may comprise more than one patch in different regions of the image. Preferably, but not necessarily, the segmentation map also specifies the number of segments to be used.
Using the segmentation map, we segment each image in the training set (see block 14 of fig. 1). Then, we extract one or more features from each segment in each training-set image (see block 16 of fig. 1). By the term "feature" we mean any statistic or other characteristic of the segment. For example, mean pixel intensity, median pixel intensity, a pattern of pixel intensities, texture, a histogram, Fourier transform descriptors, wavelet transform descriptors, and/or any other statistic of the segment.
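As a rough sketch of this feature-extraction step (the function name, the choice of mean and standard deviation as the features, and the array representation of images and segmentation maps are illustrative assumptions, not part of the patent):

```python
import numpy as np

def extract_features(image, seg_map):
    """Extract a simple feature vector from one image: the mean and
    standard deviation of pixel intensity within each segment of the
    segmentation map. Other segment statistics (median, histogram,
    texture or wavelet descriptors) could be substituted or added."""
    feats = []
    for label in np.unique(seg_map):
        pixels = image[seg_map == label]   # all pixels assigned to this segment
        feats.extend([pixels.mean(), pixels.std()])
    return np.array(feats)
```

For a map with K segments this yields a 2K-dimensional feature vector per training image, which is the kind of input from which the classifier of block 18 can be formed.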
Then, a classifier is formed by using the feature information (see block 18 of fig. 1). Any suitable type of classifier may be used as is known in the art. In a particularly preferred embodiment of the invention, the classifier is a one-class classifier, which does not require information about counterfeit banknotes. However, a binary classifier or any other type of classifier of any suitable type as known in the art may also be used. For example, if it is desired to classify banknotes into three or more classes (e.g., true, counterfeit, and suspect), a classifier classified into the appropriate number of classes may be used.
The method of figure 1 enables a classifier for validation of banknotes of a particular currency and denomination to be formed simply, quickly, efficiently and automatically. To create classifiers for other currencies or denominations, the method is repeated with the appropriate training set images.
In a particular example, a single class classifier is formed that provides classification into only two classes (genuine or not genuine). In this case it is sometimes necessary to provide means by which additional classes, such as the "suspect" class mentioned above, are possible. To achieve this, we modify the method of FIG. 1 to form more than one classifier, each classifier relating to only one segment of the segmentation map (see FIG. 2). This results in two or more classifiers (assuming two or more segments in the segmentation map). The outputs of the classifiers are then combined to provide classification into more than two classes, as will be described below with reference to fig. 3.
FIG. 2 shows how the method of FIG. 1 can be modified to produce more than one classifier. The method is the same as the method of fig. 1, except that a plurality of classifiers are formed instead of one classifier. Each classifier is formed by using feature information from a single segment.
The method shown in fig. 3 allows banknotes to be classified into more than two categories. A banknote to be classified (or validated) is input to an automatic banknote validator (see block 30). As described above, one or more images of the banknote are captured and pre-processed. The image of the banknote is then segmented into K segments using a segmentation map, which has been formed using any of the methods described herein or any other suitable method (see block 32), where K is an integer of at least 2.
Information is extracted from the K segments (see block 33) and input to each of K classifiers, which have been formed as described herein or in any other suitable manner. If the outputs from all of the classifiers indicate that the note is true, an indication is made that the note is true (see block 35). If the outputs from all of the classifiers indicate that the note is counterfeit, an indication is made that the note is counterfeit (see block 36). If one or more of the classifiers indicates that the banknote is genuine and one or more other classifiers indicates that the banknote is counterfeit, an indication is made that the banknote is "suspect" (see block 37).
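The decision logic of blocks 35-37 can be sketched as follows (a minimal illustration; the function name and the boolean accept/reject representation of each per-segment classifier's output are assumptions):

```python
def combine_verdicts(accepts):
    """Combine the accept/reject outputs of the K per-segment one-class
    classifiers into a three-way decision: all accept -> genuine,
    all reject -> counterfeit, a mixture -> suspect."""
    if all(accepts):
        return "genuine"
    if not any(accepts):
        return "counterfeit"
    return "suspect"
```

The "suspect" outcome thus falls out naturally from disagreement among two-class (one-class) component classifiers, without any classifier having been trained on a suspect class.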
More detail is now given regarding the formation of the segmentation map.
Previously, in EP1484719 and US20042447169 (as mentioned in the background section) we used segmentation techniques and genetic algorithm methods involving the use of a mesh structure for the image plane to form the segmentation map. This necessarily uses some information about counterfeit banknotes and incurs increased computational costs when performing genetic algorithm searches.
The present invention uses a different method of forming segmentation maps that does not require the use of genetic algorithms or equivalent methods to search for good segmentation maps among a large number of possible segmentation maps. This reduces computational cost and improves performance. In addition, no information about counterfeit banknotes is required.
It is believed that in a counterfeiting process it is often difficult to reproduce the entire note with consistent quality, and therefore certain regions of a note are harder to replicate successfully than others. We have therefore recognised that, instead of using a strictly uniform grid segmentation, we can improve banknote validation by using more sophisticated segmentations; empirical tests we have performed confirm this. Segmentation based on morphological characteristics such as pattern, color and texture gives better performance in detecting counterfeit banknotes. However, conventional image segmentation methods, such as edge detection, are difficult to apply here: applied to each image in the training set they give different results for each training-set item, and it is then difficult to align corresponding segments across the different training-set images. To avoid this alignment problem, in a preferred embodiment we use a so-called "spatio-temporal image decomposition".
Details of the method of forming the segmentation map are now given. In summary, the method can be considered to specify how the image plane is divided into a plurality of segments, each segment comprising a plurality of specified pixels. As mentioned above, the segments may be non-contiguous. Importantly, this specification is derived from information from all the images in the training set. In contrast, a segmentation using a strict grid structure requires no information from the training-set images.
Each segmentation map thus captures information about the relationships between corresponding image elements across all the images in the training set.
The images in the training set can be thought of as stacked and aligned in the same orientation. Each pixel location in the banknote image plane then has a "pixel intensity profile", comprising the intensity at that location in each of the training-set images. The pixel locations in the image plane are clustered into segments using any suitable clustering algorithm, such that pixel locations within a segment have similar or related intensity profiles.
In a preferred example, we use these pixel intensity profiles. However, this is not essential; other information drawn from all the images in the training set may be used instead. For example, an intensity profile of a block of four neighboring pixels, or the average intensity of the pixels at the same location in each training-set image, may be used.
A specific preferred embodiment of our method of forming a segmentation map will now be described in detail. It is based on the approach taught in the following publication: Avidan, S.: "EigenSegments: A spatio-temporal decomposition of an ensemble of images", Lecture Notes in Computer Science, 2352: 747-758, 2002.
Given an image ensemble $\{I_i \mid i = 1, 2, \ldots, N\}$ that has been aligned and scaled to the same size $r \times c$, each image $I_i$ can be represented in vector form by its pixels as $I_i = [\alpha_{1i}, \alpha_{2i}, \ldots, \alpha_{Mi}]^T$, where $\alpha_{ji}$ ($j = 1, 2, \ldots, M$) is the intensity of the $j$th pixel in the $i$th image and $M = r \cdot c$ is the total number of pixels in an image. The vectors $I_i$ for all images in the ensemble (with the mean subtracted) can then be stacked to form a design matrix

$$A = [I_1, I_2, \ldots, I_N].$$

A row vector $[\alpha_{j1}, \alpha_{j2}, \ldots, \alpha_{jN}]$ of $A$ can be seen as the intensity profile of a particular pixel (the $j$th) across the $N$ images. If two pixels come from the same pattern region of the image, they are likely to have similar intensity values and therefore a strong temporal correlation. Note that the term "temporal" here need not correspond to a true time axis; it denotes the axis of the ensemble that runs through the different images. Our algorithm attempts to find these correlations and to divide the image plane spatially into regions of pixels with similar temporal behavior. We measure this correlation by defining a metric between intensity profiles. A simple choice is the Euclidean distance, i.e. the temporal correlation between two pixels $j$ and $k$ can be expressed as

$$d(j, k) = \sqrt{\sum_{i=1}^{N} (\alpha_{ji} - \alpha_{ki})^2}.$$

The smaller $d(j, k)$, the stronger the correlation between the two pixels.
To decompose the image plane spatially using the temporal correlation between pixels, we run a clustering algorithm on the pixel intensity profiles (the rows of the design matrix $A$). This yields clusters of temporally related pixels. The most straightforward choice is the K-means algorithm, but any other clustering algorithm may be used. As a result, the image plane is divided into segments of temporally related pixels. These segments can then be used as a template to segment all the images in the training set, and a classifier can be constructed from features extracted from those segments of all the training-set images.
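A minimal sketch of this decomposition, assuming a list of aligned, equally sized grayscale training images; the hand-rolled K-means with deterministic farthest-point initialisation stands in for whatever clustering algorithm is actually chosen:

```python
import numpy as np

def kmeans(rows, k, iters=100):
    """Minimal K-means over row vectors using Euclidean distance
    (the metric d(j, k) above), with farthest-point initialisation."""
    centers = [rows[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(rows - c, axis=1) for c in centers], axis=0)
        centers.append(rows[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        dists = np.linalg.norm(rows[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centers = np.array([rows[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels

def build_segmentation_map(images, n_segments):
    """Cluster pixel locations by their intensity profiles across the
    training ensemble (the rows of the design matrix A) to form a
    segmentation map of integer segment labels."""
    r, c = images[0].shape
    # One column per image, one row (intensity profile) per pixel location.
    A = np.stack([im.reshape(-1) for im in images], axis=1).astype(float)
    A -= A.mean(axis=1, keepdims=True)   # subtract the mean image
    return kmeans(A, n_segments).reshape(r, c)
```

Note that the resulting segments need not be contiguous in the image plane, exactly as the description allows: pixels anywhere in the image that behave alike across the ensemble end up in the same segment.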
In order to achieve training without counterfeit banknotes, a one-class classifier is preferred. Any suitable type of one-class classifier known in the art may be used, such as neural-network-based or statistics-based one-class classifiers.
Suitable statistical methods for one-class classification are typically based on maximization of the log-likelihood ratio under the null hypothesis that the observation under consideration is drawn from the target class. These include the D² test, which assumes the target class has a multivariate Gaussian distribution (described in Morrison, DF: Multivariate Statistical Methods (third edition), McGraw-Hill, New York, 1990). In the case of an arbitrary non-Gaussian distribution, the density of the target class can be estimated using, for example, a semi-parametric Gaussian mixture (described in Bishop, CM: Neural Networks for Pattern Recognition, Oxford University Press, New York, 1995) or a non-parametric Parzen window (described in Duda, RO, Hart, PE, Stork, DG: Pattern Classification (second edition), John Wiley and Sons, New York, 2001), and the distribution of the log-likelihood ratio under the null hypothesis can be obtained by bootstrap methods (described in Wang, S, Woodward, WA, Gray, HL et al.: A new test for outlier detection from a mixture distribution, Journal of Computational and Graphical Statistics, 6(3): 285-299, 1997).
Other methods that may be used for one-class classification include Support Vector Data Description (SVDD) (described in Tax, DMJ, Duin, RPW: Support vector domain description, Pattern Recognition Letters, 20(11-12): 1191-1199, 1999), the related "support estimation" (described in Hayton, P, Schölkopf, B, Tarassenko, L, Anuzis, P: Support vector novelty detection applied to jet engine vibration spectra, Advances in Neural Information Processing Systems, 13, 2001), and Extreme Value Theory (EVT) (described in Roberts, SJ: Novelty detection using extreme value statistics, IEE Proceedings on Vision, Image and Signal Processing, 146(3): 124-129, 1999). In SVDD the support of the data distribution is estimated, while EVT estimates the distribution of extreme values. For this particular application a large number of genuine banknote samples is available, and therefore a reliable estimate of the density of the target class can be obtained. Thus, in a preferred embodiment we choose a one-class classification method that explicitly estimates the density distribution, although this is not essential. In a preferred embodiment we use the parametric D² test.
In the preferred embodiment, the statistical hypothesis test for our one-class classifier is as follows:
Assume $N$ independent, identically distributed $p$-dimensional vector samples (one feature set per banknote) $x_1, \ldots, x_N \in C$, drawn from an underlying density function $p(x \mid \theta)$ with parameter $\theta$. For a new point $x_{N+1}$, the hypothesis test is $H_0: x_{N+1} \in C$ versus $H_1: x_{N+1} \notin C$, where $C$ denotes the region in which the null hypothesis is true and $C$ is defined by $p(x \mid \theta)$. Assuming a uniform distribution under the alternative hypothesis, the generalized likelihood ratio of the null and alternative hypotheses

$$\lambda = \frac{\sup_{\theta \in \Theta} L_0(\theta)}{\sup_{\theta \in \Theta} L_1(\theta)} = \frac{\sup_{\theta} \prod_{n=1}^{N+1} p(x_n \mid \theta)}{\sup_{\theta} \prod_{n=1}^{N} p(x_n \mid \theta)} \qquad (1)$$

may be used as the test statistic for the null hypothesis. In the preferred embodiment we use the log-likelihood ratio as the test statistic for validation of a newly presented banknote.
1) Feature vectors with multivariate Gaussian density: Assuming that the feature vectors describing individual points in the sample are multivariate Gaussian, a test derived from the likelihood ratio (1) evaluates whether each point in the sample shares a common mean (described in Morrison, DF: Multivariate Statistical Methods (third edition), McGraw-Hill, New York, 1990). Assume $N$ independent, identically distributed $p$-dimensional vector samples $x_1, \ldots, x_N$ from a multivariate normal distribution with mean $\mu$ and covariance $C$, with sample estimates $\hat{\mu}_N$ and $\hat{C}_N$. Denote a randomly selected sample by $x_0$. The associated squared Mahalanobis distance

$$D^2 = (x_0 - \hat{\mu}_N)^T \hat{C}_N^{-1} (x_0 - \hat{\mu}_N) \qquad (2)$$

can be transformed to a central $F$ distribution with $p$ and $N-p-1$ degrees of freedom by

$$F = \frac{N(N-p-1)D^2}{p\left((N-1)^2 - N D^2\right)}. \qquad (3)$$
Then, if

$$F > F_{\alpha;\,p,\,N-p-1} \qquad (4)$$

the hypothesis of a common population mean vector for x_0 and the remaining x_i is rejected, where F_{α; p, N-p-1} is the upper α·100% point of the F distribution with (p, N-p-1) degrees of freedom.
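Equations (2)-(4) can be sketched directly with NumPy and SciPy. The function name is illustrative, and the biased (1/N) covariance convention is an assumption chosen to match the update formula (7) later in the text:

```python
import numpy as np
from scipy import stats

def f_test_common_mean(x0, sample, alpha=0.05):
    """Test of eqs. (2)-(4): does x0 share the sample's population mean?

    Sketch only; returns (F, F_crit, is_outlier). Uses the biased (1/N)
    covariance estimate, an assumption consistent with eq. (7).
    """
    N, p = sample.shape
    mu_N = sample.mean(axis=0)
    C_N = np.cov(sample, rowvar=False, bias=True)
    d = x0 - mu_N
    D2 = d @ np.linalg.solve(C_N, d)                             # eq. (2)
    F = (N - p - 1) * N * D2 / (p * (N - 1) ** 2 - N * p * D2)   # eq. (3)
    F_crit = stats.f.ppf(1 - alpha, p, N - p - 1)                # upper alpha point
    return F, F_crit, F > F_crit                                 # eq. (4)
```

A point at the sample mean gives D² = 0 and F = 0, which never exceeds the critical value; increasingly distant points drive F up toward rejection.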
Now assume that x_0 is selected as the observation vector having the maximum D^2 statistic. The distribution of the maximum D^2 from a random sample of size N is complicated. However, a conservative approximation to the upper 100α percent critical value can be obtained from the Bonferroni inequality. Therefore, if
$$F > F_{\frac{\alpha}{N};\,p,\,N-p-1} \qquad (5)$$
then we can conclude that x_0 is an outlier.
In fact, both equation (4) and equation (5) may be used for outlier detection.
When an additional datum x_{N+1} becomes available, we can use the following incremental estimates of the mean and covariance in designing a test of whether the new point forms part of the original sample, i.e. the mean
$$\hat{\mu}_{N+1} = \frac{1}{N+1}\left\{ N\hat{\mu}_N + x_{N+1} \right\} \qquad (6)$$

and covariance

$$\hat{C}_{N+1} = \frac{N}{N+1}\,\hat{C}_N + \frac{N}{(N+1)^2}\,(x_{N+1} - \hat{\mu}_N)(x_{N+1} - \hat{\mu}_N)^T. \qquad (7)$$
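Equations (6) and (7) can be sanity-checked in a few lines: the incremental update from N to N+1 samples must reproduce the batch maximum-likelihood estimates. A sketch with NumPy; the function name is illustrative:

```python
import numpy as np

def update_mean_cov(mu_N, C_N, x_new, N):
    """Incremental mean (6) and covariance (7) when sample N+1 arrives.

    C_N is the biased (1/N) maximum-likelihood covariance; the update
    reproduces the batch estimate computed over all N+1 samples.
    """
    mu_new = (N * mu_N + x_new) / (N + 1)                          # eq. (6)
    d = x_new - mu_N
    C_new = N / (N + 1) * C_N + N / (N + 1) ** 2 * np.outer(d, d)  # eq. (7)
    return mu_new, C_new
```

The incremental form matters in a validator because the raw training set need not be stored: only the running mean and covariance are kept.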
By using expressions (6) and (7) together with the matrix inversion lemma, equation (2) for the reference set of N samples and the (N+1)th check point becomes
$$D^2 = \sigma_{N+1}^T \, \hat{C}_{N+1}^{-1} \, \sigma_{N+1} \qquad (8)$$

where

$$\sigma_{N+1} = x_{N+1} - \hat{\mu}_{N+1} = \frac{N}{N+1}\,(x_{N+1} - \hat{\mu}_N) \qquad (9)$$

$$\hat{C}_{N+1}^{-1} = \frac{N+1}{N}\left( \hat{C}_N^{-1} - \frac{\hat{C}_N^{-1}(x_{N+1}-\hat{\mu}_N)(x_{N+1}-\hat{\mu}_N)^T \hat{C}_N^{-1}}{N+1+(x_{N+1}-\hat{\mu}_N)^T \hat{C}_N^{-1} (x_{N+1}-\hat{\mu}_N)} \right) \qquad (10)$$
Denoting by $D_{N+1,N}^2 = (x_{N+1}-\hat{\mu}_N)^T \hat{C}_N^{-1} (x_{N+1}-\hat{\mu}_N)$ the squared Mahalanobis distance of the new point from the original sample estimates, we obtain

$$D^2 = \frac{N\,D_{N+1,N}^2}{N+1+D_{N+1,N}^2} \qquad (11)$$
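The closed form (11) avoids inverting the updated covariance altogether: only the distance to the original estimates is needed. Below is a quick numerical check of the identity against direct evaluation of equation (2) with the updated estimates (6) and (7), under the biased-covariance convention; the data are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))                # N reference samples
x_new = rng.normal(size=4)                  # candidate point x_{N+1}
N = len(X)

mu_N = X.mean(axis=0)
C_N = np.cov(X, rowvar=False, bias=True)    # 1/N covariance, matching eq. (7)
d = x_new - mu_N
D2_ref = d @ np.linalg.solve(C_N, d)        # D^2_{N+1,N}

# direct evaluation of eq. (2) using the updated estimates (6) and (7)
mu_up = (N * mu_N + x_new) / (N + 1)
C_up = N / (N + 1) * C_N + N / (N + 1) ** 2 * np.outer(d, d)
s = x_new - mu_up
D2_direct = s @ np.linalg.solve(C_up, s)

# closed form of eq. (11); the two agree to floating-point precision
D2_closed = N * D2_ref / (N + 1 + D2_ref)
assert np.allclose(D2_direct, D2_closed)
```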
Therefore, a new point x_{N+1} can be tested on the basis of the common estimates of the mean $\hat{\mu}_N$ and covariance $\hat{C}_N$. Although the assumption of multivariate Gaussian feature vectors has been found to be a suitable practical choice for many applications, it is often not strictly true in practice. In the following section we abandon this assumption and consider arbitrary densities.
2) Feature vectors with arbitrary density: from a finite data sample x_1, ..., x_N drawn from an arbitrary density p(x), a probability density estimate $\hat{p}(x;\hat{\theta}_N)$ can be obtained using any suitable semi-parametric (e.g. Gaussian mixture model) or non-parametric (e.g. Parzen window) density estimation method known in the art.
This density can then be used in calculating the log-likelihood ratio (1). Unlike the multivariate Gaussian case, the test statistic λ has no analytical distribution under the null hypothesis. Therefore, a numerical bootstrap method may be employed to obtain the otherwise non-analytic null distribution under the estimated density, and various threshold values λ_crit can then be established from the resulting empirical distribution. It can be seen that in the limit N → ∞ the likelihood ratio can be estimated by
$$\lambda = \frac{\sup_{\theta\in\Theta} L_0(\theta)}{\sup_{\theta\in\Theta} L_1(\theta)} \;\rightarrow\; \hat{p}\left(x_{N+1};\,\hat{\theta}_N\right) \qquad (12)$$
where $\hat{p}(x_{N+1};\hat{\theta}_N)$ denotes the probability density of x_{N+1} under the model estimated from the original N samples. After generating B bootstrap sets of N samples from the reference data set and using them to estimate the density parameters $\hat{\theta}_N^i$, B bootstrap replications of the test statistic, $\lambda_{crit}^i = \hat{p}(x_{N+1};\hat{\theta}_N^i)$, i = 1, ..., B, can be obtained by randomly selecting the (N+1)th sample from the reference set. By sorting the $\lambda_{crit}^i$ in ascending order, a threshold λ_α can be defined such that the null hypothesis is rejected at the desired significance level if λ ≤ λ_α, where λ_α is the j-th smallest $\lambda_{crit}^i$ and α = j/(B+1).
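A minimal sketch of this bootstrap thresholding, using SciPy's `gaussian_kde` as the Parzen-style density estimator; the function name, the choice of B and the estimator itself are illustrative assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

def bootstrap_threshold(reference, B=200, alpha=0.05, seed=0):
    """Empirical null distribution of the statistic in eq. (12). Sketch only.

    For each of B bootstrap replicates: fit a density to N points resampled
    from the reference set, then evaluate it at a randomly selected reference
    point standing in for x_{N+1}. Returns lambda_alpha such that an observed
    density <= lambda_alpha rejects the null hypothesis.
    """
    rng = np.random.default_rng(seed)
    N = len(reference)
    lam = []
    for _ in range(B):
        idx = rng.integers(0, N, size=N)
        kde = gaussian_kde(reference[idx].T)   # density from one bootstrap set
        x_test = reference[rng.integers(0, N)]
        lam.append(kde(x_test)[0])             # replicated test statistic
    lam.sort()
    j = max(int(alpha * (B + 1)), 1)           # alpha = j / (B + 1)
    return lam[j - 1]
```

A newly presented note whose estimated density $\hat{p}(x_{N+1})$ falls below the returned threshold is then rejected at roughly the requested significance level.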
Preferably, the method of forming the classifier is repeated for different numbers of segments and validated using images of banknotes known to be genuine. The number of segments that gives the best performance is then selected, and the classifier using that number of segments is used. We have found that the optimum number of segments is typically from about 2 to 15, although any suitable number of segments may be used.
As described above, in one set of embodiments a one-class classifier is used. A one-class classifier may be considered to define a boundary around a known class, such that samples falling outside the boundary are considered not to belong to the known class. However, one-class classifiers typically classify objects into only two classes. This is problematic in situations where, for example, banknotes need to be classified as counterfeit, genuine or suspect. We propose to solve this problem by changing the significance level, or confidence level, used by the one-class classifier.
FIG. 4 is a schematic diagram showing the effect of different significance levels on a one-class classifier. Assume that a given one-class classifier has a significance level α1, indicated by the elliptical boundary 41 in FIG. 4. In FIG. 4 each banknote is represented by a dot or a cross depending on whether it is actually genuine or actually counterfeit. Most genuine banknotes in this example fall within boundary 41 and are classified as genuine by the one-class classifier. Suppose the significance level of the classifier is now reduced to α2, indicated by boundary 40 in FIG. 4. Some counterfeit banknotes now fall within boundary 40 and are therefore incorrectly classified as genuine. However, we can use the two significance levels together to indicate a third class: notes that fall between boundary 41 and boundary 40 may be classified as suspect. In this way, by introducing a number of different significance levels to the one-class classifier, we can increase the number of classes in the classification.
Advantageously, the example one-class classifier described in detail herein need not be retrained when the significance level is changed.
FIG. 5 is a flow chart of a method of validating notes using a one-class classifier at different significance levels. Two significance levels, one higher than the other, are predefined and stored, for example by manual configuration (see block 50). Banknote validation is performed as described herein using the one-class classifier at the higher significance level (see block 51). If the note is classified as genuine, an output indicating genuine is made (see blocks 52 and 53). If the note is not classified as genuine, the validation is repeated using the same one-class classifier at the lower significance level (see block 54). If the banknote is again classified as counterfeit, this result is output (see blocks 55 and 57). However, if the note is now classified as genuine, an indication is made that the note is "suspect" (see block 56). That is, the automatic validation process is repeated at a different significance level for the same note; if the one-class classifier gives a different result in each case, the banknote is classified as "suspect". The one-class classifier effectively performs a test on the probability distribution of the morphological characteristics of genuine banknotes; the boundary in that probability distribution is defined by setting the significance level to a target false rejection rate for genuine banknotes.
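Using the F statistic of equation (3) as the one-class test, the FIG. 5 flow reduces to comparing one statistic against two F quantiles. The function name and significance values below are illustrative assumptions; note that no retraining is needed when the levels change, since only the quantiles move:

```python
from scipy import stats

def three_way_decision(F, p, N, alpha_high=0.10, alpha_low=0.01):
    """Flow of FIG. 5 with the F test of eq. (3) as the one-class classifier.

    Sketch only. alpha_high draws the tighter boundary (41 in FIG. 4),
    alpha_low the looser boundary (40); both values are illustrative.
    """
    f_strict = stats.f.ppf(1 - alpha_high, p, N - p - 1)   # higher significance level
    f_relaxed = stats.f.ppf(1 - alpha_low, p, N - p - 1)   # lower significance level
    if F <= f_strict:
        return "genuine"         # passes the strict test
    if F <= f_relaxed:
        return "suspect"         # fails strict, passes relaxed
    return "counterfeit"         # fails both
```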
In another embodiment, we sort notes into more than two categories by forming two or more segmentation maps (which may or may not have the same number of segments), each relating to a region of the banknote, as will now be described in detail with reference to FIG. 6. This results in a plurality of classifiers, one for each segmentation map, whereby each classifier is associated with a different region of the banknote. These classifiers are referred to herein as local classifiers.
FIG. 6 is a schematic representation of the face of a banknote of a particular denomination and currency. The face of the banknote is divided into regions 61, 62 and 63, indicated by dashed lines in FIG. 6. Two or more regions are used, and the regions may be positioned, sized and arranged in any suitable manner. In a preferred example, the regions are selected such that each region contains one or more security features 64 of the banknote (e.g. holograms, security threads and watermarks), although this is not essential. As shown in FIG. 6, the regions may be uniform and contiguous, although this too is not required. Advantageously, by selecting regions such that each contains one or more security features, we are able to assess the likelihood that one or more of those security features is absent. This helps classify banknotes into classes that include counterfeit, genuine and "suspect". The regions may be selected in any suitable manner, for example by using image processing or an image recognition system to identify the security features. For example, infrared or thermal imaging may be used to locate a suitable security feature such as a watermark, and customized illumination may be used to locate holograms or other complex diffraction-grating security features. Alternatively, the regions may be manually configured in advance for different currencies and denominations.
FIG. 7 is a flow chart of a method of using local classifiers for banknote validation. The banknote to be authenticated is input to the validator (see block 70) and an image of the banknote is taken (see block 71). The image is divided into R designated regions (see block 72), the same regions that were used to form the segmentation maps and corresponding classifiers. The appropriate segmentation map is then used on each region of the image to segment that region (see block 73), and information is extracted from each segment of each region. This information is input to the appropriate ones of the R classifiers (see block 75). If all of the classifiers indicate a pass, i.e. the note is genuine, the note is indicated as genuine (see block 76). If all classifiers indicate a fail, the note is indicated as counterfeit (see block 77). Otherwise, the note is indicated as suspect (see block 78).
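The decision rule of FIG. 7 is a simple vote fusion over the R local classifiers; a sketch, with an illustrative function name:

```python
def fuse_region_votes(votes):
    """Decision rule of FIG. 7 for R local classifiers (illustrative).

    votes: iterable of booleans, True when that region's classifier passed.
    """
    votes = list(votes)
    if all(votes):
        return "genuine"        # every region passed
    if not any(votes):
        return "counterfeit"    # every region failed
    return "suspect"            # mixed verdicts
```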
One or more of the methods described herein for sorting notes into two or more categories may also be combined.
As described above, one approach involves using multiple classifiers, each associated with one segment of the segmentation map. This method will now be referred to as method a.
Another approach involves using a single classifier with multiple levels of significance. This method will now be referred to as method B.
Another method involves the use of multiple local classifiers, each associated with a different region of the banknote image. This method will now be referred to as method C.
Possible combinations of these methods include (but are in no way limited to):
A then B;
C then B (as shown in FIG. 8);
C then A;
C then A then B.
FIG. 8 is a flow chart of an example combining method C followed by method B. In FIG. 8 the steps of method C are indicated by blocks 82, 83 and 84, and the steps of method B by blocks 85, 86, 87, 88 and 89. A banknote to be checked is input (block 80), an image is taken (block 81), and the image is divided into S regions (block 82). S local segmentation maps are then created using the methods described herein (block 83), and information is extracted from the S regions using the appropriate segmentation maps (block 84). Classifier tests are performed for all S classifiers using the higher significance level (block 85). If all classifiers indicate a genuine banknote, genuine is indicated (block 87). Otherwise, the classifiers repeat the test using the lower significance level. If all of the classifiers now indicate a genuine note, suspect is indicated (block 88); otherwise, counterfeit is indicated (block 89). In this way we can give appropriate credit to the customer. This is useful for genuine banknotes that have become worn after a number of circulation cycles: if only a strict (high) significance level were used, such banknotes would very likely be classified as counterfeit by all S local classifiers, resulting in financial loss to the customer. By testing again at a relaxed (lower) significance level, such a note may be identified as genuine by all S classifiers and can therefore be classified as suspect for further investigation. This avoids loss to the customer. At the same time, the security of the self-service device or other apparatus using the process is maintained, because truly counterfeit banknotes are unaffected and are still identified. The method also gives banks the flexibility to customise how strict the validation is and to standardise what quality of banknote is placed in the suspect category, since both S and the significance levels are adjustable.
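Combining methods C and B as in FIG. 8 amounts to applying the vote at two thresholds in sequence. The sketch below assumes each region's one-class classifier reduces to a scalar test statistic compared against a threshold; all names are illustrative:

```python
def validate_c_then_b(region_stats, thr_strict, thr_relaxed):
    """Flow of FIG. 8 (method C then method B), as a sketch.

    region_stats: one one-class test statistic per region; a region passes
    a threshold when its statistic is <= that threshold. thr_strict <
    thr_relaxed correspond to the higher and lower significance levels.
    """
    if all(s <= thr_strict for s in region_stats):
        return "genuine"        # all S regions pass the strict test
    if all(s <= thr_relaxed for s in region_stats):
        return "suspect"        # e.g. a worn but genuine note
    return "counterfeit"        # some region fails even the relaxed test
```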
FIG. 9 is a flow chart of an example combining method A followed by method B. The steps of method A are indicated by blocks 92 and 93, and the steps of method B by blocks 94 to 98. Steps 90 and 91 correspond to steps 80 and 81 of FIG. 8.
FIG. 10 is a flow chart of an example combining method C, then method A, then method B. The steps of method C are indicated by blocks 100 and 101, and the step of method A by block 102. In this case a plurality of classifiers is used, one for each of the S banknote regions and each of the K segments within a region. Validation is performed at two significance levels using each of the S×K classifiers (see blocks 103 to 107).
An advantage of banknote validation methods that use more than two classes (e.g. counterfeit, genuine, suspect) is that they can increase customer confidence in, and acceptance of, the automated banknote validator. If a note is classified as suspect, it may be accepted and credited to the customer's account for a short period while a manual or other off-line investigation of the note's validity is conducted.
FIG. 11 is a schematic diagram of an apparatus 110 for creating a sorter 112 for banknote validation. It includes:
an input 111 configured to access a training set of banknote images;
a processor 113 configured to create a segmentation map using the training set images;
a segmenter 114 configured to segment each training set image using a segmentation map;
a feature extractor 115 configured to extract one or more features from each segment of each training set image; and
a classification formation device 116 configured to form a classifier using the feature information;
wherein the processor is configured to create the segmentation map based on information from all images in the training set, for example by using the spatio-temporal image decomposition described above.
Fig. 12 is a schematic diagram of the bill validator 121. It includes:
an input configured to receive at least one image 120 of a banknote to be validated;
the segmentation map 122;
a processor 123 configured to segment the image of the banknote using the segmentation map;
a feature extractor 124 configured to extract one or more features from each segment of the banknote image;
a classifier 125 configured to classify the banknote as valid or invalid based on the extracted features; wherein the segmentation map is formed based on information about each of the set of training images of the banknote. Note that the devices of fig. 12 need not be independent of each other, and these devices may be integrated.
FIG. 13 is a flow chart of a method of validating a banknote. The method comprises the following steps:
accessing at least one image of the banknote to be validated (block 130);
accessing a segmentation map (block 131);
segmenting the image of the note using the segmentation map (block 132);
extracting features from each segment of the banknote image (block 133);
classifying the banknote as valid or invalid based on the extracted features using a classifier (block 134);
wherein the segmentation map is formed based on information relating to each of a set of training images of the banknote. The steps of the method may be performed in any suitable order or combination, as is known in the art. The segmentation map implicitly includes information about each image in the training set, since it is formed based on that information; however, the segmentation map itself may be as simple as a file listing the pixel addresses to be included in each segment.
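The steps above can be sketched end to end, treating the segmentation map as an integer label per pixel (equivalent to the list of pixel addresses per segment just mentioned). The feature choice and names are illustrative assumptions:

```python
import numpy as np

def validate_item(image, seg_map, classifier):
    """The flow-chart steps: segment via the map, extract per-segment
    features, classify. Sketch only.

    seg_map holds an integer segment label per pixel; segments may be
    non-contiguous. classifier is any callable returning True for valid.
    """
    labels = np.unique(seg_map)
    # one feature per segment: here simply the mean intensity (illustrative)
    features = np.array([image[seg_map == s].mean() for s in labels])
    return "valid" if classifier(features) else "invalid"
```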
Fig. 14 is a schematic diagram of a self-service device 141 having a bill validator 143. It includes:
means 140 for accepting banknotes;
an imaging device 142 for obtaining a digital image of the banknote; and
the bill validator 143 described above.
The methods described herein may be performed on images or other representations of banknotes, which images/representations are of any suitable type. For example, images on the red, blue and green channels or other images as described above.
The segmentation may be formed based on only one type of image, such as the red channel. Alternatively, the segmentation map may be formed based on images of all types (e.g., red, blue, and green channels). Multiple segmentation maps may also be formed, one for each image or combination of image types. For example, there may be three segmentation maps, one for the red channel image, one for the blue channel image, and one for the green channel image. In this case, during validation of a single note, an appropriate segmentation map/classifier is used depending on the type of image selected. Thus, each of the above methods may be modified by using different types of images and corresponding segmentation maps/classifiers.
As with the imaging device, the means for accepting banknotes may be of any suitable type known in the art. A feature selection algorithm may be used to select one or more features for use in the feature extraction step. In addition to the feature information discussed herein, a classifier may be formed based on specific information relating to a particular denomination or currency of banknote, for example information associated with regions that are particularly rich in data in terms of color, shape or spatial frequency in a given currency and denomination.
As will be apparent to the skilled person: any range or device value given herein may be extended or altered without loss of effect.
It should be understood that the above description of the preferred embodiments is given by way of example only and that various modifications may be made by those skilled in the art.

Claims (27)

1. A media validator comprising:
(i) an input configured to receive at least one image of a media item to be authenticated;
(ii) a segmentation map comprising information about the relationship of corresponding image elements between all images in a training set of images of the media item;
(iii) a processor configured to segment an image of the media item using the segmentation map;
(iv) a feature extractor configured to extract one or more features from each segment of the image of the media item;
(v) one or more classifiers together configured to classify the media item into one of at least three classes based on the extracted features,
wherein the segmentation map is created by using information of each image in the training set, pixel locations are clustered into segments, and the segments are non-contiguous across the training set images such that a given segment comprises more than one slice in different regions of the image.
2. A media validator as claimed in claim 1 comprising only one classifier, the classifier being configured to operate at each of a plurality of pre-specified confidence levels.
3. A media validator as claimed in claim 1 comprising a plurality of classifiers each formed from feature information extracted from a different one of the segments.
4. A media validator as claimed in claim 1 comprising means for dividing an image of the media item to be validated into a plurality of regions and further comprising a plurality of segmentation maps, each segmentation map relating to a different one of the regions.
5. A media validator as claimed in claim 4 comprising a plurality of classifiers each relating to a different one of the segmentation maps.
6. A media validator as claimed in claim 3 wherein each classifier is further configured to operate at each of a plurality of pre-specified confidence levels.
7. A media validator as claimed in claim 5 wherein each classifier is further configured to operate at each of a plurality of pre-specified confidence levels.
8. A media validator as claimed in claim 4 comprising a plurality of classifiers each relating to a different one of the segmentation maps and a different segment of the segmentation map.
9. A media validator as claimed in claim 8 wherein each classifier is further configured to operate at each of a plurality of pre-specified confidence levels.
10. A media validator as claimed in claim 1 wherein the image of the media item is of a particular type and which further comprises a plurality of segmentation maps, each segmentation map being for a different type of media item image.
11. A media validator as claimed in claim 1 wherein the classifier is a one-class classifier.
12. A media validator as claimed in claim 1 comprising means for combining results from a plurality of classifiers.
13. A media validator as claimed in claim 1 wherein the segmentation maps are created by using an average of pixel intensities of pixels at the same location of each image in a training set.
14. A method of authenticating a media item, comprising:
(i) accessing at least one image of a media item to be authenticated;
(ii) accessing a segmentation map comprising information about relationships of corresponding image elements between all images in a set of training images of the media item;
(iii) segmenting the image of the media item using the segmentation map;
(iv) extracting features from each segment of the image of the media item;
(v) using one or more classifiers together to classify the media item into one of at least three classes based on the extracted features,
wherein the segmentation map is created by using information of each image in the training set, pixel locations are clustered into segments, and the segments are non-contiguous across the training set images such that a given segment comprises more than one slice in different regions of the image.
15. The method of claim 14, further comprising: classifying the media item using only one classifier configured to operate at each of a plurality of pre-specified confidence levels.
16. The method of claim 14, comprising: classifying the media item using a plurality of classifiers, each of the plurality of classifiers comprising feature information extracted from a different one of the segments.
17. The method of claim 14, further comprising: the image of the media item is divided into a plurality of regions and a plurality of segmentation maps are accessed, each segmentation map relating to a different one of the regions.
18. The method of claim 17, further comprising: classifying the media item using a plurality of classifiers, each classifier associated with a different one of the segmentation maps.
19. The method of claim 16, further comprising: each of the classifiers is run at a plurality of pre-specified confidence levels.
20. The method of claim 18, further comprising: each of the classifiers is run at a plurality of pre-specified confidence levels.
21. The method of claim 17, further comprising: classifying the media item using a plurality of classifiers, each classifier being associated with a different one of the segmentation maps and a different segment of the segmentation map.
22. The method of claim 21, further comprising: each of the classifiers is run at a plurality of pre-specified confidence levels.
23. The method of claim 14, wherein the image of the media item is of a particular type and which includes accessing a plurality of segmentation maps, each segmentation map for a different type of media item image.
24. The method of claim 14, comprising: combining results from multiple classifiers.
25. The method of claim 14, wherein the segmentation map is created by using an average of pixel intensities of pixels at the same location of each image in a training set.
26. A self-service device comprising:
(i) a device for receiving an item of media,
(ii) an imaging device for obtaining a digital image of a media item; and
(iii) a media validator comprising:
(i) an input configured to receive at least one image of a media item to be authenticated;
(ii) a segmentation map comprising information about the relationship of corresponding image elements between all images in a training set of images of the media item;
(iii) a processor configured to segment an image of the media item using the segmentation map;
(iv) a feature extractor configured to extract one or more features from each segment of the image of the media item;
(v) one or more classifiers together configured to classify the media item into one of at least three classes based on the extracted features,
wherein the segmentation map is created by using information of each image in the training set, pixel locations are clustered into segments, and the segments are non-contiguous across the training set images such that a given segment comprises more than one slice in different regions of the image.
27. The self-service device of claim 26, wherein the segmentation map is created by using an average of pixel intensities of pixels at the same location of each image in a training set.
CN2006800472788A 2005-12-16 2006-12-14 Media validation Expired - Fee Related CN101366060B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US30553705A 2005-12-16 2005-12-16
US11/305,537 2005-12-16
US11/366,147 US20070140551A1 (en) 2005-12-16 2006-03-02 Banknote validation
US11/366,147 2006-03-02
PCT/GB2006/004676 WO2007068930A1 (en) 2005-12-16 2006-12-14 Detecting improved quality counterfeit media items

Publications (2)

Publication Number Publication Date
CN101366060A CN101366060A (en) 2009-02-11
CN101366060B true CN101366060B (en) 2012-08-29

Family

ID=40206435

Family Applications (4)

Application Number Title Priority Date Filing Date
CN2006800473583A Expired - Fee Related CN101331526B (en) 2005-12-16 2006-09-26 Banknote validation
CN2006800475165A Expired - Fee Related CN101331527B (en) 2005-12-16 2006-12-14 Processing images of media items before validation
CN2006800472788A Expired - Fee Related CN101366060B (en) 2005-12-16 2006-12-14 Media validation
CN2006800473687A Active CN101366061B (en) 2005-12-16 2006-12-14 Detecting improved quality counterfeit media items

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN2006800473583A Expired - Fee Related CN101331526B (en) 2005-12-16 2006-09-26 Banknote validation
CN2006800475165A Expired - Fee Related CN101331527B (en) 2005-12-16 2006-12-14 Processing images of media items before validation

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN2006800473687A Active CN101366061B (en) 2005-12-16 2006-12-14 Detecting improved quality counterfeit media items

Country Status (1)

Country Link
CN (4) CN101331526B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010055974A1 (en) 2010-12-23 2012-06-28 Giesecke & Devrient Gmbh Method and device for determining a class reference data set for the classification of value documents
CN102110323B (en) * 2011-01-14 2012-11-21 深圳市怡化电脑有限公司 Method and device for examining money
WO2012145909A1 (en) * 2011-04-28 2012-11-01 中国科学院自动化研究所 Method for detecting tampering with color digital image based on chroma of image
CN102306415B (en) * 2011-08-01 2013-06-26 广州广电运通金融电子股份有限公司 Portable valuable file identification device
CN102565074B (en) * 2012-01-09 2014-02-05 西安印钞有限公司 System and method for rechecking images of suspected defective products by small sheet sorter
US8983168B2 (en) * 2012-04-30 2015-03-17 Ncr Corporation System and method of categorising defects in a media item
US9299225B2 (en) * 2014-06-23 2016-03-29 Ncr Corporation Value media dispenser recognition systems
CN105184954B (en) * 2015-08-14 2018-04-06 深圳怡化电脑股份有限公司 A kind of method and banknote tester for detecting bank note
DE102015016716A1 (en) * 2015-12-22 2017-06-22 Giesecke & Devrient Gmbh Method for transmitting transmission data from a transmitting device to a receiving device for processing the transmission data and means for carrying out the method
CN108074320A (en) * 2016-11-10 2018-05-25 深圳怡化电脑股份有限公司 A kind of image-recognizing method and device
CN108806058A (en) * 2017-05-05 2018-11-13 深圳怡化电脑股份有限公司 A kind of paper currency detecting method and device
CN107705417A (en) * 2017-10-10 2018-02-16 深圳怡化电脑股份有限公司 Recognition methods, device, finance device and the storage medium of bank note version
EP3729334A1 (en) * 2017-12-20 2020-10-28 Alpvision SA Authentication machine learning from multiple digital representations
CN110910561B (en) * 2018-09-18 2021-11-16 深圳怡化电脑股份有限公司 Banknote contamination identification method and device, storage medium and financial equipment
TWI709188B (en) * 2018-09-27 2020-11-01 財團法人工業技術研究院 Fusion-based classifier, classification method, and classification system
CN111599081A (en) * 2020-05-15 2020-08-28 上海应用技术大学 Method and system for collecting and dividing RMB banknotes
CN113538809B (en) * 2021-06-11 2023-08-04 深圳怡化电脑科技有限公司 Data processing method and device based on self-service equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5729623A (en) * 1993-10-18 1998-03-17 Glory Kogyo Kabushiki Kaisha Pattern recognition apparatus and method of optimizing mask for pattern recognition according to genetic algorithm
EP1484719A2 (en) * 2003-06-06 2004-12-08 Ncr International Inc. Currency validation
CN1630843A (en) * 2001-10-30 2005-06-22 松下电器产业株式会社 Method, system, device and computer program for mutual authentication and content protection

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3369088B2 (en) * 1997-11-21 2003-01-20 富士通株式会社 Paper discrimination device
WO2004023403A1 (en) * 2002-08-30 2004-03-18 Fujitsu Limited Device, method and program for identifying paper sheet
US7194105B2 (en) * 2002-10-16 2007-03-20 Hersch Roger D Authentication of documents and articles by moiré patterns
JP2005018688A (en) * 2003-06-30 2005-01-20 Asahi Seiko Kk Banknote recognition device using a reflective optical sensor


Also Published As

Publication number Publication date
CN101331527B (en) 2011-07-06
CN101366061B (en) 2010-12-08
CN101331526A (en) 2008-12-24
CN101366060A (en) 2009-02-11
CN101331527A (en) 2008-12-24
CN101366061A (en) 2009-02-11
CN101331526B (en) 2010-10-13

Similar Documents

Publication Publication Date Title
CN101366060B (en) Media validation
JP5219211B2 (en) Banknote confirmation method and apparatus
US7639858B2 (en) Currency validation
JP5344668B2 (en) Method for automatically confirming securities media item and method for generating template for automatically confirming securities media item
Zeggeye et al. Automatic recognition and counterfeit detection of Ethiopian paper currency
Alnowaini et al. Yemeni paper currency detection system
Dhar et al. Paper currency detection system based on combined SURF and LBP features
EP3410409B1 (en) Media security validation
US10438436B2 (en) Method and system for detecting staining
Pen et al. Developing a Model for Detection of Ethiopian Fake Banknote Using Deep Learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120829

Termination date: 20191214
