CN101331527B - Processing images of media items before validation - Google Patents
Processing images of media items before validation
- Publication number
- CN101331527B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Inspection Of Paper Currency And Valuable Securities (AREA)
Abstract
Automatic media item validation is typically problematic in the case of media items that are damaged or marked. The present invention relates to a method of processing images of media items before automatic validation which addresses this problem. Aberrant image elements are identified, for example, using a bandpass filter. The aberrant image elements are replaced by neutral decision making data. This data is neutral with respect to the decision making process, which is a specified automatic media item validation process. For example, for each aberrant image element an estimated distribution is accessed for that image position across all images in a training set of images of media items. A value is selected from the estimated distribution on the basis of a significance level which is related to a significance level used by the automatic media item validation process. In this way media items which have tears, holes, marks or soiling may be successfully processed by an automatic media item validator.
Description
Cross Reference to Related Applications
This application is a continuation-in-part of U.S. patent application No. 11/366,147, filed on March 2, 2006, which is itself a continuation-in-part of U.S. patent application No. 11/305,537, filed on December 16, 2005. Application No. 11/366,147 and application No. 11/305,537 are hereby incorporated by reference.
Technical Field
The present invention relates to a method and apparatus for processing an image of a media item prior to validation. It relates in particular, but is in no way limited, to processing images of media items such as banknotes, passports, bonds, stock certificates, checks and the like.
Background
There is an increasing need to automatically check and validate banknotes of different currencies and denominations in a simple, reliable and cost-effective manner. This is necessary, for example, in self-service devices that accept banknotes (e.g., self-service kiosks, ticket vending machines, automated teller machines arranged to take deposits, self-service currency exchange machines, etc.).
Previously, manual methods of currency validation have involved visual inspection of banknotes, examination of see-through features such as watermarks and alignment marks, as well as feel and even smell. Other known methods rely on semi-overt features that require semi-manual interrogation. For example, magnetic means, ultraviolet sensors, fluorescence, infrared detectors, capacitance, metal strips, image patterns and the like have been used. However, by themselves these methods are manual or semi-manual and are not suitable for the many applications where prolonged manual intervention is not possible, for example in self-service devices.
There are significant problems to overcome to create an automatic currency validator. For example, there are many different types of currency that have different security features and even substrate types. It is also common to have different levels of security features at different denominations. It is therefore desirable to provide a general method for those different currencies and denominations that is easy and convenient to perform currency validation.
Previous automatic validation methods typically require a relatively large number of known counterfeit banknote samples to train the classifier. In addition, those previous classifiers were trained to detect only known counterfeits. This is problematic because there is often little or no information available about possible counterfeits. It is especially problematic, for example, for newly introduced denominations or newly introduced currencies.
In an earlier paper by Chao He, Mark Girolami and Gary Ross (two of whom are inventors of the present application), entitled "Employing optimized combinations of one-class classifiers for automated currency validation", published in Pattern Recognition 37 (2004), pages 1085-1096, an automated banknote validation method is described (patent numbers EP1484719, US 20042447169). This involves using a grid structure to segment the image of the whole note into regions. A separate "one-class" classifier is built for each region, and a small subset of the region-specific classifiers is combined to provide a comprehensive description (the term "one-class" is explained in more detail below). Segmentation and combination of the region-specific classifiers to achieve good performance is accomplished by employing a genetic algorithm. This method requires a small number of counterfeit samples at the genetic algorithm stage, and as such is not applicable when no counterfeit data can be obtained.
There is also a need to perform automatic banknote validation in a computationally inexpensive manner that can be performed in real time.
Automatic currency validation is often problematic in the event that a note is damaged or marked. For example if the banknote has tears, holes, smudges and/or folded corners. The aging of notes and the soiling that occurs during wear of notes is also problematic for automated currency validation systems.
Many of the problems mentioned above also apply to the authentication of other types of media such as passports, stocks, bonds, checks, etc.
Disclosure of Invention
Automatic media item validation is often problematic in the case of damaged or marked media items. A method of processing an image of a media item prior to automatic validation is described that addresses this problem. The aberrant image elements are identified by, for example, using a band pass filter. The aberrant image elements are replaced with neutral decision-making data. This data is neutral with respect to the decision-making process, which is a specified automatic media item validation process. For example, for each aberrant image element, an estimated distribution for that image position across all images in a training set of images of media items is accessed. A value is selected from the estimated distribution on the basis of a significance level related to the significance level used by the automatic media item validation process. In this manner, an automated media item validator may successfully process media items having tears, holes, marks or soiling.
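By way of illustration only, the replacement step described above can be sketched as follows. This is a minimal pure-Python sketch, not the patented implementation: the dict-based image layout, the function names, and the use of a simple empirical quantile cut to approximate "selecting a value at a significance level" are all assumptions.

```python
import random

def neutral_value(training_values, significance=0.05, rng=random):
    """Pick a decision-neutral replacement brightness for one pixel position.

    training_values: brightnesses observed at this position across a training
    set of aligned genuine-note images. Draws uniformly from the central
    (1 - significance) portion of the empirical distribution, so the
    replacement cannot itself look extreme at that significance level.
    """
    ordered = sorted(training_values)
    cut = int(len(ordered) * significance / 2)
    return rng.choice(ordered[cut:len(ordered) - cut])

def replace_aberrant(image, aberrant_positions, training_per_position,
                     significance=0.05):
    """Return a copy of `image` with each aberrant pixel replaced by a
    neutral value drawn from the estimated distribution for that position."""
    repaired = dict(image)
    for pos in aberrant_positions:
        repaired[pos] = neutral_value(training_per_position[pos], significance)
    return repaired
```

The repaired image can then be passed to the validation process unchanged, since the substituted values lie well inside the distribution the validator was trained on.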
The methods described herein may be performed by software in machine-readable form on a storage medium. The steps of the methods may be performed in any suitable order and/or in parallel as will be clear to a person skilled in the art.
This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software which runs on, or controls, "dumb" or standard hardware to carry out the desired functions (and therefore the software essentially defines the functions of the register, and can therefore be termed a register, even before it is combined with its standard hardware). For similar reasons, it is also intended to encompass software which "describes" or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
As will be clear to the skilled person, the preferred features may be combined as appropriate and with any of the various aspects of the invention.
Drawings
Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
FIG. 1 is a flow chart of a method of identifying and replacing anomalous image elements in a banknote image;
FIG. 2 is a flow chart of a method of creating a classifier for banknote validation;
FIG. 3 is a flow chart of a method of replacing an anomalous image element in a banknote image;
FIG. 4 is a schematic diagram of an apparatus for creating a classifier for banknote validation;
FIG. 5 is a schematic view of a bill validator;
FIG. 6 is a flow chart of a method of validating a banknote;
FIG. 7 is a schematic diagram of a self-service device with a bill validator.
Detailed Description
Embodiments of the present invention are described below by way of example only. These examples represent the best modes of putting the invention into practice currently known to the applicant, although they are not the only ways in which this can be achieved. Although the present examples are described and illustrated herein as being implemented for automatic banknote validation, the system described is provided as an example and not a limitation. As those skilled in the art will appreciate, the present examples are suitable for application in a variety of different types of media validation systems, including but not limited to passport validation systems, check validation systems, and validation systems for bonds and stock certificates.
The term "single class classifier" is used to denote a classifier that is formed or constructed using information about samples from only a single class, but which is used to assign newly appearing samples to or not to the single class. This is different from a conventional binary classifier that is created by using information about samples of two classes and is used to assign new samples to one or the other of the two classes. A single class classifier may be considered to define a boundary around a known class, such that samples that fall off the boundary are considered not to belong to the known class.
As mentioned above, automatic currency validation is often problematic in the case of damaged or marked banknotes. For example if the banknote has tears, holes, smudges and/or folded corners. The aging of notes and the soiling that occurs during wear of notes is also problematic for automated note verification systems.
For example, an automatic banknote validation system may use a process whereby an image of a banknote to be validated is divided into segments. The segments may be formed by using a grid structure or other methods that separately use spatial location information. Alternatively, the segments may be formed by using a segmentation map that uses information about the correlation values of image elements between corresponding image elements in each item of the set of training banknote images.
This causes problems in the automatic banknote validation process if the banknote to be validated is damaged or marked, because some of the information is anomalous or unreliable. For example, a hole in a banknote can result in pixels of abnormally high brightness in an image of the banknote. Similarly, stains or marks on the note can result in pixels of abnormally low brightness in the image of the note.
Where the image of the banknote to be validated is divided into segments as part of the validation process, one option is to ignore those segments that contain anomalous data (e.g. holes, marks, creases, tears, etc.). However, where only a small number of segments are used, this means that a large proportion of the data is ignored. In addition, if the ignored segments happen to contain important banknote regions such as security features (e.g., holograms, security threads, watermarks, etc.), the confidence of the banknote validator is reduced.
To address these issues, we identify aberrant image elements in an image of a media item, such as a banknote to be validated, and replace those aberrant image elements with decision-neutral data. By "decision-neutral data" or "neutral decision-making data" we mean data that will not bias the result of the pre-specified media item validation process. The media item validation process may be of any suitable type, including but not limited to the specific media item validation processes described herein.
FIG. 1 is a schematic flow chart of a method of processing an image of a banknote to be authenticated.
An image of the banknote to be validated is captured using any suitable technique, as described in more detail below (see block 1). The image is normalized and/or pre-processed (see block 2), for example by aligning the image in a particular orientation and scaling it to a particular size. This allows variations in the sensor and lighting environment to be taken into account. An optional step is then introduced (see block 3) to determine one or more of the currency, serial number, denomination and orientation of the note using a recognition algorithm. If the recognition algorithm fails, it can be retried with reference to a different edge or corner of the banknote image. If all four edges have been tried and all have failed, the note is rejected (see block 7). Otherwise, processing continues and anomalies are sought in the image (see block 4).
The anomalies may be identified in any suitable manner. For example, missing regions or holes in a note often cause image regions of abnormally high brightness. In this case, all image regions, elements or pixels above a specified threshold may be identified as anomalous.
Some currencies printed on plastic (polymer) substrates use transparent windows. Such windows also give rise to image areas of high brightness. In order that these windows are not identified as anomalies, knowledge about the expected locations and sizes of the windows may be taken into account when identifying anomalies.
Smudges, marker-pen marks, staples, creases and other such damage cause abnormally dark areas in the banknote image. In this case, all image regions, elements or pixels having a brightness below a specified threshold may be identified as anomalous. Alternatively, information about the expected brightness of image elements for a particular currency and denomination may be taken into account when identifying anomalies.
In order to quickly identify picture elements having a brightness above or below a specified threshold, a band pass filter may be used.
Once the anomalies are identified, they are removed by replacing them with decision-neutral data (see block 5). Optionally, the proportion of the banknote image identified as anomalous is checked. If that proportion is above a specified threshold, the banknote is rejected, if it has not already been rejected at the recognition algorithm stage (see block 7). This ensures that counterfeits created by deliberately obscuring parts of a counterfeit note are rejected. In addition, in this way a limit can be set on the amount of anomalous data that may be replaced: as the proportion of the banknote image replaced by decision-neutral data approaches 100%, the ability to detect counterfeits is reduced.
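The thresholding and proportion check of blocks 4, 5 and 7 can be sketched as follows. This is an illustrative pure-Python sketch only; the threshold values, the 25% rejection fraction and the dict-based image representation are assumptions, not values taken from the patent.

```python
def find_anomalies(image, low=30, high=225):
    """Flag pixels brighter than `high` (e.g. holes, tears) or darker than
    `low` (e.g. ink marks, staples, smudges) as anomalous.  A band-pass
    style check: only brightnesses inside [low, high] pass."""
    return {pos for pos, v in image.items() if v < low or v > high}

def screen_banknote(image, max_anomalous_fraction=0.25):
    """Return the anomalous positions if the note may proceed to the
    replacement and validation stages, or None to reject it outright
    because too much of the image is unreliable."""
    anomalies = find_anomalies(image)
    if len(anomalies) / len(image) > max_anomalous_fraction:
        return None
    return anomalies
```

A note returning None here corresponds to the reject branch (block 7); otherwise the flagged positions are handed to the replacement step (block 5).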
The resulting modified image of the banknote is then transmitted to the banknote validation system (see block 6) to be validated.
The process of forming decision-neutral data will be described in more detail below with reference to fig. 3.
In a particular set of embodiments, the pre-specified banknote validation process uses a classifier formed as will now be described.
FIG. 2 is a schematic flow chart diagram of a method of creating a classifier for banknote validation.
First, we obtain a training set of images of genuine banknotes (see block 10 of FIG. 2). These are images of the same type taken of banknotes of the same currency and denomination. The type of image relates to how the image is obtained, which may be in any manner known in the art: for example, a reflection image, a transmission image, an image on any of the red, blue or green channels, a thermal image, an infrared image, an ultraviolet image, an x-ray image, or another image type. The images in the training set are aligned and of the same size. If necessary, pre-processing may be performed to align and scale the images, as is known in the art.
Next, we create a segmentation map by using information from the training set images (see block 12 of FIG. 2). The segmentation map comprises information about how to divide the image into a plurality of segments. The segments may be discontinuous, i.e. a given segment may comprise more than one patch in different regions of the image. Preferably, but not essentially, the segmentation map also comprises a specified number of segments to be used.
Using the segmentation map, we segment each image in the training set (see block 14 of fig. 2). Then, we extract one or more features from each segment in each training set image (see block 16 of fig. 2). By the term "feature" we mean any statistic or other characteristic of the segment. For example, mean pixel intensity, median pixel intensity, pattern of pixel intensities, texture, histogram, fourier transform descriptor, wavelet transform descriptor, and/or any other statistic in the segment.
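For example, extracting a couple of the simple per-segment statistics listed above might look like the following sketch, under the assumption (not from the patent) that both the image and the segmentation map are dicts keyed by pixel position:

```python
from statistics import mean, median

def segment_features(image, segmentation_map):
    """image: {pixel_pos: brightness}; segmentation_map: {pixel_pos: segment_id}.
    Returns {segment_id: (mean brightness, median brightness)}, i.e. one
    small feature vector per segment, as fed to the classifier."""
    by_segment = {}
    for pos, value in image.items():
        by_segment.setdefault(segmentation_map[pos], []).append(value)
    return {seg: (mean(vals), median(vals))
            for seg, vals in by_segment.items()}
```

Richer features (histograms, texture measures, wavelet descriptors) would be added in the same per-segment fashion.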
Then, a classifier is formed by using the feature information (see block 18 of fig. 2). Any suitable type of classifier may be used as is known in the art. In a particularly preferred embodiment of the invention, the classifier is a one-class classifier, which does not require information about counterfeit banknotes. However, a binary classifier or any other type of classifier of any suitable type as known in the art may also be used.
The method of figure 2 enables a classifier for validation of banknotes of a particular currency and denomination to be formed simply, quickly and efficiently. To create classifiers for other currencies or denominations, the method is repeated with the appropriate training set images.
Previously, in EP1484719 and US20042447169 (mentioned in the background section), we used segmentation techniques involving applying a grid structure to the image plane, together with a genetic algorithm method, to form the segmentation map. This necessarily uses some information about counterfeit banknotes, and incurs increased computational cost in performing the genetic algorithm search.
Embodiments described herein may use different methods of forming segmentation maps that do not require the use of genetic algorithms or equivalent methods to search for good segmentation maps among a large number of possible segmentation maps. This reduces computational cost and improves performance. In addition, no information about counterfeit banknotes is required.
It is believed that in a counterfeiting process it is often difficult to reproduce the entire note with consistent quality, and therefore certain regions of a note are harder to replicate successfully than others. We therefore recognised that, instead of using a strictly uniform grid segmentation, we can improve banknote validation by using a more sophisticated segmentation. Empirical tests we have performed indicate that this is indeed the case: segmentation based on morphological characteristics such as pattern, colour and texture gives better performance in detecting counterfeit banknotes. However, it is difficult to apply a conventional image segmentation method, such as one using an edge detector, to each image in the training set, because different results are obtained for each training set item and it is difficult to align the corresponding segments across the different training set images. To avoid this alignment problem, in a preferred embodiment we use what is referred to as "spatio-temporal image decomposition".
Details of the method of forming the segmentation map are now given. In summary, the method can be thought of as specifying how to divide the image plane into a plurality of segments, each segment comprising a plurality of specified pixels. As mentioned above, the segments may be discontinuous. Importantly, this specification is made on the basis of information from all the images in the training set. In contrast, segmentation using a strict grid structure requires no information from the images in the training set.
For example, each segmentation map includes information about the relationship of corresponding image elements between all images in the training set.
The images in the training set are considered to be stacked on top of one another, aligned and in the same orientation. Consider a given pixel position in the banknote image plane: that position has a "pixel intensity profile" comprising information about the brightness of the pixel at that position in each training set image. The pixel positions in the image plane are clustered into segments using any suitable clustering algorithm, such that pixel positions within a segment have similar or related pixel intensity profiles.
In a preferred example, we use these pixel intensity profiles. However, it is not necessary to use a pixel brightness profile. Other information from all images in the training set may also be used. For example, a luminance profile of a block of 4 neighboring pixels or an average of the pixel luminance of pixels at the same location in each training set image.
A specific preferred embodiment of our method of forming a segmentation map will now be described in detail. It is based on the approach taught in the following publication: Avidan, S.: "EigenSegments: a spatio-temporal decomposition of an ensemble of images", Lecture Notes in Computer Science, 2352: 747-758, 2002.
Given an image ensemble that has been aligned and scaled to the same size r × c, I_i, i = 1, 2, ..., N, each image I_i can be represented in vector form by its pixels as [a_{1i}, a_{2i}, ..., a_{Mi}]^T, where a_{ji} (j = 1, 2, ..., M) is the brightness of the jth pixel in the ith image and M = r · c is the total number of pixels in an image. The vectors I_i for all images in the ensemble (with the mean subtracted) can then be stacked to generate a design matrix A = [I_1, I_2, ..., I_N]. A row vector [a_{j1}, a_{j2}, ..., a_{jN}] in A can be seen as the brightness profile of a particular pixel (the jth) across the N images. If two pixels are from the same pattern region of the image, they are likely to have similar brightness values and therefore a strong temporal correlation. Note that the term "temporal" here need not correspond exactly to a time axis, but is used to indicate the axis through the different images of the ensemble. Our algorithm attempts to find these correlations and spatially divide the image plane into regions of pixels with similar temporal behaviour. We measure this correlation by defining a metric between the brightness profiles. A simple way is to use the Euclidean distance, i.e. the temporal correlation between two pixels j and k can be expressed as

d(j,k) = \sqrt{ \sum_{i=1}^{N} (a_{ji} - a_{ki})^2 }.

The smaller d(j,k), the stronger the correlation between the two pixels.
To decompose the image plane spatially using the temporal correlation between pixels, we apply a clustering algorithm to the pixel brightness profiles (the rows of the design matrix A). This results in clusters of temporally correlated pixels. The most straightforward choice is the K-means algorithm, but any other clustering algorithm may be used. As a result, the image plane is divided into segments of temporally correlated pixels. This segmentation can then be used as a template to segment all the images in the training set, and a classifier can be constructed from features extracted from those segments of all the images in the training set.
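The clustering step just described can be illustrated with a minimal K-means over brightness profiles (the rows of the design matrix A). This is a plain-Python sketch with illustrative data and parameters, not a production implementation:

```python
import random
from math import dist  # Euclidean distance between profiles (Python 3.8+)

def kmeans_profiles(profiles, k, iterations=20, seed=0):
    """profiles: one brightness profile per pixel position, i.e. a list of
    N brightnesses (that pixel's value in each training image).
    Returns a segment label for each pixel position."""
    rng = random.Random(seed)
    centres = [list(p) for p in rng.sample(profiles, k)]
    labels = [0] * len(profiles)
    for _ in range(iterations):
        # assign each profile to its nearest centre
        labels = [min(range(k), key=lambda c: dist(p, centres[c]))
                  for p in profiles]
        # move each centre to the mean of its members
        for c in range(k):
            members = [p for p, lab in zip(profiles, labels) if lab == c]
            if members:
                centres[c] = [sum(col) / len(col) for col in zip(*members)]
    return labels
```

Pixel positions receiving the same label form one (possibly discontinuous) segment of the segmentation map.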
In order to achieve training without using counterfeit banknotes, a one-class classifier is preferred. Any suitable type of one-class classifier known in the art may be used, for example neural-network-based or statistics-based one-class classifiers.
Suitable statistical methods for one-class classification are typically based on maximisation of the log-likelihood ratio under the null hypothesis that the observation under consideration is drawn from the target class. These include the D² test, which assumes the target class follows a multivariate Gaussian distribution (described in Morrison, DF: Multivariate Statistical Methods (third edition), McGraw-Hill, New York, 1990). In the case of an arbitrary non-Gaussian distribution, the density of the target class can be estimated by using, for example, a semi-parametric mixture of Gaussians (described in Bishop, CM: Neural Networks for Pattern Recognition, Oxford University Press, New York, 1995) or a non-parametric Parzen window (described in Duda, RO, Hart, PE, Stork, DG: Pattern Classification (second edition), John Wiley and Sons, Inc., New York, 2001), and the distribution of the log-likelihood ratio under the null hypothesis can be obtained by sampling techniques such as the bootstrap (described in Wang, S, Woodward, WA, Gray, HL et al.: A new test for outlier detection from a multivariate mixture distribution, Journal of Computational and Graphical Statistics, 6(3): 285-299, 1997).
Other methods that may be employed for one-class classification include Support Vector Data Description (SVDD) (described in Tax, DMJ, Duin, RPW: Support vector domain description, Pattern Recognition Letters, 20(11-12): 1191-1199, 1999), also known as "support estimation" (described in Hayton, P, Schölkopf, B, Tarassenko, L, Anuzis, P: Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra, Advances in Neural Information Processing Systems 13, eds. Leen, Todd K, Dietterich, Thomas G, and Tresp, Volker, MIT Press, 946-952, 2001), and Extreme Value Theory (EVT) (described in Roberts, SJ: Novelty detection using extreme value statistics, IEE Proceedings on Vision, Image and Signal Processing, 146(3): 124-129, 1999). In SVDD the support of the data distribution is estimated, while EVT estimates the distribution of extreme values. For this particular application a large number of genuine banknote samples is available, and therefore a reliable estimate of the distribution of the target class can be obtained. Thus, in a preferred embodiment, we choose a one-class classification method that explicitly estimates the density distribution, although this is not essential. In the preferred embodiment we use the parametric, D²-test-based one-class classification method.
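As a toy illustration of the non-parametric route mentioned above, the sketch below estimates a density with a one-dimensional Gaussian Parzen window and obtains a rejection threshold for the log-likelihood statistic by bootstrap resampling of the training data. The bandwidth, sample sizes and the one-dimensional setting are all illustrative assumptions, not details from the patent.

```python
import math
import random

def parzen_log_density(x, sample, h=0.5):
    """Gaussian-kernel Parzen window estimate of log p(x) from a 1-D sample."""
    total = sum(math.exp(-((x - s) / h) ** 2 / 2) for s in sample)
    if total == 0.0:
        return -math.inf  # far outside the support of the training data
    return math.log(total / (len(sample) * h * math.sqrt(2 * math.pi)))

def bootstrap_threshold(sample, alpha=0.05, draws=500, seed=0):
    """Empirical alpha-quantile of the statistic under the null hypothesis,
    obtained by bootstrap resampling of the training data; new points
    scoring below this threshold are rejected as not belonging to the
    target class."""
    rng = random.Random(seed)
    stats = sorted(parzen_log_density(rng.choice(sample), sample)
                   for _ in range(draws))
    return stats[int(alpha * draws)]
```

A point near the bulk of the training data then scores above the threshold, while a distant (anomalous) point scores below it and is rejected.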
In the preferred embodiment, the statistical hypothesis test for our one-class classifier is detailed as follows:
Assume N independent, identically distributed p-dimensional vector samples (one feature set per banknote) x_1, ..., x_N ∈ C, with underlying density function p(x|θ) parameterised by θ. For a new point x_{N+1} the following hypothesis test is posed: H_0: x_{N+1} ∈ C against H_1: x_{N+1} ∉ C, where C denotes the region where the null hypothesis is true and C is defined by p(x|θ). Assuming that the distribution under the alternative hypothesis is uniform, the normalised log-likelihood ratio of the null and alternative hypotheses, which reduces (up to an additive constant) to

λ = log p(x_{N+1} | θ̂),   (1)

may be used as the test statistic for the null hypothesis. In the preferred embodiment, we use the log-likelihood ratio as the test statistic for the validation of newly presented banknotes.
1) Feature vectors with multivariate Gaussian density: Assuming that the feature vectors describing the individual points in the sample are multivariate Gaussian, the test derived from the likelihood ratio (1) above assesses whether each point in the sample shares a common mean (described in Morrison, DF: Multivariate Statistical Methods (third edition), McGraw-Hill, New York, 1990). Assume N independent, identically distributed p-dimensional vector samples x_1, ..., x_N drawn from a multivariate normal distribution C with mean μ and covariance Σ, whose sample estimates are x̄ and S. Denoting a randomly selected member of the sample by x_0, the associated squared Mahalanobis distance

D² = (x_0 - x̄)^T S^{-1} (x_0 - x̄)   (2)

yields a statistic distributed as a central F distribution with p and N-p-1 degrees of freedom:

F = N(N-p-1)D² / ( p[(N-1)² - ND²] ) ~ F_{p, N-p-1}.   (3)
Then, if

$$F > F_{\alpha;\,p,\,N-p-1}, \qquad (4)$$

the hypothesis of a common population mean vector for $x_0$ and the remaining $x_i$ is rejected, where $F_{\alpha;p,N-p-1}$ is the upper $\alpha \cdot 100\%$ point of the F distribution with (p, N−p−1) degrees of freedom.
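As an illustrative aside, the Mahalanobis/F outlier test just described can be sketched in a few lines of Python. This is a minimal sketch, not the patented implementation: the function name and the planted-outlier demo are ours, and SciPy is assumed for the F quantile.

```python
import numpy as np
from scipy import stats

def mahalanobis_outlier_test(X, idx, alpha=0.05):
    """Flag sample X[idx] as an outlier by mapping its squared Mahalanobis
    distance to an F statistic with (p, N - p - 1) degrees of freedom."""
    N, p = X.shape
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)                 # sample covariance (divisor N-1)
    diff = X[idx] - mean
    d2 = float(diff @ np.linalg.solve(cov, diff)) # squared Mahalanobis distance
    f = (N - p - 1) * N * d2 / (p * ((N - 1) ** 2 - N * d2))
    crit = stats.f.ppf(1 - alpha, p, N - p - 1)   # upper alpha point of F
    return d2, f, f > crit

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[0] = 8.0                                        # plant a gross outlier
d2, f, reject = mahalanobis_outlier_test(X, 0)    # reject should be True
```

For an in-sample point, $N D^2/(N-1)^2 \le 1$, so the denominator of the F statistic stays positive; a gross outlier pushes F far beyond the critical value.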
Now suppose $x_0$ is chosen as the observation with the largest $D^2$ statistic. The distribution of the maximum $D^2$ from a random sample of size N is complicated. However, a conservative approximation to the upper $100\alpha\%$ critical value can be obtained via the first Bonferroni inequality. Therefore, if

$$F > F_{\alpha/N;\,p,\,N-p-1}, \qquad (5)$$

then we may conclude that $x_0$ is an outlier.
In fact, both equation (4) and equation (5) may be used for outlier detection.
When additional data $x_{N+1}$ becomes available, in the design case where the new sample does not form part of the original sample, we can use the following incremental estimates of the mean,

$$\hat{\mu}_{N+1} = \hat{\mu}_N + \frac{x_{N+1} - \hat{\mu}_N}{N+1}, \qquad (6)$$

and the covariance,

$$\hat{C}_{N+1} = \frac{N}{N+1}\,\hat{C}_N + \frac{N}{(N+1)^2}\,(x_{N+1} - \hat{\mu}_N)(x_{N+1} - \hat{\mu}_N)^T. \qquad (7)$$

Using expressions (6) and (7) together with the matrix inversion (Sherman-Morrison) lemma, equation (2) for the reference set of N samples and the (N+1)-th test point becomes

$$D^2_{N+1} = (x_{N+1} - \hat{\mu}_{N+1})^T\,\hat{C}_{N+1}^{-1}\,(x_{N+1} - \hat{\mu}_{N+1}), \qquad (8)$$

which can be evaluated without explicitly re-inverting the covariance matrix.
The new point $x_{N+1}$ can thus be tested using the pooled estimates of the mean $\hat{\mu}_{N+1}$ and covariance $\hat{C}_{N+1}$. Although the multivariate Gaussian assumption on the feature vectors has been found to be a suitable practical choice for many applications, it is often not strictly true in practice. In the following section we abandon this assumption and consider arbitrary densities.
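The incremental mean and covariance updates can be checked numerically against batch estimates. The sketch below is ours (names included) and assumes the maximum-likelihood covariance convention (divisor N), for which updates (6) and (7) hold exactly.

```python
import numpy as np

def update_mean_cov(mean, cov, n, x_new):
    """Incrementally update the ML mean and covariance (divisor n)
    when the (n+1)-th sample x_new arrives: equations (6) and (7)."""
    diff = x_new - mean
    new_mean = mean + diff / (n + 1)
    new_cov = (n / (n + 1)) * cov + (n / (n + 1) ** 2) * np.outer(diff, diff)
    return new_mean, new_cov

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
x_new = rng.normal(size=3)

mean_n = X.mean(axis=0)
cov_n = np.cov(X, rowvar=False, bias=True)       # ML estimate, divisor N
mean_inc, cov_inc = update_mean_cov(mean_n, cov_n, len(X), x_new)

# The incremental estimates match the batch estimates over all N+1 samples
X_full = np.vstack([X, x_new])
assert np.allclose(mean_inc, X_full.mean(axis=0))
assert np.allclose(cov_inc, np.cov(X_full, rowvar=False, bias=True))
```

Avoiding a full recomputation of the mean and covariance for every presented note is what makes the design practical on embedded validator hardware.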
2) Feature vectors with arbitrary density: given finite data samples drawn from an arbitrary density p(x), a probability density estimate $\hat{p}(x)$ can be obtained using any suitable semi-parametric (e.g., Gaussian mixture model) or non-parametric (e.g., Parzen window) density estimation method known in the art. This density can then be used in calculating the log-likelihood ratio (1). Unlike the multivariate Gaussian case, under the null hypothesis there is no analytical distribution for the test statistic λ. To obtain such a distribution, a numerical bootstrap method may be employed to approximate the otherwise non-analytic null distribution of the estimated density, so that thresholds can be established from the resulting empirical distribution. It can be shown that in the limit N → ∞ the ratio can be approximated by

$$\lambda \approx \log \hat{p}_N(x_{N+1}), \qquad (9)$$

where $\hat{p}_N(x_{N+1})$ denotes the probability density of $x_{N+1}$ under the model estimated from the original N samples.
After generating B bootstrap sets of N samples from the reference data set and using them to estimate the parameters of the density distribution, B bootstrap replicates of the test statistic, $\lambda^*_1, \ldots, \lambda^*_B$, can be obtained by randomly selecting an (N+1)-th sample for each set. By sorting the $\lambda^*_b$ in ascending order, a threshold $\lambda_\alpha$ can be defined such that the null hypothesis is rejected at the desired significance level if $\lambda \leq \lambda_\alpha$, where $\lambda_\alpha$ is the j-th smallest bootstrap replicate and $\alpha = j/(B+1)$.
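The bootstrap thresholding procedure can be sketched as follows, using a Gaussian (Parzen-window) kernel density estimate. This is a hedged, one-dimensional toy illustration: the function name, the choice of B, and the data are ours, not from the patent.

```python
import numpy as np
from scipy.stats import gaussian_kde

def bootstrap_threshold(data, alpha=0.05, B=199, seed=0):
    """Estimate the rejection threshold lambda_alpha for the statistic
    lambda = log p_hat(x_new), via B bootstrap replicates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    lams = []
    for _ in range(B):
        boot = rng.choice(data, size=n, replace=True)  # bootstrap resample
        kde = gaussian_kde(boot)                       # Parzen-window estimate
        x_new = rng.choice(data)                       # random (n+1)-th point
        lams.append(float(np.log(kde(x_new))[0]))
    lams.sort()                                        # ascending order
    j = max(1, int(np.floor(alpha * (B + 1))))         # alpha = j / (B + 1)
    return lams[j - 1]                                 # reject H0 if lambda <= this

genuine = np.random.default_rng(42).normal(size=300)   # stand-in "genuine" features
thr = bootstrap_threshold(genuine, alpha=0.05)
# A point far from the genuine data has very low log-density
lam_new = float(np.log(gaussian_kde(genuine)(8.0))[0])
```

In use, `lam_new <= thr` would reject the null hypothesis, i.e. flag the presented note as not belonging to the genuine class.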
Preferably, the method of forming the classifier is repeated for different numbers of segments and verified using an image of a banknote known to be authentic. Then, the number of segments that give the best performance is selected, and the classifier that uses the number of segments is used. We have found that the optimum number of segments is from about 2 to 15, although any suitable number of segments may be used.
As mentioned above, a particular problem relates to identifying and replacing anomalous image elements in an image of a banknote to be authenticated. FIG. 3 is a flow chart of a process of replacing an abnormal image element with decision-neutral data. For each image element (block 300), e.g. a pixel or group of pixels, a distribution is obtained for that image location (block 301). The distribution is estimated for the corresponding location across all the images in a training set of images. As described above, the training set of images may be a plurality of images of genuine banknotes. For example, the distribution may be a pixel luminance profile or a luminance profile of a block of four pixels, or the like, as described above. Preferably, the distribution is the same as that used during the process of forming the segmentation map for the banknote validator as described above. This reduces the computational cost and saves time because the distribution has already been estimated.
A value is then selected from the acquired distribution based on a significance level (also referred to as a confidence level) (block 302). The significance level is related to the significance level of the classifier used in the banknote validator; for example, it may be the same as the significance level used by the classifier. Because the significance level is related to that of the classifier, selecting values in this manner yields decision-neutral data. The value at the anomalous image element is then replaced with the selected value (block 303). By using decision-neutral data in this way, we ensure that the classification result is driven by the remaining, reliable parts of the banknote. This is advantageous over conventional methods, for which missing or unreliable data on genuine banknotes means either many false rejections or a raised false acceptance rate. In this way we can successfully handle damaged, worn, torn or partially discolored notes without modifying the core note validation process; only preprocessing of the banknote image is required. In addition, this is achieved without compromising the false acceptance rate.
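The replacement step can be illustrated with a toy sketch. This assumes a simple per-pixel Gaussian-like model of genuine-note brightness and, as a simplification, uses the per-pixel mean as the decision-neutral value; the patent's method ties the selected value to the classifier's significance level. The function and data below are illustrative, not the patented implementation.

```python
import numpy as np

def replace_with_neutral(image, anomaly_mask, train_images):
    """Replace anomalous pixels with decision-neutral values taken from
    the per-pixel brightness distribution estimated over a training set
    of genuine-note images (simplified here to the per-pixel mean).
    A value drawn from the distribution at the classifier's significance
    level could be used instead, as the patent describes."""
    mu = train_images.mean(axis=0)        # per-pixel mean brightness
    out = image.copy()
    out[anomaly_mask] = mu[anomaly_mask]  # overwrite only flagged pixels
    return out

train = np.random.default_rng(0).normal(120, 10, size=(50, 8, 8))
note = train[0].copy()
mask = np.zeros((8, 8), dtype=bool)
mask[2, 3] = True
note[2, 3] = 255.0                        # simulate a stain on the note
fixed = replace_with_neutral(note, mask, train)
```

The unflagged pixels pass through unchanged, so the downstream validator sees the original evidence everywhere except at the anomaly.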
FIG. 4 is a schematic diagram of apparatus 20 for creating a classifier 22 for banknote validation. It comprises:
an input 21 configured to access a training set of banknote images;
a processor 23 configured to create a segmentation map using the training set images;
a segmenter 24 configured to segment each training set image using a segmentation map;
a feature extractor 25 configured to extract one or more features from each segment of each training set image; and
a classifier former 26 configured to form a classifier using the feature information; wherein the processor is configured to create a segmentation map based on information from all images in the training set, for example by using the spatio-temporal image decomposition described above.
FIG. 5 is a schematic diagram of the banknote validator 31. It comprises:
an input configured to receive at least one image 30 of a banknote to be validated;
the segmentation map 32;
a processor 36 configured to identify anomalies in the image;
an image modifier 37 configured to form a modified image by replacing the identified anomalies with decision-neutral data, the data being neutral with respect to the decisions of the classifier 35;
another processor 33 (which may be integral with the processor 36) configured to segment the image of the banknote using the segmentation map;
a feature extractor 34 configured to extract one or more features from each segment of the banknote image;
a classifier 35 configured to classify the banknote as valid or invalid based on the extracted features; wherein the segmentation map comprises information about the relationship of corresponding image elements across all images in the training set of images of the banknote. Note that the components of FIG. 5 need not be independent of each other and may be integrated.
FIG. 6 is a flow chart of a method of validating a banknote. The method comprises the following steps:
accessing at least one image of the banknote to be validated (block 40);
identifying an anomalous image element (block 41);
replacing the anomalous image element with decision-neutral data (block 42);
access the segmentation map (block 43);
segmenting the image of the banknote using the segmentation map (block 44);
extracting features from each segment of the banknote image (block 45);
classifying the banknote as valid or invalid based on the extracted features using a classifier (block 46);
wherein the segmentation map is formed based on information relating to each image in the training image set of banknotes. The steps of the methods may be performed in any suitable order or combination as is known in the art. The segmentation map implicitly includes information about each image in the training set, since it is formed based on that information. In practice, the segmentation map may be a simple file listing the pixel addresses belonging to each segment.
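Such a pixel-address-list segmentation map, together with a simple per-segment feature (here, mean brightness), might look as follows. The names, the toy 4x4 image, and the choice of feature are ours, for illustration only.

```python
import numpy as np

def segment_features(image, seg_map):
    """Extract one feature (mean brightness) per segment, where the
    segmentation map is stored as {segment_id: list of (row, col)
    pixel addresses}, as the description suggests it may be."""
    feats = []
    for pixels in seg_map.values():
        rows, cols = np.array(pixels).T        # split addresses into index arrays
        feats.append(image[rows, cols].mean()) # mean brightness over the segment
    return np.array(feats)

image = np.arange(16.0).reshape(4, 4)          # toy "banknote" image
seg_map = {0: [(0, 0), (0, 1)],                # segment 0: two pixels, values 0, 1
           1: [(3, 2), (3, 3)]}                # segment 1: two pixels, values 14, 15
feats = segment_features(image, seg_map)       # -> [0.5, 14.5]
```

The resulting feature vector is what the classifier of block 46 would consume.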
Figure 7 is a schematic diagram of a self-service device 51 having a banknote validator 53. It includes:
a device 50 for accepting banknotes;
an imaging device 52 for obtaining a digital image of the banknote;
a processor 54 for replacing anomalous image elements with decision-neutral data; and
the banknote validator 53 as described above.
The methods described herein may be performed on images or other representations of banknotes, which images/representations are of any suitable type. For example, images on the red, blue and green channels or other images as described above.
The segmentation may be formed based on only one type of image, such as the red channel. Alternatively, the segmentation map may be formed based on images of all types (e.g., red, blue, and green channels). Multiple segmentation maps may also be formed, one for each image or combination of image types. For example, there may be three segmentation maps, one for the red channel image, one for the blue channel image, and one for the green channel image. In this case, during validation of a single note, an appropriate segmentation map/classifier is used depending on the type of image selected. Thus, each of the above methods may be modified by using different types of images and corresponding segmentation maps/classifiers.
As with the imaging device, the device for accepting banknotes may be of any suitable type known in the art. Any feature selection algorithm known in the art may be used to select one or more features for the feature extraction step. In addition to the feature information discussed herein, a classifier may also be formed based on specific information relating to a particular denomination or currency of banknote, for example information associated with regions that are particularly data-rich in terms of color, shape, or spatial frequency for a given currency and denomination.
It will be apparent to the skilled person that any of the ranges or device values given herein may be extended or altered without losing the effect.
It should be understood that the above description of the preferred embodiments is given by way of example only and that various modifications may be made by those skilled in the art.
Claims (23)
1. A method of processing an image of a media item, comprising:
(i) identifying anomalies in the image of the media item;
(ii) forming a modified image by replacing the identified anomalies with decision-neutral data, the data being neutral with respect to a decision process that is a pre-specified media item validation process.
2. The method of claim 1, wherein the step of identifying anomalies in the image comprises applying a band pass filter.
3. The method of claim 1, wherein the method comprises: the neutral decision making data is obtained by, for each abnormal image element, acquiring an estimated distribution of image locations throughout all images of a training set of images of media items, and selecting values from the estimated distribution.
4. A method according to claim 3, wherein the value is selected from the estimated distribution based on a significance level, the significance level being a significance level of the pre-specified media item validation process.
5. The method of claim 3, wherein the training set of images of media items includes only images of genuine media items.
6. The method of claim 3, wherein the distribution is estimated based on a pixel brightness profile.
7. The method of claim 1 wherein the pre-specified media item validation process includes using a single class classifier.
8. The method of claim 1, further comprising: providing the modified image as input to the pre-specified media item validation process.
9. The method of claim 1, wherein the anomaly in the image of the media item includes a condition of a damaged, worn, ripped, or partially faded media item.
10. An apparatus for processing an image of a media item, the apparatus comprising:
(i) a processor configured to identify anomalies in an image of the media item;
(ii) an image modifier configured to form a modified image by replacing the identified anomaly with decision-neutral data, the data being neutral with respect to a decision process that is a pre-specified media item validation process.
11. The apparatus of claim 10, wherein the processor comprises a band pass filter for identifying anomalies in the image.
12. The apparatus of claim 10, wherein the image modifier is configured to: the neutral decision making data is obtained by, for each abnormal image element, acquiring an estimated distribution of image locations throughout all images of a training set of images of media items, and selecting values from the estimated distribution.
13. The apparatus of claim 12, wherein the image modifier is configured to: selecting the value from the estimated distribution based on a significance level, the significance level being a significance level of the pre-specified media item validation process.
14. The apparatus of claim 12, wherein the image modifier is configured to estimate the distribution based on a pixel intensity profile.
15. The apparatus of claim 12, wherein the image modifier is configured to: estimate the distribution from a training set of images comprising only images of genuine media items.
16. The apparatus of claim 10, comprising a media item validator, and wherein the image modifier is configured to: input the modified image to the media item validator.
17. The apparatus of claim 16 wherein said media item validator comprises a one-class classifier.
18. The apparatus of claim 10, wherein the anomaly in the image of the media item includes a condition of a damaged, worn, ripped, or partially faded media item.
19. A media item validator comprising:
(i) an input configured to receive at least one image of a media item to be authenticated;
(ii) a processor configured to identify anomalies in an image of the media item;
(iii) an image modifier configured to form a modified image by replacing the identified anomalies with decision-neutral data, the data being neutral with respect to a classifier of the media item validator;
(iv) a segmentation map;
(v) a processor configured to segment an image of the media item using the segmentation map;
(vi) a feature extractor configured to extract one or more features from each segment of the image of the media item;
(vii) a classifier configured to classify the media item based on the extracted features;
wherein the segmentation map comprises information about the relationship of the respective image elements between all images in a training image set of media items.
20. A media item validator as claimed in claim 19 wherein the image modifier is configured to: the neutral decision making data is obtained by, for each abnormal image element, acquiring an estimated distribution of image locations throughout all images of a training set of images of media items, and selecting values from the estimated distribution.
21. A media item validator as claimed in claim 19 wherein the anomalies in the images of the media items include instances of damaged, worn, ripped or partially faded media items.
22. A self-service device comprising:
(i) means for accepting a media item;
(ii) an imaging device for obtaining a digital image of the media item; and
(iii) a media item validator comprising:
(i) an input configured to receive at least one image of a media item to be authenticated;
(ii) a processor configured to identify anomalies in an image of the media item;
(iii) an image modifier configured to form a modified image by replacing the identified anomalies with decision-neutral data, the data being neutral with respect to a classifier of the media item validator;
(iv) a segmentation map;
(v) a processor configured to segment an image of the media item using the segmentation map;
(vi) a feature extractor configured to extract one or more features from each segment of the image of the media item;
(vii) a classifier configured to classify the media item based on the extracted features;
wherein the segmentation map comprises information about the relationship of the respective image elements between all images in a training image set of media items.
23. The apparatus of claim 22, wherein the anomaly in the image of the media item includes a condition of a damaged, worn, ripped, or partially faded media item.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US30553705A | 2005-12-16 | 2005-12-16 | |
US11/305,537 | 2005-12-16 | ||
US11/366,147 | 2006-03-02 | ||
US11/366,147 US20070140551A1 (en) | 2005-12-16 | 2006-03-02 | Banknote validation |
PCT/GB2006/004663 WO2007068923A1 (en) | 2005-12-16 | 2006-12-14 | Processing images of media items before validation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101331527A CN101331527A (en) | 2008-12-24 |
CN101331527B true CN101331527B (en) | 2011-07-06 |
Family
ID=40206435
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2006800473583A Expired - Fee Related CN101331526B (en) | 2005-12-16 | 2006-09-26 | Banknote validation |
CN2006800472788A Expired - Fee Related CN101366060B (en) | 2005-12-16 | 2006-12-14 | Media validation |
CN2006800473687A Active CN101366061B (en) | 2005-12-16 | 2006-12-14 | Detecting improved quality counterfeit media items |
CN2006800475165A Expired - Fee Related CN101331527B (en) | 2005-12-16 | 2006-12-14 | Processing images of media items before validation |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2006800473583A Expired - Fee Related CN101331526B (en) | 2005-12-16 | 2006-09-26 | Banknote validation |
CN2006800472788A Expired - Fee Related CN101366060B (en) | 2005-12-16 | 2006-12-14 | Media validation |
CN2006800473687A Active CN101366061B (en) | 2005-12-16 | 2006-12-14 | Detecting improved quality counterfeit media items |
Country Status (1)
Country | Link |
---|---|
CN (4) | CN101331526B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102010055974A1 (en) | 2010-12-23 | 2012-06-28 | Giesecke & Devrient Gmbh | Method and device for determining a class reference data set for the classification of value documents |
CN102110323B (en) * | 2011-01-14 | 2012-11-21 | 深圳市怡化电脑有限公司 | Method and device for examining money |
WO2012145909A1 (en) * | 2011-04-28 | 2012-11-01 | 中国科学院自动化研究所 | Method for detecting tampering with color digital image based on chroma of image |
CN102306415B (en) * | 2011-08-01 | 2013-06-26 | 广州广电运通金融电子股份有限公司 | Portable valuable file identification device |
CN102565074B (en) * | 2012-01-09 | 2014-02-05 | 西安印钞有限公司 | System and method for rechecking images of suspected defective products by small sheet sorter |
US8983168B2 (en) * | 2012-04-30 | 2015-03-17 | Ncr Corporation | System and method of categorising defects in a media item |
US9299225B2 (en) * | 2014-06-23 | 2016-03-29 | Ncr Corporation | Value media dispenser recognition systems |
CN105184954B (en) * | 2015-08-14 | 2018-04-06 | 深圳怡化电脑股份有限公司 | A kind of method and banknote tester for detecting bank note |
DE102015016716A1 (en) * | 2015-12-22 | 2017-06-22 | Giesecke & Devrient Gmbh | Method for transmitting transmission data from a transmitting device to a receiving device for processing the transmission data and means for carrying out the method |
CN108074320A (en) * | 2016-11-10 | 2018-05-25 | 深圳怡化电脑股份有限公司 | A kind of image-recognizing method and device |
CN108806058A (en) * | 2017-05-05 | 2018-11-13 | 深圳怡化电脑股份有限公司 | A kind of paper currency detecting method and device |
CN107705417A (en) * | 2017-10-10 | 2018-02-16 | 深圳怡化电脑股份有限公司 | Recognition methods, device, finance device and the storage medium of bank note version |
CN111480167B (en) * | 2017-12-20 | 2024-10-15 | 艾普维真股份有限公司 | Authentication machine learning for multiple digital representations |
CN110910561B (en) * | 2018-09-18 | 2021-11-16 | 深圳怡化电脑股份有限公司 | Banknote contamination identification method and device, storage medium and financial equipment |
TWI709188B (en) * | 2018-09-27 | 2020-11-01 | 財團法人工業技術研究院 | Fusion-based classifier, classification method, and classification system |
CN111599081A (en) * | 2020-05-15 | 2020-08-28 | 上海应用技术大学 | Method and system for collecting and dividing RMB banknotes |
CN113538809B (en) * | 2021-06-11 | 2023-08-04 | 深圳怡化电脑科技有限公司 | Data processing method and device based on self-service equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6163618A (en) * | 1997-11-21 | 2000-12-19 | Fujitsu Limited | Paper discriminating apparatus |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5729623A (en) * | 1993-10-18 | 1998-03-17 | Glory Kogyo Kabushiki Kaisha | Pattern recognition apparatus and method of optimizing mask for pattern recognition according to genetic algorithm |
US7275161B2 (en) * | 2001-10-30 | 2007-09-25 | Matsushita Electric Industrial Co., Ltd. | Method, system, device and computer program for mutual authentication and content protection |
AU2002332275A1 (en) * | 2002-08-30 | 2004-03-29 | Fujitsu Frontech Limited | Device, method and program for identifying paper sheet
US7194105B2 (en) * | 2002-10-16 | 2007-03-20 | Hersch Roger D | Authentication of documents and articles by moiré patterns |
GB0313002D0 (en) * | 2003-06-06 | 2003-07-09 | Ncr Int Inc | Currency validation |
JP2005018688A (en) * | 2003-06-30 | 2005-01-20 | Asahi Seiko Kk | Banknote recognition device using a reflective optical sensor |
-
2006
- 2006-09-26 CN CN2006800473583A patent/CN101331526B/en not_active Expired - Fee Related
- 2006-12-14 CN CN2006800472788A patent/CN101366060B/en not_active Expired - Fee Related
- 2006-12-14 CN CN2006800473687A patent/CN101366061B/en active Active
- 2006-12-14 CN CN2006800475165A patent/CN101331527B/en not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6163618A (en) * | 1997-11-21 | 2000-12-19 | Fujitsu Limited | Paper discriminating apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN101366061B (en) | 2010-12-08 |
CN101331526B (en) | 2010-10-13 |
CN101331526A (en) | 2008-12-24 |
CN101366060B (en) | 2012-08-29 |
CN101366060A (en) | 2009-02-11 |
CN101331527A (en) | 2008-12-24 |
CN101366061A (en) | 2009-02-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101331527B (en) | Processing images of media items before validation | |
JP5219211B2 (en) | Banknote confirmation method and apparatus | |
US7639858B2 (en) | Currency validation | |
Hassanpour et al. | Feature extraction for paper currency recognition | |
Zeggeye et al. | Automatic recognition and counterfeit detection of Ethiopian paper currency | |
Dhar et al. | Paper currency detection system based on combined SURF and LBP features | |
Sawant et al. | Currency recognition using image processing and minimum distance classifier technique | |
KR101232684B1 (en) | Method for detecting counterfeits of banknotes using Bayesian approach | |
Andrushia et al. | An Intelligent Method for Indian Counterfeit Paper Currency Detection | |
Patgar et al. | An unsupervised intelligent system to detect fabrication in photocopy document using geometric moments and gray level co-occurrence matrix | |
US10438436B2 (en) | Method and system for detecting staining | |
Vishnu et al. | Currency detection using similarity indices method | |
Kumar et al. | Currency Authentication using Color Based Processing | |
CN111627145B (en) | Method and device for identifying fine hollow image-text of image | |
Sánchez-Rivero¹ et al. | Authenticity Assessment of Cuban Banknotes by Combining Deep Learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20110706 Termination date: 20191214 |