
CN1928886A - Iris identification method based on image segmentation and two-dimensional wavelet transformation - Google Patents


Info

Publication number
CN1928886A
CN1928886A (application CN 200610021266)
Authority
CN
China
Prior art keywords
iris
image
recognition
matrix
grayscale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200610021266
Other languages
Chinese (zh)
Other versions
CN100373396C (en)
Inventor
马争
董自信
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CNB200610021266XA priority Critical patent/CN100373396C/en
Publication of CN1928886A publication Critical patent/CN1928886A/en
Application granted granted Critical
Publication of CN100373396C publication Critical patent/CN100373396C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract


Figure 200610021266

The invention relates to an iris recognition method based on image segmentation and two-dimensional wavelet transform, belonging to the technical field of biometric pattern recognition. Iris localization is first split into inner-edge localization and outer-edge localization, with the emphasis on the inner edge. The localized image is then normalized into a fixed-size gray matrix using a Cartesian-to-polar coordinate mapping. The normalized image is segmented twice, finally yielding 18 small regions. A two-dimensional wavelet transform is applied to each small segmented region of the normalized iris image, and the mean and variance of the wavelet coefficients in the principal wavelet channels are extracted as feature values. Matching uses a variance-reciprocal weighted-sum algorithm for the recognition decision, and the three per-region results are weighted with different coefficients to obtain the final recognition result. Compared with existing iris recognition methods, the invention achieves better noise robustness and a higher recognition rate while satisfying the real-time requirements of the system.


Description

Iris identification method based on image segmentation and two-dimensional wavelet transformation
Technical field
The iris identification method based on image segmentation and two-dimensional wavelet transformation belongs to the field of biometric pattern recognition technology, and in particular to iris feature recognition methods.
Background technology
With the development of information technology and the widespread use of e-commerce, information security has become an increasingly important and urgent problem. Biometric identification technology, which can be used for identity verification and information protection, is attracting more and more attention. Biometric identification combines computing with high-technology means such as optics, acoustics and biostatistics, using the intrinsic physiological and behavioral characteristics of the human body to establish personal identity. Iris feature recognition in particular exploits the differences between the iris textures of human eyes.
The human iris carries a wealth of information. Its surface shows textures resembling filaments, spots, whirlpools and crowns. These textures are unique: different people have different iris texture characteristics, and even for the same person the iris textures of the left and right eyes differ. Identification based on these textural features is therefore very accurate. The iris texture is determined mainly during embryonic development, and because the iris is isolated from the outside world by the transparent cornea, a fully developed iris is not easily injured or altered. Iris recognition is therefore highly reliable. In addition, the pupil dilates and contracts with the intensity of light, deforming the iris accordingly; this can be used to verify whether a sample used for recognition comes from a living eye, so iris recognition also has strong resistance to forgery. These advantages of iris texture give identity recognition based on iris features great application prospects in finance, e-commerce, security and many other fields.
Identity recognition based on iris features has developed vigorously abroad and has been progressively commercialized. The first iris recognition system was developed at the University of Cambridge. In 1993, John G. Daugman of Cambridge presented a fairly complete iris recognition method; highly accurate and fast, it is the theoretical foundation of nearly all commercial iris recognition systems in the world today, and his pioneering work made automatic iris recognition possible. In 1996, Wildes et al. at Princeton developed an iris recognition system based on area image registration. In 1998, Boles et al. at the University of Queensland proposed an iris recognition method based on the zero-crossings of the wavelet transform. See: J. G. Daugman. High Confidence Visual Recognition of Persons by a Test of Statistical Independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993, 15(11): 1148-1161; R. P. Wildes, J. C. Asmuth, G. L. Green, et al. A Machine-Vision System for Iris Recognition. Machine Vision and Applications, 1996, 9(1): 1-8; W. Boles, B. Boashash. A Human Identification Technique Using Images of the Iris and Wavelet Transform. IEEE Transactions on Signal Processing, 1998, 46(4): 1185-1188.
Domestic research on iris recognition started late but has also developed rapidly in recent years, though a gap remains compared with the thriving iris recognition industry abroad. At present, the Institute of Automation of the Chinese Academy of Sciences has completed laboratory-stage research on iris recognition and has applied for a patent on an iris capture device. Shanghai Jiao Tong University, Zhejiang University, Huazhong University of Science and Technology and others are also carrying out related research and have achieved certain results. See: Wang Yunhong, Zhu Yong, Tan Tieniu. Identity verification based on iris recognition. Acta Automatica Sinica, 2002, 28(1): 1-10; Ying Rendong, Xu Guozhi. Iris recognition technology based on zero-crossing detection of the wavelet transform. Journal of Shanghai Jiao Tong University, 2002, 36(3): 355-358; Chen Liangzhou, Ye Hunian. Research on a new iris recognition algorithm. Journal of Test and Measurement Technology (North China Institute of Technology), 2000, 14(4): 211-216.
Among the iris recognition methods proposed to date, those that have achieved good recognition results in practical applications, both at home and abroad, include:
1. Daugman's iris recognition method based on phase analysis. It encodes the phase characteristics of the iris by Gabor wavelet filtering. The 2D Gabor function achieves good localization in both the frequency and the spatial domain; in other words, it offers good frequency and orientation selectivity together with spatial localization. By computing 2D Gabor phase coefficients, continuous and discontinuous texture information can be extracted effectively. See: J. G. Daugman. High Confidence Visual Recognition of Persons by a Test of Statistical Independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993, 15(11): 1148-1161.
2. Boles's zero-crossing detection method. It applies a one-dimensional wavelet transform to a sampled curve along a circle concentric with the iris, detects the zero-crossings, and classifies the iris features with two self-defined similarity functions. Its theoretical foundation is Mallat's zero-crossing signal description and reconstruction theory. See: W. Boles, B. Boashash. A Human Identification Technique Using Images of the Iris and Wavelet Transform. IEEE Transactions on Signal Processing, 1998, 46(4): 1185-1188.
3. The iris recognition method of Tan Tieniu, Wang Yunhong et al. based on texture analysis. This method treats the iris as a kind of random texture and extracts local iris features from the viewpoint of texture analysis. It extracts the iris texture with Gabor filtering and a two-dimensional wavelet transform using the Daubechies-4 wavelet basis, and performs feature matching with a variance-reciprocal weighted Euclidean distance, obtaining good recognition results. See: Wang Yunhong, Zhu Yong, Tan Tieniu. Identity verification based on iris recognition. Acta Automatica Sinica, 2002, 28(1): 1-10.
Although the above 3 iris recognition methods all achieve good recognition results, each still has shortcomings. Method 1 attains high identification accuracy, but the iris feature vector it extracts has a high dimension, reaching 2048 dimensions, so it places high demands on the sharpness of the captured iris image. Method 2 overcomes the limitations caused by drift, rotation and scaling in earlier systems, is insensitive to brightness changes and noise, and does not demand high quality of the captured image; but because the algorithm uses only part of the iris texture information, it does not reach a high correct recognition rate. Method 3 adopts a different texture-analysis strategy to extract texture features and runs faster, but its description of the iris texture is relatively coarse and its correct recognition rate is still not very high.
Two comparatively important indices measure the quality of an iris recognition method: recognition rate and running speed. Usually these two indices are in conflict. A good iris recognition method maximizes the recognition rate while still satisfying the real-time running-speed requirement of the system.
Summary of the invention
The iris recognition method proposed by the present invention achieves a higher iris recognition rate while meeting real-time requirements. Image segmentation removes most of the noise in the iris texture image; a two-dimensional wavelet transform then extracts the feature information of the two-dimensional iris texture more comprehensively; finally, a variance-reciprocal weighted feature summation yields the final recognition result.
For convenience of describing the content of the present invention, the following terms are first defined:
1. Iris: the annular tissue of the eyeball lying between the pupil and the sclera. This tissue alone contains unique and abundant texture information that can be used for identification.
2. Inner and outer edges of the iris: the boundary between the iris and the pupil is called the inner edge of the iris; it is a circle. The boundary between the iris and the sclera is called the outer edge of the iris; it is also a circle.
3. Iris image acquisition device: a device that captures the iris image as a digital signal.
4. Grayscale image: an image containing only luminance information and no color information.
5. Median filtering: a nonlinear signal-processing method and a typical low-pass filter. It sorts the pixels of a neighborhood by gray level and takes the median as the output pixel value. Its effect depends on two factors: the spatial extent of the neighborhood and the number of pixels involved in the median calculation.
6. Gray histogram: a function of gray level giving the number of pixels of each gray level in the image, i.e. the frequency with which each gray level occurs. Its abscissa is the gray level and its ordinate is the frequency of that level; it is the most basic statistical characteristic of an image.
7. Image binarization: set a threshold and compare the gray value of every pixel against it; a pixel whose gray value is greater than the threshold is set to 1, and one whose gray value is less than the threshold is set to 0. The process that reduces the image's pixel gray values to only 0 and 1 is called image binarization.
8. Iris localization: the process of accurately locating the geometric position of the annular iris in an image containing the pupil, sclera and eyelashes.
9. Roberts operator: a commonly used edge-detection operator. It approximates the gradient with the vertical and horizontal differences of the image, implemented jointly by the two 2×2 templates $\begin{pmatrix}1&0\\0&-1\end{pmatrix}$ and $\begin{pmatrix}0&1\\-1&0\end{pmatrix}$. Its formula is $R(i,j)=\sqrt{\bigl(f(i,j)-f(i+1,j+1)\bigr)^2+\bigl(f(i,j+1)-f(i+1,j)\bigr)^2}$.
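The Roberts cross formula above can be sketched in a few lines of Python (an illustrative implementation; the function name and toy image are ours, not the patent's):

```python
import math

def roberts(img):
    """Roberts cross gradient magnitude for a 2-D grayscale list-of-lists.
    Output is (H-1) x (W-1) because the 2x2 templates need a lower-right
    neighbor for every output pixel."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w - 1) for _ in range(h - 1)]
    for i in range(h - 1):
        for j in range(w - 1):
            d1 = img[i][j] - img[i + 1][j + 1]       # first 2x2 template
            d2 = img[i][j + 1] - img[i + 1][j]       # second 2x2 template
            out[i][j] = math.sqrt(d1 * d1 + d2 * d2)
    return out

# A vertical step edge: the gradient responds only along the boundary column.
step = [[0, 0, 1, 1]] * 4
print(roberts(step))
```

On the step image, each output row is [0, sqrt(2), 0]: both diagonal differences straddle the edge at the boundary column and vanish elsewhere.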
10. Gray projection P(x) in the x direction: in the gray matrix B(x, y), for each value of x, sum the gray values of all pixels sharing that x, i.e. $P(x)=\sum_y B(x,y)$.
11. Gray projection P(y) in the y direction: in the gray matrix B(x, y), for each value of y, sum the gray values of all pixels sharing that y, i.e. $P(y)=\sum_x B(x,y)$.
12. Circular edge detector: its basic mathematical operator is $\max_{(r,x_0,y_0)}\left|\frac{\partial}{\partial r}\oint_{(r,x_0,y_0)}\frac{I(x,y)}{2\pi r}\,ds\right|$. Its basic idea is to iterate continuously over the values (r, x0, y0) in parameter space; since every parameter triple (r, x0, y0) corresponds to a circle, iterating over (r, x0, y0) amounts to iterating over circles. On the circumference of each circle, the circular integral of the gray values is computed; as (r, x0, y0) varies, the gray-value circular integral varies with it, and the circle for which the integral changes the most is the circle to be detected.
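A discrete sketch of this detector (our illustration under simplifying assumptions: uniform angular sampling, no Gaussian smoothing of the radial derivative, and a synthetic image standing in for a real pupil):

```python
import math

def circle_mean_gray(img, x0, y0, r, n=64):
    """Average gray value sampled at n points on the circle (x0, y0, r)."""
    total = 0.0
    for k in range(n):
        a = 2 * math.pi * k / n
        x = int(round(x0 + r * math.cos(a)))
        y = int(round(y0 + r * math.sin(a)))
        total += img[y][x]
    return total / n

def detect_circle(img, centers, radii):
    """Return the (r, x0, y0) whose circular gray integral changes the most
    between consecutive radii: a discrete version of the detector's idea."""
    best, best_jump = None, -1.0
    for (x0, y0) in centers:
        prev = circle_mean_gray(img, x0, y0, radii[0])
        for r in radii[1:]:
            cur = circle_mean_gray(img, x0, y0, r)
            if abs(cur - prev) > best_jump:
                best_jump, best = abs(cur - prev), (r, x0, y0)
            prev = cur
    return best

# Synthetic "pupil": a dark disk of radius 10 centered at (20, 20) on a
# bright background; the detector should fire just outside the disk edge.
img = [[0 if (x - 20) ** 2 + (y - 20) ** 2 <= 100 else 200
        for x in range(41)] for y in range(41)]
print(detect_circle(img, [(20, 20)], list(range(5, 16))))
```

The biggest jump in the circular integral occurs where the sampling circle first leaves the dark disk, so the detected radius lands at the disk boundary.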
13. Iris image normalization: the position of the iris differs every time an iris image is captured, and the illumination of the acquisition system causes the pupil to dilate or contract, which in turn changes the size of the iris. The located annular iris region therefore cannot be used directly for feature extraction; the annular iris must be converted into a gray matrix image of fixed size. This process is image normalization.
14. Histogram equalization: transforming the histogram of the original image into a uniform one, which increases the dynamic range of the pixel gray values and enhances the contrast of the whole image. After equalization the image is noticeably clearer and the target information of interest is highlighted. Histogram equalization solves the problem of non-uniform illumination affecting the iris result.
15. Two-dimensional wavelet transform: applying a two-dimensional wavelet transform to an iris image extracts detail coefficients in the horizontal, vertical and diagonal directions. It is well suited to the analysis of two-dimensional image signals.
16. Haar wavelet: a relatively simple wavelet basis, whose wavelet function is
$\psi(t)=\begin{cases}1,&t\in[0,1/2)\\-1,&t\in[1/2,1)\end{cases}$.
17. Wavelet channel: a complete wavelet decomposition of an image yields a series of wavelet coefficients; the subimages formed by these decomposition coefficients are usually called wavelet decomposition channels. A first-order two-dimensional wavelet transform of an image yields four wavelet channels: LL, LH, HL, HH. Each channel characterizes the information of the original image at a different spatial frequency and orientation.
18. Mean: the mean value of the wavelet coefficients in a wavelet channel; it characterizes the energy of the channel. Its formula is $E_n=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}|x(i,j)|$.
19. Sample variance: it measures how much the wavelet coefficients in a wavelet channel deviate from the mean. Its formula is
$D_n=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left[\,|x(i,j)|-E_n\right]^2}{M\times N-1}$.
20. System learning stage: the system extracts the complete set of feature values from a captured iris image and stores them in the iris sample database as a standard sample for matching and recognition.
21. System recognition stage: the system reads an iris image of unknown identity and extracts half of the feature values. These feature values are compared with those in the iris sample database according to a matching recognition algorithm, finally yielding the recognition result for the unknown iris image.
The detailed technical scheme of the present invention is as follows:
The iris identification method based on image segmentation and two-dimensional wavelet transformation is characterized by comprising the following steps:
Step 1: acquire the iris image.
An iris image acquisition device captures the iris image, yielding an iris gray matrix H(x, y) that can be processed further.
Step 2: median filtering of the iris image.
Smooth the iris gray matrix H(x, y) obtained in Step 1 to obtain the gray matrix I(x, y). The smoothing uses a nonlinear median filter whose sampling window covers 9 discrete pixels.
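The 9-pixel median filter of this step can be sketched as a 3×3 window (one common realization of a 9-sample window; this pure-Python version and its toy image are our illustration, not the patent's code):

```python
def median3x3(img):
    """3x3 (9-sample) median filter; border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]          # middle of the 9 sorted values
    return out

# An isolated bright speck (impulse noise) is removed by the median.
noisy = [[10] * 5 for _ in range(5)]
noisy[2][2] = 255
print(median3x3(noisy)[2][2])   # -> 10
```

The impulse at the center is outvoted by its eight neighbors, which is exactly why median filtering is preferred over linear smoothing for this kind of noise.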
Step 3: locate the inner edge of the iris. This specifically comprises the following steps:
Step 1): image binarization
Compute the gray histogram of the gray matrix I(x, y) and find the gray value M corresponding to the histogram peak within the gray range 20~125. Adding a safety factor D to M gives the threshold Y usable for binarizing the gray image; D is usually a number between 3 and 7. Binarize the iris gray image I(x, y) with threshold Y to obtain the binary image B(x, y).
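A minimal sketch of this thresholding step (our illustration; following term 7, gray values above the threshold map to 1, so the dark pupil maps to 0):

```python
def binarize(img, lo=20, hi=125, safety=5):
    """Threshold Y = (peak gray level M within [lo, hi]) + safety factor D;
    gray > Y maps to 1, gray <= Y (the dark pupil) maps to 0."""
    hist = [0] * 256
    for row in img:
        for g in row:
            hist[g] += 1
    M = max(range(lo, hi + 1), key=lambda g: hist[g])
    Y = M + safety
    return [[1 if g > Y else 0 for g in row] for row in img]

# Toy frame: a dark "pupil" block (gray 40) on a bright background (gray 180).
img = [[40 if 2 <= x <= 5 and 2 <= y <= 5 else 180
        for x in range(8)] for y in range(8)]
bw = binarize(img)
```

Because the bright background lies outside the 20~125 search range, the peak M falls on the pupil gray level and the safety margin keeps slightly brighter pupil pixels below the threshold.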
Step 2): find the rough center of the inner edge
Compute the gray projection P(x) of the binary matrix B in the x direction and the gray projection P(y) in the y direction. Find the minimum of the one-dimensional array P(x) and the x1 corresponding to that minimum; likewise find the minimum of the one-dimensional array P(y) and the corresponding y1. (x1, y1) is the rough center of the iris inner edge.
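The projection minima can be sketched directly from the definitions of P(x) and P(y) (our illustration; the dark pupil contributes 0s to the binary image, so its row and column sums are smallest):

```python
def rough_center(bw):
    """Rough pupil center (x1, y1): the minimizers of the projections
    P(x) = sum_y B(x, y) and P(y) = sum_x B(x, y) of the binary image."""
    h, w = len(bw), len(bw[0])
    Px = [sum(bw[y][x] for y in range(h)) for x in range(w)]
    Py = [sum(bw[y][x] for x in range(w)) for y in range(h)]
    x1 = min(range(w), key=lambda x: Px[x])
    y1 = min(range(h), key=lambda y: Py[y])
    return x1, y1

# Binary image whose 0-region (the pupil) spans rows/cols 3..5.
bw = [[0 if 3 <= x <= 5 and 3 <= y <= 5 else 1
       for x in range(9)] for y in range(9)]
print(rough_center(bw))   # -> (3, 3)
```

Note that with a flat-bottomed minimum the first minimizing index is returned; for a real circular pupil the projection profile is V-shaped and the minimum sits near the true center column and row.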
Step 3): edge detection
Apply the Roberts operator to the binarized image B(x, y) to detect the image edges, obtaining a binary image BW(x, y) containing the edges.
Step 4): divide the edge points into 4 quadrants
In BW(x, y), set up a coordinate system with the rough inner-edge center (x1, y1) obtained in step 2) as the origin, dividing the scattered edge points into four quadrants. In each quadrant, within the sector between 30 and 50 pixels from the origin, randomly pick 3 edge pixels that differ from one another by more than 10 units; the same selection of 3 pixels is made in the other quadrants. The 4 quadrants thus yield 12 pixels in total.
Step 5): joint accurate localization over the 4 quadrants
From the 12 points of step 4), choose any 3 points not on the same straight line; these 3 points determine a circle. Compute the distances d from the remaining 9 points to this circle. The 12 points can form at most 220 circles, hence 220 distance sums d; among these 220 distances, the circle corresponding to the smallest d is the inner edge of the iris. Denote the center of the obtained iris inner edge by (xa, ya) and its radius by ra.
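The 220-circle search above (C(12, 3) = 220 candidate circles) can be sketched with a circumcircle formula and an exhaustive scan (our illustration, with synthetic edge points in place of the detected ones):

```python
import math
from itertools import combinations

def circle_from_3(p, q, s):
    """Circumcircle (cx, cy, r) through three non-collinear points."""
    ax, ay = p; bx, by = q; sx, sy = s
    d = 2 * (ax * (by - sy) + bx * (sy - ay) + sx * (ay - by))
    if abs(d) < 1e-9:
        return None                      # collinear: no circle
    ux = ((ax**2 + ay**2) * (by - sy) + (bx**2 + by**2) * (sy - ay)
          + (sx**2 + sy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (sx - bx) + (bx**2 + by**2) * (ax - sx)
          + (sx**2 + sy**2) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def best_circle(points):
    """Among all C(n, 3) candidate circles, keep the one with the smallest
    summed distance d from the remaining points to the circle."""
    best, best_d = None, float("inf")
    for trio in combinations(points, 3):
        c = circle_from_3(*trio)
        if c is None:
            continue
        rest = [pt for pt in points if pt not in trio]
        d = sum(abs(math.hypot(pt[0] - c[0], pt[1] - c[1]) - c[2])
                for pt in rest)
        if d < best_d:
            best_d, best = d, c
    return best

# 11 edge points on a circle of radius 30 about the origin, plus one outlier
# (e.g. an eyelash point): the outlier is voted down by the distance sum.
pts = [(round(30 * math.cos(a)), round(30 * math.sin(a)))
       for a in [k * math.pi / 6 for k in range(11)]] + [(5, 5)]
cx, cy, r = best_circle(pts)
```

Any trio containing the outlier fits a circle far from the remaining 9 points, so its distance sum is large and it loses to trios drawn from the true boundary.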
Step 4: locate the outer edge of the iris. This specifically comprises the following steps:
Step 1): limit the iteration range of the circular edge-detection template
In the I(x, y) obtained in Step 2, iterate the circular edge detector to compute gray integral values. During the iteration, the inner-edge center (xa, ya) serves as the initial value of (x0, y0), and the search range of (x0, y0) is limited to the rectangle with vertices (xa-10, ya-10), (xa-10, ya+10), (xa+10, ya-10), (xa+10, ya+10). The search range of r is limited to 70~110 pixel units. During the search, the gray integral is not taken over the whole circle; instead, in the rectangular coordinate system set up at (x0, y0), the gray integral is taken over the circular arcs spanning the angles -45°~45° and 135°~225°.
Step 2): find the outer edge during iteration
Iterate the parameter-space values (r, x0, y0) within the ranges of step 1); the circle whose gray integral changes the most is the outer edge of the iris, and the corresponding parameter values (r, x0, y0) give the outer-edge radius and center of the iris, (rb, xb, yb). The localization result is shown in Figure 1.
Step 5: iris image normalization. This specifically comprises the following steps:
Step 1): establish the coordinate-conversion model and normalize the iris image
Steps 3 and 4 give the circle parameters of the inner and outer edges of the iris, (ra, xa, ya) and (rb, xb, yb). Take the inner-circle center (xa, ya) as the origin of a coordinate system and establish the mathematical model that converts rectangular to polar coordinates (shown in Figure 2). In this model, a ray drawn from the origin at angle θ to the horizontal intersects the inner and outer boundaries at one point each, denoted B(xi, yi) and A(xo, yo) respectively. The coordinates (x, y) of any point between the two intersection points A and B on the ray can be expressed as a linear combination of A(xo(θ), yo(θ)) and B(xi(θ), yi(θ)):
$x(r,\theta)=(1-r)\,x_i(\theta)+r\,x_o(\theta)$, $y(r,\theta)=(1-r)\,y_i(\theta)+r\,y_o(\theta)$.
The iris image can thus be normalized into a gray matrix P(x, y) of fixed size with θ as the horizontal axis and r as the vertical axis; the iris normalization result is shown in Figure 3.
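A sketch of the unwrapping model (our illustration under a simplifying assumption: each boundary point is taken on its own circle at angle θ, which coincides with the patent's ray construction when the two circles are concentric):

```python
import math

def normalize_iris(img, inner, outer, rows=64, cols=1024):
    """Unwrap the annular iris into a rows x cols gray matrix.
    inner = (xa, ya, ra), outer = (xb, yb, rb); row index maps to the
    radial coordinate r in [0, 1], column index to the angle theta."""
    xa, ya, ra = inner
    xb, yb, rb = outer
    out = [[0] * cols for _ in range(rows)]
    for j in range(cols):
        theta = 2 * math.pi * j / cols
        # Boundary points of the ray at angle theta.
        xi = xa + ra * math.cos(theta); yi = ya + ra * math.sin(theta)
        xo = xb + rb * math.cos(theta); yo = yb + rb * math.sin(theta)
        for i in range(rows):
            r = i / (rows - 1)
            x = (1 - r) * xi + r * xo    # linear combination of the
            y = (1 - r) * yi + r * yo    # two boundary intersections
            out[i][j] = img[int(round(y))][int(round(x))]
    return out

# Radial-gradient test image: pixel value = distance from (50, 50).
img = [[int(round(math.hypot(x - 50, y - 50)))
        for x in range(101)] for y in range(101)]
out = normalize_iris(img, (50, 50, 10), (50, 50, 30), rows=4, cols=8)
```

On this concentric test case the first output row samples the inner boundary (value 10) and the last row the outer boundary (value 30), confirming the row axis is the radial coordinate.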
Step 2): histogram equalization
Apply histogram equalization to the gray matrix P(x, y) obtained in step 1) to obtain the normalized gray matrix PI(x, y).
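The equalization step can be sketched via the cumulative histogram (a standard construction, shown here as our illustration rather than the patent's code):

```python
def equalize(img, levels=256):
    """Histogram equalization: map each gray level through the scaled
    cumulative distribution, spreading the levels over the full range."""
    h, w = len(img), len(img[0])
    hist = [0] * levels
    for row in img:
        for g in row:
            hist[g] += 1
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total)
    n = h * w
    lut = [round((levels - 1) * c / n) for c in cdf]
    return [[lut[g] for g in row] for row in img]

# Two crowded gray levels get pushed apart over the full dynamic range.
print(equalize([[10, 10], [20, 20]]))
```

Levels 10 and 20, originally 10 units apart, are remapped to 128 and 255, which is the contrast stretch this step relies on to counter uneven illumination.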
Step 6: first segmentation of the normalized gray matrix
In PI(x, y), take the top 16 × 1024 part of the gray matrix as iris texture region A. Take rows 17~48 with column ranges 1~128, 384~640 and 896~1024, i.e. 3 small blocks, as iris texture region B. Take rows 49~64 with column ranges 1~64, 448~576 and 980~1024, i.e. 3 small blocks, as iris texture region C. The first segmentation of the iris image is shown in Figure 4.
Step 7: second segmentation of the normalized gray matrix.
Divide texture region A of Step 6 into 8 small gray matrices: rows 1~16, with column ranges 1~128, 129~256, 257~384, 385~512, 513~640, 641~768, 769~896 and 897~1024. Divide iris texture region B likewise into 8 small gray matrices: rows 17~32 with column ranges 1~128, 385~512, 513~640 and 897~1024; and rows 33~48 with column ranges 1~128, 385~512, 513~640 and 897~1024. Divide iris texture region C into 2 small gray matrices: rows 49~64 with the two column ranges 1~64 and 961~1024 combined into one block; and rows 49~64 with column range 449~578. The second segmentation of the iris image is shown in Figure 5.
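The two-pass segmentation ends in 18 sub-regions, which can be sketched as slicing operations (our illustration; the patent's column ranges for region C are stated slightly inconsistently between the two passes, so we use 449~576 for the middle C block as an assumption):

```python
def second_segmentation(P):
    """Slice the 64 x 1024 normalized matrix into 18 sub-regions
    (1-based row/column ranges from the patent, converted to 0-based)."""
    def crop(r0, r1, c0, c1):
        return [row[c0 - 1:c1] for row in P[r0 - 1:r1]]
    regions = []
    # Region A: 8 blocks, rows 1~16, in 128-column strips.
    for k in range(8):
        regions.append(crop(1, 16, 128 * k + 1, 128 * (k + 1)))
    # Region B: 8 blocks from row bands 17~32 and 33~48.
    for r0, r1 in ((17, 32), (33, 48)):
        for c0, c1 in ((1, 128), (385, 512), (513, 640), (897, 1024)):
            regions.append(crop(r0, r1, c0, c1))
    # Region C: 2 blocks, rows 49~64; the two edge strips are merged.
    left = crop(49, 64, 1, 64)
    right = crop(49, 64, 961, 1024)
    regions.append([lr + rr for lr, rr in zip(left, right)])
    regions.append(crop(49, 64, 449, 576))      # assumed 128-wide range
    return regions

regs = second_segmentation([[0] * 1024 for _ in range(64)])
print(len(regs))   # -> 18
```

Every sub-region comes out 16 rows tall, and merging the two 64-column edge strips of region C keeps all 18 blocks at a uniform 128-column width, which is convenient for the dyadic wavelet transform of the next step.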
Step 8: apply a two-dimensional wavelet transform to each segmented region
Apply a 3rd-order two-dimensional wavelet transform with the Haar wavelet basis to each small segmented region of Step 7, obtaining 10 wavelet channels in total, denoted LL3, LH3, HL3, HH3, LH2, HL2, HH2, LH1, HL1, HH1. The three channels HH1, HH2, HH3 represent the information of the iris image at jointly high horizontal and vertical frequency; they contain a large amount of noise and are unfavorable for extracting iris features. These three channels are discarded and only the remaining 7 channels are kept, as shown in Figure 6.
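A minimal average-normalized Haar decomposition sketch (our illustration; production code such as PyWavelets uses orthonormal scaling, and the LH/HL naming convention for horizontal versus vertical detail varies between texts):

```python
def haar2d(block):
    """One level of an average-based 2-D Haar transform: returns the
    quarter-size LL, LH, HL, HH subimages (even dimensions assumed)."""
    h, w = len(block), len(block[0])
    half_h, half_w = h // 2, w // 2
    LL = [[0.0] * half_w for _ in range(half_h)]
    LH = [[0.0] * half_w for _ in range(half_h)]
    HL = [[0.0] * half_w for _ in range(half_h)]
    HH = [[0.0] * half_w for _ in range(half_h)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = block[i][j], block[i][j + 1]
            c, d = block[i + 1][j], block[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4   # local average
            LH[i // 2][j // 2] = (a - b + c - d) / 4   # one detail direction
            HL[i // 2][j // 2] = (a + b - c - d) / 4   # the other direction
            HH[i // 2][j // 2] = (a - b - c + d) / 4   # diagonal detail
    return LL, LH, HL, HH

def wavelet_channels(block, levels=3):
    """3rd-order decomposition: 3*levels detail channels plus the final
    approximation, i.e. the patent's 10 channels for levels=3."""
    channels = {}
    ll = block
    for n in range(1, levels + 1):
        ll, lh, hl, hh = haar2d(ll)
        channels["LH%d" % n] = lh
        channels["HL%d" % n] = hl
        channels["HH%d" % n] = hh
    channels["LL%d" % levels] = ll
    return channels

# A constant 8x8 block: all detail channels vanish, LL3 keeps the mean.
ch = wavelet_channels([[4.0] * 8 for _ in range(8)])
print(len(ch))   # -> 10
```

Dropping the three HH channels then amounts to deleting the "HH1", "HH2", "HH3" keys, leaving the 7 channels used for feature extraction.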
Step 9: extract the wavelet-coefficient mean and variance as feature values
For each wavelet channel of a small segmented region after the two-dimensional wavelet transform, extract its mean and sample variance, $E_n=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}|x(i,j)|$ and $D_n=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}[\,|x(i,j)|-E_n]^2}{M\times N-1}$, as feature values. Each wavelet channel of a small segmented region thus yields two feature values, so the 7 wavelet channels yield 14. Repeating this process for every small segmented region, with 18 small segmented regions in all, 252 feature values can be extracted.
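The per-channel feature pair can be sketched directly from the E_n and D_n formulas (our illustration; 18 regions × 7 channels × 2 features gives the 252 values):

```python
def channel_features(channel):
    """E_n: mean of the absolute wavelet coefficients; D_n: sample variance
    of the absolute coefficients about E_n (denominator M*N - 1)."""
    vals = [abs(v) for row in channel for v in row]
    n = len(vals)
    E = sum(vals) / n
    D = sum((v - E) ** 2 for v in vals) / (n - 1)
    return E, D

# |coefficients| = {1, 1, 3, 3}: mean 2, sample variance 4/3.
E, D = channel_features([[1, -1], [3, -3]])
print(E, D)
```

Taking absolute values before averaging makes E_n an energy measure, so sign-alternating detail coefficients do not cancel out.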
Step 10: matching and recognition with variance-reciprocal-weighted mean differences. This specifically comprises the following steps:
Step 1): extract the complete feature set as the iris feature sample
In the learning stage of the system, every iris image to be learned is processed through Steps 2 to 9. Each iris image yields 252 feature values, which are stored in the iris sample database as the sample of that iris image for later recognition decisions.
Step 2): extract the mean features for recognition
In the recognition stage of the system, an unknown iris image is processed through Steps 2 to 9, but in Step 9 only the mean En is extracted as a feature value. The iris image used for recognition thus needs only 128 feature values.
Step 3): match the 3 parts separately
During matching, the 252 feature values of a sample are divided according to the three texture regions of Step 6 into 3 parts: EA, EB, EC, where EA, EB and EC are the feature values extracted by transforming iris texture regions A, B and C respectively. Likewise the 128 feature values of the iris image to be recognized are divided into 3 parts: eA, eB, eC. Each of the 3 parts is matched with the variance-reciprocal weighted-sum matching algorithm:
$P_j=\sum_{i=1}^{N}\frac{(e_{ji}-E_{ji})^2}{D_{ji}},\quad j=A,B,C$.
Step 4): weight the 3 recognition results with different coefficients
Step 3) gives PA, PB, PC. Multiplying the result of each part by a weighting coefficient yields the final recognition result P, i.e. P = a·PA + b·PB + c·PC. The weighting coefficients are usually distributed according to the iris texture as a : b : c = 7 : 2 : 1; in the specific implementation of the present invention, a = 0.7, b = 0.2, c = 0.1.
Step 5): set a threshold T for the recognition decision
Set a threshold T. When P < T, the iris image to be recognized and the current iris sample in the database are judged to come from the same eye; when P > T, they are judged to come from different eyes.
Through the above 10 steps, real-time identification can be carried out from captured iris images.
Need to prove:
1, the purpose of introducing safety coefficient is for fear of noise such as introducing eyelashs in to iris image binaryzation process in the step 1) of step 3.What the value of safety coefficient can not be established is excessive or too small, and usually value is 5 more suitable.
2, the purpose that finds the rough center of circle the step 2 of step 3) is in order to be that true origin is set up coordinate system with this center of circle in the step 4) of step 3, and then determines four and can search for round quadrant.
3, the step 3) of step 3 only selects for use the Roberts operator to carry out rim detection in numerous edge detection operators.Be that it is more obvious to become edge in making because the threshold value of binaryzation is selected better.So use simple Roberts operator can accurately detect inward flange, and because its calculating is simple, so also have very high arithmetic speed.
4, in the step 2 of step 3) in calculate the center of circle and the radius that can "ball-park" estimate goes out the iris inward flange according to the accumulation of gray scale.But the reason that also will carry out step 3), step 4), step 5) be since the texture major part of iris all in place, so "ball-park" estimate can influence discrimination near inward flange.The inward flange of iris needs accurately location, so come the accurate localization inward flange by step 3), step 4), step 5).
5, why location iris outer peripheral process is not both because iris outward flange and sclera gray scale difference value are not very big with the method for step 3 location inward flange in the step 4, can not locate fast with the method for binaryzation.And, also be subjected to ciliary interference easily, so outer peripheral location does not need very high degree of accuracy because it is less to distribute near outer peripheral iris region texture.Adopt the circular edge detecting device to locate outward flange and reached the locating accuracy requirement.
6. Normalization in step 5 is needed because the position of the iris differs every time an iris image is captured. In addition, the illumination of the acquisition system causes the pupil to dilate or contract, which changes the size of the iris as well. The located annular iris region therefore cannot be used directly for feature extraction; it must be converted into a gray matrix image of fixed size. Although the absolute position of the iris changes with every capture, the relative positions of the iris texture generally do not, so a polar-coordinate transform is still used to normalize the iris. Because the inner and outer edges of the iris are usually not concentric, this polar transform is not concentric either.
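The unwrapping can be sketched with the linear blend x(r,θ) = (1-r)·xi(θ) + r·x0(θ) between the inner and outer boundary points. This is a minimal sketch with hypothetical names; for brevity the outer boundary point is taken on the outer circle at the same angle from its own center, a simplification of the patent's exact ray-intersection construction (the two circles still need not be concentric):

```python
import math

def normalize_iris(img, inner, outer, height=64, width=1024):
    """Unwrap the annular iris into a fixed-size rectangle.

    inner/outer are (cx, cy, radius). Each output column is one angle
    theta; each row blends linearly between the inner boundary point
    (r = 0) and the outer boundary point (r = 1).
    """
    xa, ya, ra = inner
    xb, yb, rb = outer
    out = [[0] * width for _ in range(height)]
    for j in range(width):
        theta = 2 * math.pi * j / width
        xi = xa + ra * math.cos(theta)   # inner boundary point B(theta)
        yi = ya + ra * math.sin(theta)
        xo = xb + rb * math.cos(theta)   # outer boundary point A(theta)
        yo = yb + rb * math.sin(theta)
        for i in range(height):
            r = i / (height - 1)
            x = int(round((1 - r) * xi + r * xo))
            y = int(round((1 - r) * yi + r * yo))
            if 0 <= y < len(img) and 0 <= x < len(img[0]):
                out[i][j] = img[y][x]
    return out

# Toy image whose gray value equals the x coordinate, so the unwrapped
# samples are easy to predict: inner circle r=5, outer r=10, center (20,20).
img = [[x for x in range(41)] for _ in range(41)]
norm = normalize_iris(img, (20, 20, 5), (20, 20, 10), height=3, width=8)
```

The top row of the output samples the inner boundary and the bottom row the outer boundary, which is exactly the fixed 64 x 1024 layout used in the method.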
7. Step 3) of step 5 enhances the normalized gray image by histogram equalization. This step compensates for the uneven gray-level distribution of the iris image caused by non-uniform illumination during acquisition.
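Classic histogram equalization maps each gray level through the cumulative distribution so that the output levels are spread more evenly. A minimal sketch (hypothetical helper name; integer gray levels assumed):

```python
def hist_equalize(img, levels=256):
    """Histogram equalization: remap gray levels via the cumulative
    distribution so the output histogram is approximately flat."""
    h, w = len(img), len(img[0])
    n = h * w
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    lut, acc = [0] * levels, 0
    for g in range(levels):
        acc += hist[g]                       # cumulative count
        lut[g] = round((levels - 1) * acc / n)
    return [[lut[v] for v in row] for row in img]

# Tiny 2x2 example with 4 gray levels: levels 0 and 1 are stretched apart.
out = hist_equalize([[0, 0], [1, 1]], levels=4)
```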
8. Step 6 performs the first image segmentation of the normalized iris gray matrix, dividing the normalized matrix into three parts A, B and C according to the density of the iris texture distribution.
9. Step 7 is the second image segmentation; building on the first, it further divides the iris image into 18 regions that are less likely to be affected by noise.
10. In step 8, each segmented region yields 10 wavelet channels after a 3-level two-dimensional wavelet transform. The 3 channels containing the diagonal detail are discarded because they are the most susceptible to noise and would degrade the recognition rate.
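A minimal Haar decomposition illustrating the channel bookkeeping (hypothetical helper names; the subband naming follows the convention assumed from the description, first letter = horizontal band, second letter = vertical band, and the diagonal HH bands are dropped at every level):

```python
def haar2d(block):
    """One level of the 2-D Haar transform on an even-sized matrix.
    Returns the four subbands LL, LH, HL, HH, each half-size."""
    h, w = len(block), len(block[0])
    lo = [[(row[2*j] + row[2*j+1]) / 2 for j in range(w // 2)] for row in block]
    hi = [[(row[2*j] - row[2*j+1]) / 2 for j in range(w // 2)] for row in block]
    def vpass(m, sign):   # vertical low-pass (sign=+1) or high-pass (sign=-1)
        return [[(m[2*i][j] + sign * m[2*i+1][j]) / 2 for j in range(w // 2)]
                for i in range(h // 2)]
    return vpass(lo, 1), vpass(lo, -1), vpass(hi, 1), vpass(hi, -1)

def wavelet_channels(block, levels=3):
    """3-level Haar decomposition keeping the 7 channels used by the
    method: LL3 plus LH and HL at each level; HH1..HH3 are discarded."""
    kept = {}
    cur = block
    for lev in range(1, levels + 1):
        ll, lh, hl, hh = haar2d(cur)   # hh is dropped on purpose
        kept['LH%d' % lev] = lh
        kept['HL%d' % lev] = hl
        cur = ll                       # recurse on the low-low band
    kept['LL%d' % levels] = cur
    return kept

# A constant 8x8 block: all detail channels vanish, LL3 keeps the mean.
block = [[4] * 8 for _ in range(8)]
chans = wavelet_channels(block)
```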
11. In step 9, the mean measures the energy of a wavelet channel, and the variance measures how far the wavelet coefficients in the channel deviate from that mean.
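The per-channel statistics E_n and D_n from the claims can be sketched directly (hypothetical helper name; the mean is taken over the absolute coefficients, and the sample variance divides by M·N - 1):

```python
def channel_features(chan):
    """Mean of |coefficients| (channel energy, E_n) and its sample
    variance (spread around the mean, D_n)."""
    vals = [abs(v) for row in chan for v in row]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1) if n > 1 else 0.0
    return mean, var

# Signs are discarded: |1|, |-1|, |3|, |-3| -> mean 2, sample variance 4/3.
mean, var = channel_features([[1, -1], [3, -3]])
```

Applied to each of the 7 retained channels of each of the 18 regions, this yields the 252 feature values stored per iris sample.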
12. In step 2) of step 10, only half of the feature values need to be extracted in the recognition phase because the matching algorithm is a variance-inverse weighted sum. The variances it uses are those of the iris samples stored in the database; the variance features of the image to be identified are not needed. Therefore, when extracting features from the iris image to be identified, only the means of the wavelet coefficients after the two-dimensional wavelet transform are extracted, not the channel variances. This improves the processing speed of iris recognition to a certain extent and improves the real-time behavior of the recognition system.
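The variance-inverse weighted distance P_j from the claims is a one-liner; note that only the probe's means appear, while the variances come from the enrolled sample, which is exactly why recognition extracts half as many features as enrolment (hypothetical function name):

```python
def weighted_distance(probe_means, sample_means, sample_vars):
    """P_j = sum_i (e_i - E_i)^2 / D_i : squared mean differences,
    each down-weighted by the stored sample variance of that channel."""
    return sum((e - E) ** 2 / D
               for e, E, D in zip(probe_means, sample_means, sample_vars))

# Two channels: the first matches exactly, the second differs by 2 with
# stored variance 2, contributing 4/2 = 2 to the distance.
dist = weighted_distance([1.0, 2.0], [1.0, 4.0], [1.0, 2.0])
```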
13. In step 5) of step 10, different thresholds T can be set according to the security requirements: a lower threshold T can be set for applications with higher security requirements, and a higher threshold T for applications with lower security requirements.
The present invention divides iris localization into inner-edge localization and outer-edge localization, with the emphasis on the inner edge. After localization, the image is normalized to a fixed gray matrix using the mapping between rectangular and polar coordinates. The normalized image is then segmented twice, finally into 18 small regions. A two-dimensional wavelet transform is applied to each small segmented region, and the means and variances of the wavelet coefficients of the principal wavelet channels are extracted as feature values. In the matching stage, the 3 sub-regions produced by the first segmentation are each matched with the variance-inverse weighted sum algorithm, giving 3 recognition results; these are then weighted with different confidence coefficients to obtain the final recognition result. The iris recognition algorithm of the present invention achieves high recognition accuracy and good noise immunity, and its fast running speed satisfies the real-time requirements of the system.
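The final decision stage described above can be sketched in a few lines (hypothetical function names; the 7:2:1 default weights mirror the ratio given in claim 5, and T = 2.0 matches the embodiment):

```python
def final_score(pa, pb, pc, a=0.7, b=0.2, c=0.1):
    """Combine the three region distances with confidence weights:
    P = a*PA + b*PB + c*PC, weighting region A (densest texture) most."""
    return a * pa + b * pb + c * pc

def same_eye(p, threshold=2.0):
    """Accept (same eye) when the combined distance falls below T."""
    return p < threshold

# Region A matches well (distance 1.0); B and C are worse but lightly
# weighted, so the combined score stays under the threshold.
p = final_score(1.0, 2.0, 3.0)
```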
The innovations of the present invention are:
1. In the iris localization process, the inner edge is located before the outer edge: the iris image is first binarized, edges are then detected with the Roberts operator, and the inner iris edge is finally located precisely using the idea of the Hough transform. For the outer edge, the circular edge detection template proposed by Daugman is used; the parameters of the already-located inner edge narrow the search range of the circular template and thereby improve the processing speed.
2. The normalized matrix is segmented twice. The first segmentation divides it into 3 parts; the second divides it into 18 small regions. The two segmentations largely avoid the influence of the eyelids and eyelashes and fully prepare the data for the subsequent feature extraction. The normalized gray matrix is also enhanced by histogram equalization, which avoids large brightness differences in the image caused by uneven illumination.
3. In the feature extraction phase, each of the 18 small segmented regions undergoes a 3-level two-dimensional wavelet transform with the Haar wavelet as basis, producing 10 wavelet channels. Of these, the 3 noise-prone channels containing both horizontal and vertical high-frequency information are discarded and the remaining 7 channels are kept. The mean and variance of the two-dimensional wavelet coefficients of these 7 channels are taken as feature values.
4. A variance-inverse weighted sum matching algorithm is adopted. Since the variances it uses are already stored in the database, only the wavelet-coefficient means of the iris to be identified need to be extracted as features at recognition time; that is, the recognition phase extracts only half as many feature values as the learning phase. This accelerates the system to a certain extent and improves its real-time performance.
Description of drawings
Fig. 1 is a schematic diagram of the iris localization result
The original iris image in this figure is an image from the CASIA iris database (version 1.0). It is a gray-level image matrix 280 pixels high and 320 pixels wide. The region enclosed by the two white circles is the annular iris region.
Fig. 2 is a schematic diagram of the coordinate system transformation model
Fig. 3 is the iris normalization result
The annular iris region of Fig. 1 is normalized into a fixed gray matrix; in the figure the matrix is 64 pixels high and 1024 pixels wide.
Fig. 4 shows the first segmentation of the iris image
According to the texture distribution of the iris, the normalized gray matrix is divided into 3 regions. Region A has the densest texture, containing 70% of the whole iris texture; region B contains 20% of the iris texture; region C contains 10%.
Fig. 5 shows the second segmentation of the iris image
According to the likelihood of noise, the image of Fig. 4 is further divided into 18 small regions ((1) to (18) in the figure). These 18 small regions suffer little interference from the eyelids and eyelashes.
Fig. 6 is a schematic diagram of the wavelet channel selection for the two-dimensional transform
The dark parts in the figure are the retained wavelet channels; the white parts are the discarded channels. LL3 is the information of the iris image at horizontal low frequency and vertical low frequency after the 3-level two-dimensional wavelet transform. LH1, LH2, LH3 are the information at horizontal low frequency and vertical high frequency after the 1st-, 2nd- and 3rd-level wavelet transforms, respectively. HL1, HL2, HL3 are the information at horizontal high frequency and vertical low frequency, and HH1, HH2, HH3 are the information at horizontal high frequency and vertical high frequency, after the 1st-, 2nd- and 3rd-level wavelet transforms, respectively.
Fig. 7 is a flow chart of the present invention.
Embodiment
The algorithm of the present invention was tested on the CASIA iris database (version 1.0). We randomly selected 100 groups of iris images from the CASIA database, taking four images per group, 400 iris images in total, for the experiment. In the learning phase, 252 mean and variance feature values were extracted from each of the 4 iris images in a group; the feature values of the 4 images were then averaged and stored in the sample database as the final sample features of that group. In the same way, we extracted and stored the sample features of all 100 groups. In the recognition phase, each of the 400 iris images was matched against every group of sample features in the database, i.e. 40000 (400 x 100) pattern matching operations in all, yielding 40000 matching results P. The threshold T was set according to the empirical values of P. A good correct recognition rate was obtained at T = 2.0, where the correct recognition rate was 98.6%.

Claims (5)

1. An iris recognition method based on image segmentation and two-dimensional wavelet transform, characterized in that it comprises the following steps:
Step 1: Capture the iris image
An iris image is captured by an iris image acquisition device, yielding an iris image gray matrix H(x, y) that can be used for further processing.
Step 2: Median filtering of the iris image
The iris image gray matrix H(x, y) obtained in step 1 is smoothed to obtain the gray matrix I(x, y).
Step 3: Inner iris edge localization, comprising the following steps:
Step 1): Image binarization
Compute the gray histogram of the gray matrix I(x, y) and find the gray value M corresponding to the histogram peak within the gray range 20 to 125. Add a safety factor D to M to obtain the threshold Y for gray image binarization. Binarize the iris gray image I(x, y) with threshold Y to obtain the binarized image B(x, y).
Step 2): Find the rough center of the inner edge
Compute the gray projection P(x) of the binarized matrix B in the x direction and the gray projection P(y) in the y direction. In the one-dimensional array P(x), find its minimum and the corresponding x1; likewise, in the one-dimensional array P(y), find its minimum and the corresponding y1. (x1, y1) is the rough center of the inner iris edge.
Step 3): Edge detection
Apply the Roberts operator to the binarized image B(x, y) to obtain a binary image BW(x, y) containing the edges.
Step 4): Divide the edge points into 4 quadrants
In BW(x, y), establish a coordinate system with the rough inner-edge center (x1, y1) obtained in step 2) as the coordinate origin, dividing the discrete edge points into four quadrants. In each quadrant, within the annular sector between 30 and 50 pixels from the origin, randomly select 3 edge points that are more than 10 pixels apart from one another; doing the same in the other quadrants gives 12 points over the 4 quadrants.
Step 5): Joint precise localization over the 4 quadrants
From the 12 points of step 4), select 3 points that are not collinear, construct the circle through these 3 points, and compute the distance d from each of the remaining 9 points to this circle. The 12 points can form at most 220 circles, hence 220 distances d; the circle corresponding to the smallest of the 220 distances is the inner edge of the iris. Let the center of the obtained inner iris edge be (xa, ya) and its radius ra.
Step 4: Outer iris edge localization, comprising the following steps:
Step 1): Limit the iteration range of the circular edge detection template
In the image I(x, y) obtained in step 2, use a circular edge detector to compute gray integral values iteratively. During the iteration, take the inner-edge center (xa, ya) as the initial value of (x0, y0), restrict the search for (x0, y0) to the rectangle with vertices (xa-10, ya-10), (xa-10, ya+10), (xa+10, ya-10) and (xa+10, ya+10), and restrict the search for r to 70 to 110 pixels. During the search, the gray integral is not taken over the whole circle but only over the circular arcs at angles -45° to 45° and 135° to 225° in the rectangular coordinate system established at (x0, y0).
Step 2): Find the outer edge by iteration
Iterate the values (r, x0, y0) of the parameter space within the range of step 1); the circle with the largest change of the gray integral is the outer edge of the iris, and the corresponding parameter values (r, x0, y0) give the radius and center of the outer iris edge (rb, xb, yb).
Step 5: Iris image normalization, comprising the following steps:
Step 1): Establish the coordinate transformation model and normalize the iris image
With the circle parameters (ra, xa, ya) and (rb, xb, yb) of the inner and outer iris edges obtained in steps 3 and 4, establish a mathematical model converting rectangular coordinates to polar coordinates with the inner-circle center (xa, ya) as the origin of the coordinate system. In this model, a ray from the origin at angle θ to the horizontal intersects the inner and outer boundaries at one point each, denoted B(xi, yi) and A(x0, y0) respectively. The coordinates (x, y) of any point on the ray between the two intersection points A and B can be expressed as a linear combination of A(x0(θ), y0(θ)) and B(xi(θ), yi(θ)):
x(r, θ) = (1 - r) · xi(θ) + r · x0(θ)
y(r, θ) = (1 - r) · yi(θ) + r · y0(θ)
In this way the iris image is normalized into a fixed-size gray matrix P(x, y) with θ as the horizontal axis and r as the vertical axis.
Step 2): Histogram equalization
Perform histogram equalization on the gray matrix P(x, y) obtained in step 1) to obtain the normalized gray matrix PI(x, y).
Step 6: First image segmentation of the normalized gray matrix
In PI(x, y), take the upper 16 x 1024 part of the gray matrix as iris texture region (A); take the 3 small blocks with rows 17 to 48 and columns 1 to 128, 384 to 640 and 896 to 1024 as iris texture region (B); take the 3 small blocks with rows 49 to 64 and columns 1 to 64, 448 to 576 and 980 to 1024 as iris texture region (C).
Step 7: Second image segmentation of the normalized gray matrix
Divide texture region (A) of step 6 into 8 small gray-matrix blocks: rows 1 to 16, with columns 1-128, 129-256, 257-384, 385-512, 513-640, 641-768, 769-896 and 897-1024. Divide iris texture region (B) also into 8 small gray-matrix blocks: rows 17 to 32 with columns 1-128, 385-512, 513-640 and 897-1024, and rows 33 to 48 with columns 1-128, 385-512, 513-640 and 897-1024. Divide iris texture region (C) into 2 small gray-matrix blocks: the two gray-matrix areas with rows 49 to 64 and columns 1-64 and 961-1024, combined into one block; and the gray-matrix area with rows 49 to 64 and columns 449-578.
Step 8: Two-dimensional wavelet transform of each segmented region
Apply a 3-level two-dimensional wavelet transform with the Haar wavelet as the wavelet basis to each small segmented region of step 7, obtaining 10 wavelet channels in total, denoted LL3, LH3, HL3, HH3, LH2, HL2, HH2, LH1, HL1, HH1. Discard the three channels HH1, HH2, HH3 and keep only the remaining 7 channels.
Step 9: Extract the mean and variance of the wavelet coefficients as feature values
For each wavelet channel of a transformed small segmented region, extract its mean and sample variance
En = (1/(M·N)) · Σ(i=1..M) Σ(j=1..N) |x(i, j)|,
Dn = [ Σ(i=1..M) Σ(j=1..N) ( |x(i, j)| - En )² ] / (M·N - 1)
as feature values, so that each wavelet channel of the small segmented region yields two feature values and the 7 wavelet channels yield 14. Repeating this process for every small segmented region extracts 252 feature values in total.
Step 10: Matching with the variance-inverse weighted mean difference, comprising the following steps:
Step 1): Extract the complete features as the iris feature sample
In the learning phase of the system, each iris image to be learned is processed through steps 2 to 9; 252 feature values can be extracted from each iris image, and these feature values are stored in the iris sample database as the sample of that iris image for the later recognition decision.
Step 2): Extract the mean features for recognition
In the recognition phase of the system, an unknown iris image is processed through steps 2 to 9, but in step 9 only the 128 mean values En are extracted for recognition.
Step 3): Matching in 3 parts
In the matching process, divide the 252 feature values of the sample into 3 parts according to the three texture regions of step 6: EA, EB, EC, where EA are the feature values extracted by the transform from iris texture region (A), EB those from iris texture region (B), and EC those from iris texture region (C). Likewise divide the 128 feature values of the iris image to be identified into 3 parts: eA, eB, eC. Perform the matching operation on the 3 parts separately with the variance-inverse weighted sum matching algorithm, i.e.
Pj = Σ(i=1..N) (eji - Eji)² / Dji,  j = A, B, C.
Step 4): Weight the 3 recognition results with different coefficients
Step 3) yields PA, PB, PC; multiplying each part's recognition result by its weighting coefficient gives the final recognition result P, i.e. P = a·PA + b·PB + c·PC.
Step 5): Set a threshold T for the recognition decision
Set a threshold T. When P < T, the iris image to be identified and the current iris image sample in the database are judged to come from the same eye; when P > T, they are judged to come from different eyes.
2. The iris recognition method based on image segmentation and two-dimensional wavelet transform according to claim 1, characterized in that, in the smoothing of the iris image gray matrix H(x, y) obtained in step 1 performed in step 2, the filter is a nonlinear median filter with a sampling window of 9 discrete pixels.
3. The iris recognition method based on image segmentation and two-dimensional wavelet transform according to claim 1, characterized in that, in the image binarization of step 1) of step 3, the safety factor D usually takes a value between 3 and 7.
4. The iris recognition method based on image segmentation and two-dimensional wavelet transform according to claim 3, characterized in that the safety factor D is 5.
5. The iris recognition method based on image segmentation and two-dimensional wavelet transform according to claim 1, characterized in that the weighting coefficients in step 4) of step 10 are usually set according to the iris texture distribution; a specific ratio can be a : b : c = 7 : 2 : 1.
CNB200610021266XA 2006-06-27 2006-06-27 Iris Recognition Method Based on Image Segmentation and 2D Wavelet Transform Expired - Fee Related CN100373396C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200610021266XA CN100373396C (en) 2006-06-27 2006-06-27 Iris Recognition Method Based on Image Segmentation and 2D Wavelet Transform


Publications (2)

Publication Number Publication Date
CN1928886A true CN1928886A (en) 2007-03-14
CN100373396C CN100373396C (en) 2008-03-05

Family

ID=37858846

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200610021266XA Expired - Fee Related CN100373396C (en) 2006-06-27 2006-06-27 Iris Recognition Method Based on Image Segmentation and 2D Wavelet Transform

Country Status (1)

Country Link
CN (1) CN100373396C (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101847260A (en) * 2009-03-25 2010-09-29 索尼公司 Image processing equipment, image processing method and program
CN102081739A (en) * 2011-01-13 2011-06-01 山东大学 Iris characteristic extracting method based on FIR (Finite Impulse Response) filter and downsampling
CN102136072A (en) * 2010-01-21 2011-07-27 索尼公司 Learning apparatus, leaning method and process
CN102324032A (en) * 2011-09-08 2012-01-18 北京林业大学 A texture feature extraction method based on gray level co-occurrence matrix in polar coordinate system
CN103198301A (en) * 2013-04-08 2013-07-10 北京天诚盛业科技有限公司 Iris positioning method and iris positioning device
CN104166848A (en) * 2014-08-28 2014-11-26 武汉虹识技术有限公司 Matching method and system applied to iris recognition
CN104700386A (en) * 2013-12-06 2015-06-10 富士通株式会社 Edge extraction method and device of tongue area
CN103246871B (en) * 2013-04-25 2015-12-02 山东师范大学 A kind of imperfect exterior iris boundary localization method strengthened based on image non-linear
CN105550661A (en) * 2015-12-29 2016-05-04 北京无线电计量测试研究所 Adaboost algorithm-based iris feature extraction method
CN106447600A (en) * 2016-07-06 2017-02-22 河北箱变电器有限公司 Electric power client demand drafting system
CN106910434A (en) * 2017-02-13 2017-06-30 武汉随戈科技服务有限公司 A kind of exhibitions conference service electronics seat card
CN107134025A (en) * 2017-04-13 2017-09-05 奇酷互联网络科技(深圳)有限公司 Iris lock control method and device
CN107195079A (en) * 2017-07-20 2017-09-22 长江大学 A kind of dining room based on iris recognition is swiped the card method and system
CN107895157A (en) * 2017-12-01 2018-04-10 沈海斌 A kind of pinpoint method in low-resolution image iris center
CN108334438A (en) * 2018-03-04 2018-07-27 王昆 The method that intelligence prevents eye injury
CN108470171A (en) * 2018-07-27 2018-08-31 上海聚虹光电科技有限公司 The asynchronous coding comparison method of two dimension
CN109409223A (en) * 2018-09-21 2019-03-01 昆明理工大学 A kind of iris locating method
CN109501721A (en) * 2017-09-15 2019-03-22 南京志超汽车零部件有限公司 A kind of vehicle user identifying system based on iris recognition
CN110619272A (en) * 2019-08-14 2019-12-27 中山市奥珀金属制品有限公司 Iris image segmentation method
CN111161276A (en) * 2019-11-27 2020-05-15 天津中科智能识别产业技术研究院有限公司 Iris normalized image forming method
CN112699874A (en) * 2020-12-30 2021-04-23 中孚信息股份有限公司 Character recognition method and system for image in any rotation direction
CN115166120A (en) * 2022-06-23 2022-10-11 中国科学院苏州生物医学工程技术研究所 A spectral peak identification method, equipment, medium and product
CN115546236A (en) * 2022-11-24 2022-12-30 阿里巴巴(中国)有限公司 Image segmentation method and device based on wavelet transformation

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408977B (en) * 2008-11-24 2012-04-18 东软集团股份有限公司 Method and device for dividing candidate barrier area

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1092372C (en) * 1997-05-30 2002-10-09 王介生 Iris recoganizing method
KR100374707B1 (en) * 2001-03-06 2003-03-04 에버미디어 주식회사 Method of recognizing human iris using daubechies wavelet transform

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101847260B (en) * 2009-03-25 2013-03-27 索尼公司 Image processing apparatus, and image processing method
CN101847260A (en) * 2009-03-25 2010-09-29 索尼公司 Image processing equipment, image processing method and program
CN102136072A (en) * 2010-01-21 2011-07-27 索尼公司 Learning apparatus, leaning method and process
CN102081739A (en) * 2011-01-13 2011-06-01 山东大学 Iris characteristic extracting method based on FIR (Finite Impulse Response) filter and downsampling
CN102081739B (en) * 2011-01-13 2012-07-25 山东大学 Iris characteristic extracting method based on FIR (Finite Impulse Response) filter and downsampling
CN102324032B (en) * 2011-09-08 2013-04-17 北京林业大学 Texture feature extraction method for gray level co-occurrence matrix in polar coordinate system
CN102324032A (en) * 2011-09-08 2012-01-18 北京林业大学 A texture feature extraction method based on gray level co-occurrence matrix in polar coordinate system
CN103198301A (en) * 2013-04-08 2013-07-10 北京天诚盛业科技有限公司 Iris positioning method and iris positioning device
CN103198301B (en) * 2013-04-08 2016-12-28 北京天诚盛业科技有限公司 iris locating method and device
CN103246871B (en) * 2013-04-25 2015-12-02 山东师范大学 A kind of imperfect exterior iris boundary localization method strengthened based on image non-linear
CN104700386A (en) * 2013-12-06 2015-06-10 富士通株式会社 Edge extraction method and device of tongue area
CN104166848B (en) * 2014-08-28 2017-08-29 武汉虹识技术有限公司 A kind of matching process and system applied to iris recognition
CN104166848A (en) * 2014-08-28 2014-11-26 武汉虹识技术有限公司 Matching method and system applied to iris recognition
CN105550661A (en) * 2015-12-29 2016-05-04 北京无线电计量测试研究所 Adaboost algorithm-based iris feature extraction method
CN106447600A (en) * 2016-07-06 2017-02-22 河北箱变电器有限公司 Electric power client demand drafting system
CN106910434A (en) * 2017-02-13 2017-06-30 武汉随戈科技服务有限公司 A kind of exhibitions conference service electronics seat card
CN107134025A (en) * 2017-04-13 2017-09-05 奇酷互联网络科技(深圳)有限公司 Iris lock control method and device
CN107195079A (en) * 2017-07-20 2017-09-22 长江大学 A kind of dining room based on iris recognition is swiped the card method and system
CN109501721A (en) * 2017-09-15 2019-03-22 南京志超汽车零部件有限公司 A kind of vehicle user identifying system based on iris recognition
CN107895157B (en) * 2017-12-01 2020-10-27 沈海斌 Method for accurately positioning iris center of low-resolution image
CN107895157A (en) * 2017-12-01 2018-04-10 沈海斌 A kind of pinpoint method in low-resolution image iris center
CN108334438A (en) * 2018-03-04 2018-07-27 王昆 The method that intelligence prevents eye injury
CN108470171A (en) * 2018-07-27 2018-08-31 上海聚虹光电科技有限公司 The asynchronous coding comparison method of two dimension
CN109409223A (en) * 2018-09-21 2019-03-01 昆明理工大学 Iris locating method
CN110619272A (en) * 2019-08-14 2019-12-27 中山市奥珀金属制品有限公司 Iris image segmentation method
CN111161276A (en) * 2019-11-27 2020-05-15 天津中科智能识别产业技术研究院有限公司 Iris normalized image forming method
CN111161276B (en) * 2019-11-27 2023-04-18 天津中科智能识别产业技术研究院有限公司 Iris normalized image forming method
CN112699874A (en) * 2020-12-30 2021-04-23 中孚信息股份有限公司 Character recognition method and system for image in any rotation direction
CN115166120A (en) * 2022-06-23 2022-10-11 中国科学院苏州生物医学工程技术研究所 A spectral peak identification method, equipment, medium and product
CN115546236A (en) * 2022-11-24 2022-12-30 阿里巴巴(中国)有限公司 Image segmentation method and device based on wavelet transformation

Also Published As

Publication number Publication date
CN100373396C (en) 2008-03-05

Similar Documents

Publication Publication Date Title
CN1928886A (en) Iris identification method based on image segmentation and two-dimensional wavelet transformation
Gangwar et al. IrisSeg: A fast and robust iris segmentation framework for non-ideal iris images
Lau et al. Automatically early detection of skin cancer: Study based on neural network classification
TWI224287B (en) Iris extraction method
CN101266645B (en) A method of iris localization based on multi-resolution analysis
CN1885314A (en) Pre-processing method for iris image
CN101055618A (en) Palm grain identification method based on direction character
CN1710593A (en) A Hand Feature Fusion Authentication Method Based on Feature Relationship Measurement
CN105261015B (en) Eye fundus image blood vessel automatic division method based on Gabor filter
WO2013087026A1 (en) Locating method and locating device for iris
CN102306289A (en) Method for extracting iris features based on pulse coupled neural network (PCNN)
CN1685357A (en) Palmprint recognition method and device
CN102521600A (en) Method and system for identifying white-leg shrimp disease on basis of machine vision
CN1421815A (en) Fingerprint image enhancement method based on knowledge
CN1251142C (en) Contour-based multi-source image registration method under rigid body transformation
CN100351852C (en) Iris recognition method based on wavelet transform and maximum detection
CN1092372C (en) Iris recognizing method
CN110232390B (en) A method of image feature extraction under changing illumination
CN102332098A (en) Method for pre-processing iris image
CN1442823A (en) Individual identity automatic identification system based on iris analysis
CN101034440A (en) Identification method for spherical fruit and vegetables
CN1202490C (en) Iris marking normalization process method
CN1737821A (en) Image Segmentation and Fingerprint Line Distance Extraction Technology in Automatic Fingerprint Recognition Method
CN1549188A (en) Iris image quality estimation and status discrimination method based on iris image recognition
CN110610455B (en) Effective region extraction method for fisheye image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080305

Termination date: 20100627