
CN102620667B - Method and device for measuring pixel pitch of image sensor based on point-target image splicing technology - Google Patents

Method and device for measuring pixel pitch of image sensor based on point-target image splicing technology Download PDF

Info

Publication number
CN102620667B
CN102620667B (Application CN201210084539.0A)
Authority
CN
China
Prior art keywords
image
point target
pixel
value
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210084539.0A
Other languages
Chinese (zh)
Other versions
CN102620667A (en)
Inventor
谭久彬
赵烟桥
刘俭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN201210084539.0A priority Critical patent/CN102620667B/en
Publication of CN102620667A publication Critical patent/CN102620667A/en
Application granted granted Critical
Publication of CN102620667B publication Critical patent/CN102620667B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The method and device for measuring image sensor pixel pitch based on point-target image stitching belong to the field of measuring length, width or thickness within metering equipment characterized by optical methods. The method images a point target twice under different fields of view, constructs a linear image from the two point-target images, finds the value range of the pixel pitch in the frequency domain, and, using the fact that the measured modulation transfer function curve associated with the pixel pitch best coincides with the theoretical modulation transfer function curve under the least-squares condition, computes the pixel pitch with a search algorithm. In the device, the slider carrying the point target is mounted on a first guide rail and a second guide rail; the motion of the slider on the first guide rail is coordinated with its motion on the second guide rail so that the point target is imaged in focus onto the image sensor surface at any field position. Measuring image sensor pixel pitch with the invention helps reduce the error between single measurement results and thereby improves the repeatability of the measurement results.

Description

Method and device for measuring image sensor pixel pitch based on point-target image stitching
Technical field
The method and device for measuring image sensor pixel pitch based on point-target image stitching belong to the field of measuring length, width or thickness within metering equipment characterized by optical methods, and relate in particular to a frequency-domain method and device for measuring image sensor pixel pitch based on stitching two frames of static point-target images.
Background art
Image sensor pixel pitch is a very important technical index in the field of precision measurement. For example, when a target of known size is imaged through an optical system, the size of the target image can be obtained from the number of image sensor pixels it occupies together with the pixel pitch; dividing the size of the target image by the size of the target then calibrates the lateral magnification of the optical system. Likewise, when performing spectrum analysis on an image, the spectrum can only be obtained accurately if the pixel pitch is known.
However, the product manuals of many image sensors give only the pixel size and not the pixel pitch. For example, the manual of the Shaanxi Weishi MV-1300UM industrial digital camera gives only a pixel size of 5.2 μm × 5.2 μm; the Wuhan Gaode IR113 uncooled focal-plane core has a pixel size of 25 μm × 25 μm and states a fill factor > 80%, yet the pixel pitch still cannot be derived from such an indeterminate fill factor. If such image sensors are to be used to calibrate the lateral magnification of an optical system or to obtain the spectrum of an image, the unknown pixel pitch becomes a technical bottleneck, so measuring the pixel pitch of an image sensor is very important.
1. Background art: image sensor pixel pitch measurement methods
The measurement method that comes to mind first is the theoretical one: project a linear image of known length onto the image sensor surface and divide that length by the number of pixels the image covers to obtain the pixel pitch. Ideally this method has the following two features (a brief code sketch of the counting scheme is given after this list):
1) The gray value of the pixels that the line source covers completely serves as the benchmark gray value.
2) For the edge pixels that the line source does not cover completely, the covered fraction is judged from the ratio of their gray value to the benchmark gray value.
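The idealized counting scheme above can be written down in a few lines. The following is a minimal sketch, assuming a one-dimensional array of gray values across the line image and a known image length in micrometres; the function and variable names are illustrative and not from the patent.

```python
import numpy as np

def naive_pixel_pitch(row, image_length_um):
    """Idealized line-source counting: pitch = known image length / covered pixel count.

    `row` holds the gray values along the line image. Fully covered pixels share
    a benchmark gray value; partially covered edge pixels contribute fractionally
    through the ratio of their gray value to that benchmark.
    """
    row = np.asarray(row, dtype=float)
    benchmark = row.max()                          # benchmark gray value (feature 1)
    full = np.sum(row == benchmark)                # pixels the line source covers completely
    edges = row[(row > 0) & (row < benchmark)]     # edge pixels covered only partially
    covered = full + np.sum(edges / benchmark)     # fractional edge contribution (feature 2)
    return image_length_um / covered

# e.g. naive_pixel_pitch([0, 40, 200, 200, 200, 200, 120, 0], image_length_um=25.0) -> about 5.2
```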
In practice, however, this method suffers from unavoidable disturbing factors that seriously affect the accuracy of the measurement result:
1) If the completely covered pixels are saturated, their gray value stays fixed at 255, the proportional relationship between the edge pixels and the completely covered pixels is lost, and the fractional judgement of the edge pixels covered by the line source becomes wrong.
2) During imaging of the line source, background light, random noise and the dark current of the image sensor inevitably make the gray values of the completely covered pixels differ from one another, which makes the benchmark gray value difficult to determine.
In theory these shortcomings can be compensated by lengthening the line source so that the error is shared among more pixels, but increasing the length of the line source brings new problems:
1) For optical lenses with large distortion, a longer line source may produce a line image that is severely deformed along its length; in that case the error is not shared evenly, and the error in judging the pixel count may instead grow.
2) During alignment of the optical system, the image sensor may respond differently to targets of the same intensity in different fields of view, which again complicates the judgement of the benchmark gray value.
Because of this series of problems, the method is rarely used in practice and has been replaced by another family of methods.
In April 2005, the Journal of the College of Military Engineering, Vol. 17, No. 2, published the article "Measurement of the pixel pitch of a CCD image acquisition system based on the joint Fourier transform", which introduces a method that applies two successive Fourier transforms to two centrosymmetric square targets. First, two centrosymmetric square images are displayed on a spatial light modulator and imaged through a Fourier lens, producing the power spectrum |S(u,v)|² of the image on the CCD surface; after magnification by the image acquisition system by a factor p this becomes the actually recorded power spectrum |S'(u',v')|². |S(u,v)|² and |S'(u',v')|² are then displayed on the spatial light modulator in turn, imaged again through the Fourier lens and magnified by the image acquisition system, yielding the power spectrum o(ξ,η) of |S(u,v)|² and the power spectrum o(ξ',η') of |S'(u',v')|². Both o(ξ,η) and o(ξ',η') are patterns with a brighter square at the centre and darker squares placed symmetrically on either side of the centre, and o(ξ',η') magnified p times is exactly o(ξ,η); therefore the number of image acquisition system pixels D occupied by o(ξ,η) is p times the number of pixels D' occupied by o(ξ',η'), so the magnification p of the image acquisition system can be calibrated from the ratio of D to D'. Once p is determined, |S'(u',v')|² and o(ξ',η') are determined in turn, the distance d' between the two squares in o(ξ',η') can be obtained, and d'/D' finally gives the pixel pitch of the CCD image acquisition system. The shortcoming of this method is that neither o(ξ,η) nor o(ξ',η') guarantees that a square falls exactly within one pixel of the CCD image acquisition system; it may well straddle two pixels. This makes the judgement of D and D' difficult, an error of ±1 easily occurs, the calibrated magnification p is therefore in error, which in turn affects the judgement of the spacing d' of the two bright spots in o(ξ',η'); since d'/D' is used, the judged pixel pitch of the CCD image acquisition system carries an unavoidable error.
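The final arithmetic of this method reduces to two ratios. A minimal sketch follows, assuming D, D' and d' have already been measured; the function name and units are illustrative.

```python
def jft_pixel_pitch(D, D_prime, d_prime_um):
    """Final arithmetic of the joint-Fourier-transform method.

    D, D_prime: pixel counts spanned by o(xi, eta) and o(xi', eta');
    d_prime_um: physical distance between the two squares in o(xi', eta').
    Returns the magnification p and the pixel pitch in micrometres.
    """
    p = D / D_prime                  # magnification of the image acquisition system
    pitch_um = d_prime_um / D_prime  # pixel pitch = d' / D'
    return p, pitch_um
```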
In December 2005, the Journal of the College of Military Engineering, Vol. 17, No. 6, published the article "Calibration of CCD pixel pitch based on circular-hole Fraunhofer diffraction", which introduces a method that calibrates the CCD pixel pitch from the Fraunhofer diffraction pattern of a circular aperture. Collimated light illuminates a circular aperture placed at the focal plane of a collimator objective; the Fraunhofer diffraction distribution of the aperture forms on the collimator objective surface, emerges from the collimator as parallel light, and is incident on the CCD surface, where it forms an image of the diffraction distribution. From the aperture diameter a, the wavelength λ of the incident light and the focal length f of the collimator objective, the diameter of the central bright spot of the Fraunhofer diffraction distribution is L = 1.22fλ/a; with N' the number of CCD pixels spanned by this diameter, the CCD pixel pitch is δ' = L/N'. The shortcoming of this method is that the edge of the central bright spot is not guaranteed to fall exactly within one pixel of the CCD and may well straddle two pixels; this makes the judgement of the pixel count N' difficult, an error of ±1 easily occurs, and the judged CCD pixel pitch carries an unavoidable error.
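As a quick illustration of the two relations L = 1.22fλ/a and δ' = L/N', the following sketch uses assumed numbers that are not taken from the cited article.

```python
def fraunhofer_pixel_pitch(f_mm, wavelength_um, aperture_um, N_prime):
    """Central bright spot diameter L = 1.22 * f * lambda / a; pixel pitch = L / N'."""
    L_um = 1.22 * (f_mm * 1e3) * wavelength_um / aperture_um  # spot diameter in micrometres
    return L_um / N_prime

# Illustrative numbers only (not from the cited article): f = 550 mm collimator,
# lambda = 0.6328 um, a = 200 um aperture, spot diameter spanning N' = 400 pixels.
print(fraunhofer_pixel_pitch(550, 0.6328, 200, 400))   # about 5.3 um per pixel
```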
In June 2008, Acta Photonica Sinica, Vol. 37, No. 6, published the article "Measuring the pixel pitch of a CCD image acquisition system using diffraction from the TFT-LCD pixel structure", which introduces the principle and method of testing the pixel pitch of a CCD image acquisition system with a thin-film-transistor liquid crystal display (TFT-LCD). The TFT-LCD first displays a square-wave signal; because the pixel regions transmit light and the non-pixel regions do not, the panel can be regarded as a two-dimensional grating formed by two orthogonal periodic rectangular gratings. Placed at the front focal plane of a Fourier lens, it produces the spectrum intensity distribution of the two-dimensional grating at the back focal plane of that lens. This distribution contains multiple diffraction orders: the zero-order spectrum is centred at the origin of the spectral plane, each higher order has the same distribution form and width as the zero order but a rapidly decreasing intensity as the order rises, and the centre of the m-th order lies at a distance |mλf/d| from the origin. The spectrum intensity distribution of the two-dimensional grating is captured by the CCD image acquisition system, and from the pixel count N_m between the centre of the m-th order spectrum and the origin the pixel pitch of the CCD image acquisition system is |mλf/(dN_m)|. This method shares the shortcoming of the prior art above: it cannot be guaranteed that the centres of the zero-order and m-th-order spectra fall exactly within one pixel of the CCD, so N_m likewise carries a ±1 error, and the judged pixel pitch of the CCD image acquisition system carries an unavoidable error. To alleviate the ±1 error in N_m, the article averages repeated measurements; without considering the magnification factor, the pixel pitch of the CCD image acquisition system is obtained as:
d_xCCD = (1/6) · | 3λf/(N_-3 · d_x) + 2λf/(N_-2 · d_x) + λf/(N_-1 · d_x) + λf/(N_1 · d_x) + 2λf/(N_2 · d_x) + 3λf/(N_3 · d_x) |
This averaging alleviates the ±1 error in N_m to some extent.
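A short sketch of this order-averaging step, assuming the pixel counts N_m have been read off for the orders m = ±1, ±2, ±3; the function and parameter names are illustrative.

```python
import numpy as np

def tft_lcd_pixel_pitch(wavelength_um, focal_mm, grating_period_um, N_by_order):
    """Average |m * lambda * f / (d_x * N_m)| over the measured diffraction orders,
    which softens the +/-1 pixel error carried by each individual N_m.

    N_by_order maps the order m (e.g. -3..3 excluding 0) to the pixel count from
    the centre of the m-th order spectrum to the origin.
    """
    f_um = focal_mm * 1e3
    estimates = [abs(m) * wavelength_um * f_um / (grating_period_um * abs(N_m))
                 for m, N_m in N_by_order.items()]
    return float(np.mean(estimates))
```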
In October 2008, Photoelectric Technology Application, Vol. 29, No. 5, published the article "Calibration of CCD pixel pitch based on bidirectional shearing interference", which introduces a method that measures the pixel pitch of a CCD image acquisition system from the relative widths of the fringes in the two half-fields of a bidirectional shearing interferogram. Collimated light illuminates a wedge plate W; the reflections from the front and rear surfaces of W are sheared along the +x direction by the wedge, reflected by mirror M_1, transmitted through W and imaged on the CCD detector, and the fringe width of this shearing interferogram is d_1 = λR/(s + 2nβR), with d_1 = N_1 · q. Meanwhile, the light transmitted through the front and rear surfaces of W is reflected by mirror M_2, re-enters W and is sheared along the −x direction, and the fringe width of this shearing interferogram is d_2 = λR/(2nβR − s), with d_2 = N_2 · q (the sign of the shear displacement term differs between the two directions, which is what the formula below relies on). Both equations relate the CCD pixel pitch q to the radius R and the shear displacement s; solving them as a system of equations gives the pixel pitch of the CCD image acquisition system as:
q = λ/(4nβ) · (N_1 + N_2)/(N_1 · N_2)
where λ is the wavelength of the incident light, n is the refractive index of the wedge plate W and β is its wedge angle, all of which are provided by system calibration; N_1 and N_2 are the numbers of CCD image sensor pixels covered by the width of adjacent shear fringes in the +x and −x directions respectively, and measuring N_1 and N_2 yields the pixel pitch q of the CCD image acquisition system. The shortcoming of this method is that adjacent fringes are not guaranteed to cover an integer number of CCD pixels exactly, so N_1 and N_2 both carry a ±1 error and the judged pixel pitch of the CCD image acquisition system carries an unavoidable error.
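A one-function sketch of this relation, assuming λ, n and β have already been calibrated; the names are illustrative.

```python
def shearing_pixel_pitch(wavelength_um, n_glass, wedge_angle_rad, N1, N2):
    """q = lambda / (4 * n * beta) * (N1 + N2) / (N1 * N2), with N1 and N2 the
    pixel counts covered by adjacent fringe widths in the +x and -x shears."""
    return wavelength_um / (4.0 * n_glass * wedge_angle_rad) * (N1 + N2) / (N1 * N2)
```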
The common features of these four methods are:
1) a figure of known shape and size is formed on the image sensor surface;
2) the figure has distinct boundary features;
3) the boundary position of the figure is taken to be the centre of the pixel corresponding to that boundary.
Compared with the ideal measuring method, the advantages of this family of methods are:
1) because the judgement of a benchmark gray value, and the judgement of edge pixels through their proportional relationship with it, are avoided, these methods tolerate larger interference factors;
2) even if the image is saturated to some extent, the judgement of the figure boundary position is unaffected, which lowers the requirements on the image.
However, this family of methods has problems of its own:
The pixel count can only be judged as an integer, so each edge carries an error of ±0.5 pixel and the two edges together carry an error of ±1 pixel; the shorter the line source, the larger the resulting error.
In theory these shortcomings can again be compensated by lengthening the line source so that the error is shared among more pixels, but increasing the length of the line source brings the same new problems:
1) for optical lenses with large distortion, a longer line source may produce a line image that is severely deformed along its length; in that case the error is not shared evenly and the error in judging the pixel count may instead grow;
2) during alignment of the optical system, the image sensor may respond differently to targets of the same intensity in different fields of view, which again complicates the judgement of the benchmark gray value.
The common shortcoming of the existing methods is that, for optical lenses with large distortion, they are unsuitable for measurement over a large field of view, while for measurement over a small field of view the error between single measurement results is large, so the repeatability of the measurement results is poor.
2. Background art: image sensor pixel pitch measuring devices
In the field of testing optical properties, International Patent Classification G01M 11/02, the composition of dynamic-image modulation transfer function measuring devices is disclosed by two invention patents:
Patent No. ZL200810137150.1, granted on 29 September 2010, invention patent "Method and device for measuring the modulation transfer function of a dynamic target", discloses a high-accuracy, multifunctional dynamic-image modulation transfer function measuring device; this device likewise contains a light source, an optical system and an image sensor, with the light source imaged onto the image sensor surface through the optical system.
Patent No. ZL201010252619.3, granted on 11 January 2012, invention patent "Dynamic-image modulation transfer function measuring device", further defines, on the basis of the device disclosed in the preceding patent, the coupling scheme of the optical lens and the synchronization scheme of the measurement.
However, a characteristic feature of both inventions is that the motion trajectory of the light source is a straight line perpendicular to the optical axis. For an optical system with field curvature, the motion of the light source inevitably defocuses the image; if the measuring devices disclosed in these two inventions were applied directly to the present invention, the image blurring and gray-value variation caused by defocus could not be overcome, which would shift the position of the cutoff frequency and affect the accuracy of the measurement result.
Summary of the invention
The present invention addresses the unsuitability of the above existing measuring methods for measurement within a small field of view, and the defocus problem of the existing measuring devices, by proposing a frequency-domain method and a device for measuring image sensor pixel pitch: the method improves the repeatability of measurement results within a small field of view, and the device eliminates the influence of defocus on the measurement results.
The object of the present invention is achieved as follows:
The method for measuring image sensor pixel pitch based on point-target image stitching comprises the following steps (a Python sketch of steps a to m is given immediately after the list of steps):
a. The image sensor images the static point target for the first time, yielding the first frame of the initial static point-target image, and the pixel coordinate position (x_1, y_1) of the point-target image is extracted;
b. The point target is moved along the row or column direction of the image sensor by a displacement h and then kept stationary;
c. With the image sensor exposure time unchanged, the image sensor images the static point target for the second time, yielding the second frame of the initial static point-target image, and the pixel coordinate position (x_2, y_2) of the point-target image is extracted;
d. The point target is removed and, with the image sensor exposure time unchanged, the image sensor images the background, yielding an interference image; the maximum gray value in the interference image is taken as the threshold;
e. In the first frame of the initial static point-target image obtained in step a, the gray values of pixels whose gray value is less than the threshold obtained in step d are set to 0, yielding the first frame of the corrected static point-target image; in the second frame of the initial static point-target image obtained in step c, the gray values of pixels whose gray value is less than the threshold obtained in step d are set to 0, yielding the second frame of the corrected static point-target image;
f. The first and second frames of the corrected static point-target image obtained in step e are superimposed; in the superimposed image, all pixel gray values of the row or column containing the two point-target images are summed and divided by 2 to obtain a new gray value, and the gray values of the pixels covered by the line connecting the pixel coordinate position (x_1, y_1) obtained in step a and the pixel coordinate position (x_2, y_2) obtained in step c are replaced with this new gray value, yielding the constructed point spread function image;
g. From the constructed point spread function image obtained in step f, the full row or column containing the linear light spot is extracted as the constructed line spread function image, which has n elements;
h. The constructed line spread function image obtained in step g is discrete-Fourier-transformed with a sample spacing of 1 and the modulus is taken, yielding the initial modulation transfer function image, which has the same number of elements n as the constructed line spread function image of step g, i.e. n discrete spectral components, denoted M_0, M_1, M_2, ..., M_(n-1) in order of increasing spatial frequency; in this order, the value at which the initial modulation transfer function first reaches a local minimum is M_i, with subscript index i;
i. From the displacement h of step b, after passing through the optical system of lateral magnification β, the distance between the two point-target images is d = hβ;
j. From the distance d between the two point-target images obtained in step i and the modulation transfer function model MTF(f) = |sinc(πfd)| corresponding to the constructed line spread function of step g, the cutoff frequency of the spectrum of the constructed line spread function image of step g is f = 1/d = 1/(hβ);
k. The cutoff frequency f of the spectrum of the constructed line spread function image obtained in step j is set equal, respectively, to the spatial frequencies corresponding to the modulation transfer function values M_(i-1) and M_(i+1) obtained in step h, i.e. f = (i-1)/(n·l_min) and f = (i+1)/(n·l_max), giving the value range of the image sensor pixel pitch as l_min = (i-1)/(nf) = (i-1)d/n = (i-1)hβ/n and l_max = (i+1)/(nf) = (i+1)d/n = (i+1)hβ/n;
l. According to the pixel pitch value range obtained in step k, the pixel pitch interval is divided into N values, denoted l_1, l_2, ..., l_N, where l_1 = l_min and l_N = l_max;
m. From the n modulation transfer function values obtained in step h, K values are chosen as comparison data, denoted M_k1, M_k2, ..., M_kK; the N pixel pitches obtained in step l are each substituted into the following formula: among the N values this formula yields, the pixel pitch l corresponding to the minimum value is the required pixel pitch.
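Steps a to m map directly onto array operations. The following Python sketch implements one reading of them, assuming the point target is shifted along the row direction and both point images lie in the same row; the function name, the number of candidate pitches and the least-squares expression of step m (whose formula is not reproduced in the text) are assumptions, with a plain sum of squared differences against the |sinc(πfd)| model standing in for the missing formula.

```python
import numpy as np

def pixel_pitch_from_point_images(img1, xy1, img2, xy2, background, h_mm, beta,
                                  num_candidates=1000):
    """Sketch of steps a-m for a shift along the row direction.

    img1/img2: the two initial static point-target frames (2-D arrays);
    xy1/xy2: (row, col) pixel positions of the two point-target images;
    background: the interference frame of step d; h_mm, beta: displacement and
    lateral magnification.
    """
    d_mm = h_mm * beta                                   # step i: separation of the two point images

    # steps d-e: threshold both frames with the maximum background gray value
    thr = background.max()
    a = np.where(img1 < thr, 0, img1).astype(float)
    b = np.where(img2 < thr, 0, img2).astype(float)

    # step f: superimpose, derive the new gray value from the target row and
    # fill the segment joining (x1, y1) and (x2, y2) with it -> constructed PSF
    psf = a + b
    row = xy1[0]                                         # both target images lie in this row
    new_gray = psf[row, :].sum() / 2.0
    c0, c1 = sorted((xy1[1], xy2[1]))
    psf[row, c0:c1 + 1] = new_gray

    # step g: the full row containing the linear spot is the constructed LSF
    lsf = psf[row, :]
    n = lsf.size

    # step h: DFT modulus -> discrete MTF; index i of the first local minimum
    mtf = np.abs(np.fft.fft(lsf))
    mtf /= mtf[0]                                        # normalise so MTF(0) = 1
    i = next(k for k in range(1, n - 1)
             if mtf[k] < mtf[k - 1] and mtf[k] < mtf[k + 1])

    # steps j-l: cutoff f = 1/d bounds the pitch; sample candidate pitches
    l_min = (i - 1) * d_mm / n
    l_max = (i + 1) * d_mm / n
    candidates = np.linspace(l_min, l_max, num_candidates)

    # step m: least-squares comparison of the main MTF lobe with |sinc(pi*f*d)|,
    # where the frequency of sample k for candidate pitch l is k / (n * l)
    k_idx = np.arange(i)                                 # M_0 .. M_(i-1), before the first minimum
    def misfit(l):
        theory = np.abs(np.sinc(k_idx * d_mm / (n * l)))  # np.sinc(x) = sin(pi*x)/(pi*x)
        return np.sum((mtf[:i] - theory) ** 2)
    return min(candidates, key=misfit)                   # pixel pitch in mm
```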
In the above method for measuring image sensor pixel pitch based on point-target image stitching, steps e and f may be replaced with:
e'. The first frame of the initial static point-target image obtained in step a and the second frame of the initial static point-target image obtained in step c are superimposed, and the gray values of pixels in the superimposed image whose gray value is less than twice the threshold obtained in step d are set to 0, yielding the corrected superimposed image;
f'. In the corrected superimposed image obtained in step e', all pixel gray values of the row or column containing the two point-target images are summed and divided by 2 to obtain a new gray value, and the gray values of the pixels covered by the line connecting the pixel coordinate position (x_1, y_1) obtained in step a and the pixel coordinate position (x_2, y_2) obtained in step c are replaced with this new gray value, yielding the constructed point spread function image.
The device for measuring image sensor pixel pitch based on point-target image stitching comprises a point target, an optical system, an image sensor, a slider and a first guide rail perpendicular to the optical axis, the point target being imaged onto the image sensor surface through the optical system; the device further comprises a second guide rail along the optical axis direction, the slider carrying the point target is mounted on the first guide rail and the second guide rail, and the motion of the slider on the first guide rail is coordinated with the motion of the slider on the second guide rail so that the point target is imaged in focus onto the image sensor surface at any field position.
The beneficial effects of the invention are:
1) The measuring method adopted by the present invention differs from traditional spatial-domain measuring methods: it takes a point light source as the target, images the point target twice under different fields of view, constructs a linear image from the two point-target images, finds the value range of the pixel pitch in the frequency domain, and, using the fact that the measured modulation transfer function curve associated with the pixel pitch best coincides with the theoretical modulation transfer function curve under the least-squares condition, computes the pixel pitch with a search algorithm. This allows a high cutoff frequency to be obtained even with a short line source, so the error in the cutoff frequency is shared out, the error between single measurement results is reduced, and the repeatability of the measurement results is improved;
2) The measuring device adopted by the present invention includes a second guide rail along the optical axis direction; the slider carrying the point target is mounted on the first and second guide rails, and the motion of the slider on the first guide rail is coordinated with its motion on the second guide rail so that the point target is imaged in focus onto the image sensor surface at any field position. This makes the measured modulation transfer function curve closer to the true curve and the measured cutoff frequency position more accurate, which further reduces the error between single measurement results and improves the repeatability of the measurement results.
Brief description of the drawings
Fig. 1 is a structural schematic of the device for measuring image sensor pixel pitch based on point-target image stitching
Fig. 2 is a flow chart of the method for measuring image sensor pixel pitch based on point-target image stitching
In the figures: 1 point target; 2 optical system; 3 image sensor; 4 slider; 5 first guide rail; 6 second guide rail
Embodiment
The specific embodiments of the invention are described in further detail below with reference to the accompanying drawings.
Fig. 1 is a structural schematic of the device for measuring image sensor pixel pitch based on point-target image stitching. The device comprises a point target 1, an optical system 2, an image sensor 3, a slider 4 and a first guide rail 5 perpendicular to the optical axis; the point target 1 is imaged onto the surface of the image sensor 3 through the optical system 2. The device further comprises a second guide rail 6 along the optical axis direction; the slider 4 carrying the point target 1 is mounted on the first guide rail 5 and the second guide rail 6, and the motion of the slider 4 on the first guide rail 5 is coordinated with the motion of the slider 4 on the second guide rail 6 so that the point target 1 is imaged in focus onto the surface of the image sensor 3 at any field position. The point target 1 is a pinhole of 15 μm diameter, and the lateral magnification of the optical system 2 is 0.0557.
The method for measuring image sensor pixel pitch based on point-target image stitching proceeds as shown in the flow chart of Fig. 2, with the following steps (a short numerical check using the embodiment values follows step m):
a. The image sensor 3 images the static point target 1 for the first time, yielding the first frame of the initial static point-target image, and the pixel coordinate position (x_1, y_1) of the point-target image is extracted;
b. The point target 1 is moved along the row direction of the image sensor 3 by a displacement h = 1.526 mm and then kept stationary;
c. With the exposure time of the image sensor 3 unchanged, the image sensor 3 images the static point target 1 for the second time, yielding the second frame of the initial static point-target image, and the pixel coordinate position (x_2, y_2) of the point-target image is extracted;
d. The point target 1 is removed and, with the exposure time of the image sensor 3 unchanged, the image sensor 3 images the background, yielding an interference image; the maximum gray value in the interference image is taken as the threshold, which here is 10;
e. In the first frame of the initial static point-target image obtained in step a, the gray values of pixels whose gray value is less than the threshold obtained in step d are set to 0, yielding the first frame of the corrected static point-target image; in the second frame of the initial static point-target image obtained in step c, the gray values of pixels whose gray value is less than the threshold obtained in step d are set to 0, yielding the second frame of the corrected static point-target image;
f. The first and second frames of the corrected static point-target image obtained in step e are superimposed; in the superimposed image, all pixel gray values of the row in which the two point-target images lie are summed and divided by 2 to obtain a new gray value, and the gray values of the pixels covered by the line connecting the pixel coordinate position (x_1, y_1) obtained in step a and the pixel coordinate position (x_2, y_2) obtained in step c are replaced with this new gray value, yielding the constructed point spread function image;
g. From the constructed point spread function image obtained in step f, the full row containing the linear light spot is extracted as the constructed line spread function image, which has n = 1280 elements;
h. The constructed line spread function image obtained in step g is discrete-Fourier-transformed with a sample spacing of 1 and the modulus is taken, yielding the initial modulation transfer function image, which has the same number of elements n as the constructed line spread function image of step g, i.e. n discrete spectral components, denoted M_0, M_1, M_2, ..., M_(n-1) in order of increasing spatial frequency; in this order, the value at which the initial modulation transfer function first reaches a local minimum is M_i, with subscript index i;
i. From the displacement h of step b, after passing through the optical system 2 of lateral magnification β, the distance between the two point-target images is d = hβ = 1.526 × 0.0557 = 0.085 mm;
j. From the distance d between the two point-target images obtained in step i and the modulation transfer function model MTF(f) = |sinc(πfd)| corresponding to the constructed line spread function of step g, the cutoff frequency of the spectrum of the constructed line spread function image of step g is f = 1/d = 1/(hβ) = 1/0.085 = 11.7647 lp/mm;
k. The cutoff frequency f of the spectrum of the constructed line spread function image obtained in step j is set equal, respectively, to the spatial frequencies corresponding to the modulation transfer function values M_(i-1) and M_(i+1) obtained in step h, i.e. f = (i-1)/(n·l_min) and f = (i+1)/(n·l_max), giving the value range of the pixel pitch of the image sensor 3 as l_min = (i-1)/(nf) = (i-1)d/n = (i-1)hβ/n and l_max = (i+1)/(nf) = (i+1)d/n = (i+1)hβ/n;
l. According to the pixel pitch value range obtained in step k, the pixel pitch interval is divided into N values, denoted l_1, l_2, ..., l_N, where l_1 = l_min and l_N = l_max;
m. The n modulation transfer function values obtained in step h are plotted as a curve in order of increasing spatial frequency; on this curve, all modulation transfer function values from M_0 up to the first local maximum, not including the first local minimum, are chosen as comparison data, K values in total, denoted M_k1, M_k2, ..., M_kK; the N pixel pitches obtained in step l are each substituted into the following formula: among the N values this formula yields, the pixel pitch l corresponding to the minimum value is the required pixel pitch.
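Plugging the embodiment values into steps i to k gives a quick numerical check; the index i of the first minimum of the measured modulation transfer function is not stated in the text, so the value used below is purely illustrative.

```python
# Embodiment values: h = 1.526 mm, beta = 0.0557, n = 1280 elements.
h_mm, beta, n = 1.526, 0.0557, 1280
d_mm = h_mm * beta            # 0.085 mm between the two point-target images (step i)
f_cut = 1.0 / d_mm            # about 11.76 lp/mm cutoff of the constructed LSF spectrum (step j)
i = 78                        # hypothetical index of the first MTF minimum (not given in the text)
l_min = (i - 1) * d_mm / n    # lower bound of the pixel pitch, mm (step k)
l_max = (i + 1) * d_mm / n    # upper bound of the pixel pitch, mm (step k)
print(f"{f_cut:.4f} lp/mm, {l_min * 1e3:.2f}-{l_max * 1e3:.2f} um")  # ~11.76 lp/mm, ~5.11-5.25 um
```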
Following the above approach, the pixel pitch was measured 100 times; the measurement results obtained are listed in the table below:
In the above method for measuring image sensor pixel pitch based on point-target image stitching, steps e and f may be replaced with:
e'. The first frame of the initial static point-target image obtained in step a and the second frame of the initial static point-target image obtained in step c are superimposed, and the gray values of pixels in the superimposed image whose gray value is less than twice the threshold obtained in step d are set to 0, yielding the corrected superimposed image;
f'. In the corrected superimposed image obtained in step e', all pixel gray values of the row in which the two point-target images lie are summed and divided by 2 to obtain a new gray value, and the gray values of the pixels covered by the line connecting the pixel coordinate position (x_1, y_1) obtained in step a and the pixel coordinate position (x_2, y_2) obtained in step c are replaced with this new gray value, yielding the constructed point spread function image.

Claims (2)

1. A method for measuring image sensor pixel pitch based on point-target image stitching, characterized in that the method comprises the following steps:
a. The image sensor images the static point target for the first time, yielding the first frame of the initial static point-target image, and the pixel coordinate position (x_1, y_1) of the point-target image is extracted;
b. The point target is moved along the row or column direction of the image sensor by a displacement h and then kept stationary;
c. With the image sensor exposure time unchanged, the image sensor images the static point target for the second time, yielding the second frame of the initial static point-target image, and the pixel coordinate position (x_2, y_2) of the point-target image is extracted;
d. The point target is removed and, with the image sensor exposure time unchanged, the image sensor images the background, yielding an interference image; the maximum gray value in the interference image is taken as the threshold;
e. In the first frame of the initial static point-target image obtained in step a, the gray values of pixels whose gray value is less than the threshold obtained in step d are set to 0, yielding the first frame of the corrected static point-target image; in the second frame of the initial static point-target image obtained in step c, the gray values of pixels whose gray value is less than the threshold obtained in step d are set to 0, yielding the second frame of the corrected static point-target image;
f. The first and second frames of the corrected static point-target image obtained in step e are superimposed; in the superimposed image, all pixel gray values of the row or column containing the two point-target images are summed and divided by 2 to obtain a new gray value, and the gray values of the pixels covered by the line connecting the pixel coordinate position (x_1, y_1) obtained in step a and the pixel coordinate position (x_2, y_2) obtained in step c are replaced with this new gray value, yielding the constructed point spread function image;
g. From the constructed point spread function image obtained in step f, the full row or column containing the linear light spot is extracted as the constructed line spread function image, which has n elements;
h. The constructed line spread function image obtained in step g is discrete-Fourier-transformed with a sample spacing of 1 and the modulus is taken, yielding the initial modulation transfer function image, which has the same number of elements n as the constructed line spread function image of step g, i.e. n discrete spectral components, denoted M_0, M_1, M_2, ..., M_(n-1) in order of increasing spatial frequency; in this order, the value at which the initial modulation transfer function first reaches a local minimum is M_i, with subscript index i;
i. From the displacement h of step b, after passing through the optical system of lateral magnification β, the distance between the two point-target images is d = hβ;
j. From the distance d between the two point-target images obtained in step i and the modulation transfer function model MTF(f) = |sinc(πfd)| corresponding to the constructed line spread function of step g, the cutoff frequency of the spectrum of the constructed line spread function image of step g is f = 1/d = 1/(hβ);
k. The cutoff frequency f of the spectrum of the constructed line spread function image obtained in step j is set equal, respectively, to the spatial frequencies corresponding to the modulation transfer function values M_(i-1) and M_(i+1) obtained in step h, i.e. f = (i-1)/(n·l_min) and f = (i+1)/(n·l_max), giving the value range of the image sensor pixel pitch as l_min = (i-1)/(nf) = (i-1)d/n = (i-1)hβ/n and l_max = (i+1)/(nf) = (i+1)d/n = (i+1)hβ/n;
l. According to the pixel pitch value range obtained in step k, the pixel pitch interval is divided into N values, denoted l_1, l_2, ..., l_N, where l_1 = l_min and l_N = l_max;
m. From the n modulation transfer function values obtained in step h, K values are chosen as comparison data, denoted M_k1, M_k2, ..., M_kK; the N pixel pitches obtained in step l are each substituted into the following formula: among the N values this formula yields, the pixel pitch l corresponding to the minimum value is the required pixel pitch.
2. The method for measuring image sensor pixel pitch based on point-target image stitching according to claim 1, characterized in that steps e and f are replaced with:
e'. The first frame of the initial static point-target image obtained in step a and the second frame of the initial static point-target image obtained in step c are superimposed, and the gray values of pixels in the superimposed image whose gray value is less than twice the threshold obtained in step d are set to 0, yielding the corrected superimposed image;
f'. In the corrected superimposed image obtained in step e', all pixel gray values of the row or column containing the two point-target images are summed and divided by 2 to obtain a new gray value, and the gray values of the pixels covered by the line connecting the pixel coordinate position (x_1, y_1) obtained in step a and the pixel coordinate position (x_2, y_2) obtained in step c are replaced with this new gray value, yielding the constructed point spread function image.
CN201210084539.0A 2012-03-17 2012-03-17 Method and device for measuring pixel pitch of image sensor based on point-target image splicing technology Expired - Fee Related CN102620667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210084539.0A CN102620667B (en) 2012-03-17 2012-03-17 Method and device for measuring pixel pitch of image sensor based on point-target image splicing technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210084539.0A CN102620667B (en) 2012-03-17 2012-03-17 Method and device for measuring pixel pitch of image sensor based on point-target image splicing technology

Publications (2)

Publication Number Publication Date
CN102620667A CN102620667A (en) 2012-08-01
CN102620667B true CN102620667B (en) 2014-07-16

Family

ID=46560740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210084539.0A Expired - Fee Related CN102620667B (en) 2012-03-17 2012-03-17 Method and device for measuring pixel pitch of image sensor based on point-target image splicing technology

Country Status (1)

Country Link
CN (1) CN102620667B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102620912B (en) * 2012-03-17 2014-10-15 哈尔滨工业大学 Lateral magnification measuring method for point target image-spliced optical system and lateral magnification measuring device
CN102620668B (en) * 2012-03-17 2014-07-16 哈尔滨工业大学 Method and device for measuring pixel pitch of image sensor based on point-target image splicing technology
CN102607443B (en) * 2012-03-17 2014-12-24 哈尔滨工业大学 Point target image mosaic-based image sensor pixel pitch measurement method
CN102620911B (en) * 2012-03-17 2014-10-15 哈尔滨工业大学 Method and device for measuring transverse magnification of optical system by means of point target image splicing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004037410A (en) * 2002-07-08 2004-02-05 Yucaly Optical Laboratory Inc Modulation transfer function measuring device and modulation transfer function measuring method
CN101354307A (en) * 2008-09-22 2009-01-28 哈尔滨工业大学 Method and device for measuring modulation transfer function of dynamic target
CN102607815A (en) * 2012-03-17 2012-07-25 哈尔滨工业大学 Method and device for measuring lateral magnification of optical system based on jointing of point target images
CN102607443A (en) * 2012-03-17 2012-07-25 哈尔滨工业大学 Point target image mosaic-based image sensor pixel pitch measurement method and point target image mosaic-based image sensor pixel pitch measurement system
CN102620912A (en) * 2012-03-17 2012-08-01 哈尔滨工业大学 Lateral magnification measuring method for point target image-spliced optical system and lateral magnification measuring device
CN102620911A (en) * 2012-03-17 2012-08-01 哈尔滨工业大学 Method and device for measuring transverse magnification of optical system by means of point target image splicing
CN102620668A (en) * 2012-03-17 2012-08-01 哈尔滨工业大学 Method and device for measuring pixel pitch of image sensor based on point-target image splicing technology


Also Published As

Publication number Publication date
CN102620667A (en) 2012-08-01

Similar Documents

Publication Publication Date Title
CN102620668B (en) Method and device for measuring pixel pitch of image sensor based on point-target image splicing technology
US20190271537A1 (en) Multiscale Deformation Measurements Leveraging Tailorable and Multispectral Speckle Patterns
Shi et al. Volumetric calibration enhancements for single-camera light-field PIV
CN102607441B (en) Method and device for measuring space of pixels of image sensor by using constant-speed movable point target
Lu et al. Modulation measuring profilometry with cross grating projection and single shot for dynamic 3D shape measurement
CN102607443B (en) Point target image mosaic-based image sensor pixel pitch measurement method
US20130034209A1 (en) X-ray imaging apparatus
CN102620667B (en) Method and device for measuring pixel pitch of image sensor based on point-target image splicing technology
CN104897083A (en) Three-dimensional rapid measurement method for raster projection based on defocusing phase-unwrapping of projector
CN106989689A (en) The sub-aperture stitching detection technique and device of heavy-calibre planar optical elements face shape
CN102607442B (en) Method and device for measuring space of pixels of image sensor by using constant-speed movable point target
CN102607444B (en) Method and device for measuring space of pixels of image sensor by using linear light source
Xiaobo et al. Research and development of an accurate 3D shape measurement system based on fringe projection: model analysis and performance evaluation
CN102620670B (en) Method and device for measuring pixel pitch of image sensor on basis of line light source
CN102620669B (en) Method and device for measuring pixel pitch of image sensor by utilizing constant moving point target
CN102607815B (en) Method and device for measuring lateral magnification of optical system based on jointing of point target images
CN106468562A (en) A kind of color camera radial direction aberration calibration steps based on absolute phase
Liu et al. High-accuracy measurement for small scale specular objects based on PMD with illuminated film
Ri Accurate and fast out-of-plane displacement measurement of flat objects using single-camera based on the sampling moiré method
CN102620671B (en) Method and device for measuring pixel pitches of image sensor by utilizing line light source
Haist et al. Towards one trillion positions
Berssenbrügge et al. Characterization of the 3D resolution of topometric sensors based on fringe and speckle pattern projection by a 3D transfer function
CN102620913A (en) Method and device for measuring transverse magnification of optical system by means of uniform-speed moving point targets
CN102620911A (en) Method and device for measuring transverse magnification of optical system by means of point target image splicing
CN102620914A (en) Method and device adopting line source for measuring transverse magnification of optical system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140716

CF01 Termination of patent right due to non-payment of annual fee