Summary of the invention
A brief summary of the present invention is given below in order to provide a basic understanding of certain aspects of the invention. It should be appreciated that this summary is not an exhaustive overview of the invention. It is not intended to identify key or essential elements of the invention, nor is it intended to limit the scope of the present invention. Its sole purpose is to present certain concepts in a simplified form as a prelude to the more detailed description discussed later.
To solve the above-mentioned problems, one embodiment of the invention provides a test system. The test system includes a light box, a plurality of light boards and a carrier base. The light boards are respectively disposed in the light box, and each light board is located at a different depth within the light box. The carrier base fixedly carries a lens module under test so that the lens module under test faces the light box and the light boards, and the spacing distances between the lens module under test and the respective light boards are different from one another. When the lens module under test captures a first image frame, the first image frame includes the images of the light boards.
The light boards include at least one first light board, at least one second light board and at least one third light board. The distance between the at least one first light board and the lens module under test is a first distance, the distance between the at least one second light board and the lens module under test is a second distance, and the distance between the at least one third light board and the lens module under test is a third distance; the first distance is greater than the second distance, and the second distance is greater than the third distance.
Each side of the at least one first light board abuts a wall of the light box, the at least one second light board is respectively disposed at at least one corner of the light box, and the at least one third light board does not abut any wall of the light box.
The at least one first light board, the at least one second light board and the at least one third light board each carry a plurality of images, and the images include a plurality of positioning points and an analysis chart. The images further include at least one of a check chart, a color-block chart and an object chart.
The lens module under test includes a plurality of image-capturing lenses. Each image-capturing lens captures a second image frame, each second image frame includes the images of the at least one first light board, the at least one second light board and the at least one third light board, and the first image frame is composed of the second image frames.
Further, the test system also includes a processing unit that calculates an image resolution according to the analysis chart in at least one of the second image frames.
Further, the test system also includes a processing unit that calculates image depth-of-field information according to the angles between the positioning points in at least two of the second image frames and the lens module under test; the image depth-of-field information includes distant-view depth-of-field information, middle-view depth-of-field information and close-view depth-of-field information.
The processing unit also compares the image depth-of-field information with known depth-of-field information to obtain a close-view correction value, a middle-view correction value and a distant-view correction value, corrects the distant-view, middle-view and close-view depth-of-field information with the corresponding correction values, and outputs a full depth-of-field image that includes the corrected second images.
The processing unit also calculates the sharpness of the full depth-of-field image according to the analysis charts in the second images of the full depth-of-field image, performs complementation on the corrected second images to generate a complementary full depth-of-field image, and calculates a depth-of-field distribution map of the complementary full depth-of-field image.
Another embodiment of the present invention provides a test method. The test method comprises the following steps: respectively disposing a plurality of light boards in a light box, each light board being located at a different depth within the light box; and fixedly carrying a lens module under test on a carrier base so that the lens module under test faces the light box and the light boards, the spacing distances between the lens module under test and the respective light boards being different from one another; wherein, when the lens module under test captures a first image frame, the first image frame includes the images of the light boards.
By applying the above embodiments, the present invention can capture a single picture in a single light box containing multiple depths of field and thereby obtain a variety of different depth information, which can be used to test the camera module under test and to correct the camera module under test.
Specific embodiment
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only a part of the embodiments of the present invention, rather than all of them. Elements and features described in one drawing or one embodiment of the invention may be combined with elements and features shown in one or more other drawings or embodiments. It should be noted that, for the sake of clarity, the representation and description of components and processes that are not relevant to the invention and that are known to those of ordinary skill in the art are omitted from the drawings and the explanation. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative labour shall fall within the protection scope of the present invention.
The terms "comprising", "including", "having", "containing" and the like used herein are open-ended terms, meaning including but not limited to. Reference is made to Fig. 1 and Fig. 2 together. Fig. 1 is a schematic diagram of a light box according to an embodiment of the invention. Fig. 2 is a bottom view of the light board accommodating space of the light box depicted in Fig. 1. The test system 100 includes a light box 120, a plurality of light boards 220, 240, 260 and a carrier base 122. The light boards 220, 240, 260 are respectively disposed in the light box 120, and each of the light boards 220, 240, 260 is located at a different depth within the light box 120. The carrier base 122 fixedly carries a lens module under test 140 so that the lens module under test 140 faces the light box 120 and the light boards 220, 240, 260, and the spacing distances between the lens module under test 140 and the respective light boards 220, 240, 260 are different from one another. When the lens module under test 140 captures a first image frame, the first image frame includes the images of the light boards 220, 240, 260.
More specifically, as shown in Fig. 1, in one embodiment the light boards 220, 240, 260 are placed in a light board accommodating space 200; the top of the light board accommodating space 200 is the top 124 of the light box 120, and the camera module under test 140 may be placed on the base 122. However, the arrangement of the camera module under test 140 and the light board accommodating space 200 is not limited thereto; in another embodiment, the camera module under test 140 only needs to be placed at a position from which each of the light boards 220, 240, 260 can be captured. Thus, when the lens module under test 140 captures an image frame toward the light board accommodating space 200, the captured image frame can simultaneously include the images of the light boards 220, 240, 260.
In one embodiment, when viewed along the viewing direction a shown in Fig. 1, looking toward the top 124 of the light box 120, the light board arrangement shown in Fig. 2 can be seen. In this embodiment, the accommodating space 200 includes at least one first light board 220, at least one second light board 240 and at least one third light board 260. Each side of the first light board 220 abuts a wall of the light box 120; for example, the four sides of the rectangular first light board 220 respectively abut the four walls of the light box 120. The second light boards 240 are respectively disposed at at least one corner of the light box; for example, a plurality of second light boards 240 are fixed at the four corners of the light box by suspension or by brackets. The third light board 260 does not abut any wall of the light box; for example, the third light board 260, like the second light boards 240, is fixed by suspension or by a bracket in the middle of the light board accommodating space 200 without abutting any wall.
With this configuration, the light boards do not block one another in the captured image frame, and each can be captured at a size sufficient for subsequent analysis.
Next, reference is made to Figs. 3~5. Fig. 3 is a side view of the light board accommodating space according to an embodiment of the invention. Fig. 4 is a flow chart of the test method according to an embodiment of the invention. Fig. 5 is a schematic diagram of the images on a light board according to an embodiment of the invention.
In step S410, the lens module under test is placed at the position to be measured. In one embodiment, the carrier base 122 is used to fixedly carry the lens module under test so that the lens module under test faces the light box 120 and the light boards 220, 240, 260, wherein the spacing distances between the lens module under test and the respective light boards 220, 240, 260 are different from one another. An embodiment of the spacing distances between the lens module under test and the light boards 220, 240, 260 is described in detail below.
In one embodiment, as shown in Fig. 3, the lens module under test 140 is placed on the carrier base 122; the first distance between the first light board 220 and the lens module under test 140 is d1, the second distance between the second light board 240 and the lens module under test 140 is d2, and the third distance between the third light board 260 and the lens module under test 140 is d3. The first distance d1 is greater than the second distance d2, and the second distance d2 is greater than the third distance d3. In one embodiment, the first distance d1 may be 95~105 centimeters, the second distance d2 may be 55~65 centimeters, and the third distance d3 may be 5~15 centimeters.
With the configuration of this embodiment, when the lens module under test 140 captures the light boards 220, 240, 260, the acquired image frame contains image portions with different depths of field, such as distant view, middle view and close view, which facilitates the subsequent analysis of the test results.
On the other hand, as shown in Fig. 5, the surface of each of the light boards 220, 240, 260 facing the lens module under test 140 carries a plurality of images; the images include a positioning chart 500 and an analysis chart 510. The positioning chart 500 includes a plurality of positioning points. In one embodiment, the images may further include a check chart 520, a color-block chart (not shown), an object chart 530, or other images that can provide test items for the module under test.
In addition, in one embodiment, the lens module under test 140 may be an array camera that includes a plurality of image-capturing lenses. Each of these image-capturing lenses captures a second image frame; each second image frame includes the images of the first light board 220, the second light board 240 and the third light board 260, and these second image frames may compose a first image frame. In one embodiment, four second image frames are stitched into one first image frame. In another embodiment, a processing unit may calculate an image resolution according to the analysis chart in at least one of the second image frames.
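As an illustrative sketch only (not the implementation defined by the invention), the following Python code tiles four second image frames into one first image frame and estimates a sharpness figure from an analysis-chart region using the variance of the Laplacian; the file names and the region coordinates are placeholder assumptions.

```python
import cv2
import numpy as np

def stitch_first_frame(second_frames):
    """Tile four second image frames into one first image frame (2x2 mosaic).
    A naive tiling; a real implementation would register the frames first."""
    top = np.hstack(second_frames[0:2])
    bottom = np.hstack(second_frames[2:4])
    return np.vstack([top, bottom])

def analysis_chart_sharpness(frame, roi):
    """Estimate sharpness (an image-resolution figure) from the analysis-chart
    region roi = (x, y, w, h) using the variance of the Laplacian."""
    x, y, w, h = roi
    patch = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(patch, cv2.CV_64F).var()

# Example: four captures from the array camera's image-capturing lenses
frames = [cv2.imread(f"second_frame_{i}.png") for i in range(4)]
first_frame = stitch_first_frame(frames)
sharpness = analysis_chart_sharpness(frames[0], roi=(100, 100, 200, 200))
```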
Next, returning to step S420 of Fig. 4. In step S420, depth-of-field information is calculated from the second image frames, and the image depth-of-field information is compared with known depth-of-field information to obtain a close-view correction value, a middle-view correction value and a distant-view correction value.
In one embodiment, the lens module under test 140 captures the light boards 220, 240, 260 with each of its image-capturing lenses, so that each image-capturing lens obtains a second image frame, and a processing unit (not shown) calculates image depth-of-field information according to the angles between the plurality of positioning points in at least two of the second image frames and the lens module under test 140. The image depth-of-field information includes distant-view depth-of-field information, middle-view depth-of-field information and close-view depth-of-field information.
In another embodiment, the lens module under test 140 has four image-capturing lenses, and each of the four image-capturing lenses captures all of the light boards 220, 240, 260 to obtain a second image frame. Since each of the light boards 220, 240, 260 includes at least a positioning chart 500 and an analysis chart 510, each of the second image frames captured by the four image-capturing lenses also includes a positioning chart 500 and an analysis chart 510. The processing unit may select the positioning charts 500 in at least two of the second image frames captured by the four image-capturing lenses to calculate image depth information.
A specific embodiment of generating the depth-of-field information is described below. Please refer to Figs. 6~8. Fig. 6 is a flow chart of the sub-steps of step S420 in Fig. 4. Fig. 7 is a schematic diagram of the depth-of-field correction method according to an embodiment of the invention. Fig. 8 is a schematic diagram of the depth-of-field correction method according to another embodiment of the invention.
In step S421, the processing unit obtains, from each second image frame, the coordinate positions of the distant-view positioning points, the middle-view positioning points and the close-view positioning points.
For example, as shown in Fig. 7, the positioning charts 500a~500d belong to four second image frames respectively. In the positioning charts 500a~500d, the circular patterns represent distant-view positioning points, the square patterns represent middle-view positioning points, and the triangular patterns represent close-view positioning points. In one embodiment, the processing unit selects the positioning charts 500a~500d of all the second image frames for computation, and obtains the coordinate positions of the distant-view, middle-view and close-view positioning points from each second image frame. It is understood that, in general, the more second image frames are selected for computation, the higher the precision of the depth calculation in the subsequent steps.
In step S422, the processing unit calculates the angles between the plurality of positioning points in at least two second image frames and the lens module under test 140 to obtain image depth-of-field information. The image depth-of-field information includes distant-view depth-of-field information, middle-view depth-of-field information and close-view depth-of-field information.
For example, as shown in Fig. 8, based on the concept of photographic imaging, when an object is captured from two different shooting positions and the two captured pictures 810, 820 are overlaid to generate a picture 830, the angle x1 between the two imaged positions 811, 821 of an object farther from the image-capturing lenses and a fixed point 831 is smaller, while the angle x2 between the two imaged positions 812, 822 of an object closer to the image-capturing lenses and the fixed point 831 is larger. Accordingly, by this imaging characteristic, a plurality of pictures taken from different shooting positions can be overlaid, and from the angle between the multiple imaged positions of the same object in the overlaid picture and the image-capturing lenses (or a certain fixed point), it can be distinguished whether the object in the image frame belongs to the distant, middle or close view, so that the image depth-of-field information can be calculated.
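In the common stereo-imaging formulation, the angle-based relationship described above is equivalent to depth being inversely proportional to the disparity between the two imaged positions of the same point. The following sketch illustrates that relationship only; the baseline, focal length and pixel coordinates are assumed example values, not parameters specified by the invention.

```python
def point_depth(u_left, u_right, baseline_mm, focal_px):
    """Standard stereo relation: depth is inversely proportional to disparity.
    A wider offset (larger disparity / larger angle) between the two imaged
    positions of the same positioning point means the point is closer."""
    disparity = abs(u_left - u_right)          # pixel offset between the two frames
    if disparity == 0:
        return float("inf")                    # effectively infinite distance
    return baseline_mm * focal_px / disparity  # depth in millimetres

# Example: the same positioning point seen by two image-capturing lenses
depth_mm = point_depth(u_left=412.0, u_right=398.5, baseline_mm=20.0, focal_px=1400.0)
```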
In one embodiment, as shown in Fig. 7, the positioning charts 500a~500d are the positioning charts in the second image frames captured by the four image-capturing lenses respectively. The processing unit may overlay the positioning charts 500a~500d into one merged positioning chart 500e according to the coordinate position of each positioning point in the positioning charts 500a~500d. Based on the aforementioned concepts that, in any two second images, the angle between the respective distant-view positioning points and the lens module under test 140 (or any fixed point) is smaller, while the angle between the respective close-view positioning points and the lens module under test 140 is larger, the distance relationship between each positioning point and the lens module under test 140 can be learned from the merged positioning chart 500e.
For example, in the merged positioning chart 500e, the circular positioning points are denser: the angle between any two circular positioning points and the lens module under test 140 (or any fixed point) is smaller and their density is higher, so the circular positioning points can be judged to represent distant-view positioning points. The density of the square positioning points is intermediate, so the square positioning points can be judged to represent middle-view positioning points. The triangular positioning points are the sparsest: the angle between any two triangular positioning points and the lens module under test 140 is larger and their density is the lowest, so the triangular positioning points can be judged to represent close-view positioning points.
Accordingly, after the positioning charts in the plurality of second images are overlaid, each positioning point can be distinguished as belonging to the distant, middle or close view of the picture according to the density of the positioning points or the angle between two positioning points and the lenses.
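As a minimal sketch of this classification idea, the code below labels a positioning point as distant, middle or close view from how widely its imaged positions spread in the merged positioning chart; the pixel thresholds and coordinates are hypothetical values chosen only for illustration.

```python
import numpy as np

def classify_positioning_point(positions, near_thresh=30.0, mid_thresh=10.0):
    """Label a positioning point as distant/middle/close view from how widely
    its imaged positions spread in the merged positioning chart: a tight
    cluster (small angle) indicates distant view, a wide spread close view."""
    pts = np.asarray(positions, dtype=float)
    spread = np.linalg.norm(pts - pts.mean(axis=0), axis=1).max()  # pixels
    if spread >= near_thresh:
        return "close view"
    if spread >= mid_thresh:
        return "middle view"
    return "distant view"

label = classify_positioning_point([(410, 200), (414, 202), (409, 199), (412, 201)])
```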
In step S423, the processing unit compares the image depth-of-field information with known depth-of-field information to obtain a close-view correction value, a middle-view correction value and a distant-view correction value.
For example, as described in the paragraphs corresponding to Fig. 3 above, the actual distances between the light boards 220, 240, 260 and the lens module under test 140 are known and can serve as the known depth-of-field information. Therefore, the processing unit can compare the image depth-of-field information obtained in step S422 with the known depth-of-field information to calculate the close-view correction value, the middle-view correction value and the distant-view correction value.
In step S424, a storage device (not shown) stores correction parameters, which include the close-view correction value, the middle-view correction value and the distant-view correction value.
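The exact form of the correction values is not fixed by the description above; one simple possibility, sketched below, is to take the ratio of the known distance of each light board to the depth measured in step S422 and store the three ratios as the correction parameters of step S424. All numeric values are illustrative assumptions.

```python
# Known distances between the light boards and the lens module under test (mm);
# the values follow the example ranges given above and are assumptions.
known_depths = {"distant": 1000.0, "middle": 600.0, "close": 100.0}

# Depths measured from the positioning points in step S422 (illustrative values).
measured_depths = {"distant": 968.0, "middle": 611.5, "close": 93.2}

# One simple form of correction value: the ratio of known to measured depth,
# stored per depth range as the correction parameters of step S424.
correction_params = {view: known_depths[view] / measured_depths[view]
                     for view in known_depths}
```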
The processing unit and the storage device may be placed inside the light box 120 or disposed independently outside the light box 120, and are electrically coupled to the lens module under test 140. The processing unit may be implemented by an integrated circuit such as a micro control unit (microcontroller), a microprocessor, a digital signal processor, an application specific integrated circuit (ASIC), or a logic circuit. The storage device is used to store various data and may be, for example, a memory, a hard disk, a flash drive, a memory card, and so on.
Next, returning to step S430 of Fig. 4. In step S430, the distant-view, middle-view and close-view depth-of-field information is corrected with the corresponding distant-view, middle-view and close-view correction values to output a full depth-of-field image, and the full depth-of-field image includes the corrected second images.
In step S440, the processing unit calculates the sharpness of the full depth-of-field image according to the analysis charts in the second images of the full depth-of-field image, performs complementation on the corrected second images to generate a complementary full depth-of-field image, and calculates a depth-of-field distribution map of the complementary full depth-of-field image. The depth-of-field distribution map may be, for example, a histogram of the corresponding depths of field of the complementary full depth-of-field image, or another numerically presented distribution map.
In one embodiment, the processing unit may use at least two corrected second images to perform the complementation; the complementation may be a voting method, the calculation of pixel averages, or another algorithm that allows the images to compensate one another. For example, the processing unit performs complementation with three corrected second images, all of which contain the same object; if the position of this object is identical in only two of the corrected second images, the object position in those two corrected second images prevails, and the object position in the remaining corrected second image is adjusted to be identical to that in the two corrected second images.
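A pixel-level sketch of the voting idea is shown below: where two of the three corrected images agree within a tolerance, the agreeing pair outvotes the third; otherwise the pixel average is used. The tolerance is an assumed value, and the object-position voting described above could be implemented analogously at the object level.

```python
import numpy as np

def complement_by_voting(img_a, img_b, img_c, tol=3.0):
    """Pixel-level voting among three corrected second images: where two images
    agree within tol, the agreeing pair outvotes the third; otherwise the
    pixel average of all three is used."""
    a, b, c = (np.asarray(x, dtype=float) for x in (img_a, img_b, img_c))
    out = (a + b + c) / 3.0                       # default: pixel average
    ab = np.abs(a - b) <= tol
    out[ab] = ((a + b) / 2.0)[ab]
    ac = np.abs(a - c) <= tol
    out[ac & ~ab] = ((a + c) / 2.0)[ac & ~ab]
    bc = np.abs(b - c) <= tol
    out[bc & ~ab & ~ac] = ((b + c) / 2.0)[bc & ~ab & ~ac]
    return out
```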
In step S450, the processing unit performs a blurring process on the distant-view depth-of-field information and the middle-view depth-of-field information in the complementary full depth-of-field image to generate a synthesized segmented depth map.
In this manner, in the output synthesized segmented depth map, objects with close-view depth-of-field information are highlighted while the other parts of the picture are blurred, and a human eye or the processing unit judges whether the close-view part of the synthesized segmented depth map is correct. In one embodiment, the judgment is made by comparing the close-view part of the synthesized segmented depth map with the known actual close-view part, to determine whether the error between the synthesized segmented depth map and the actual environment is less than an error threshold. For example, if the close-view part of the segmented depth map is a basketball and the known actual close-view part is indeed a basketball, it is judged that the error between the synthesized segmented depth map and the actual environment is less than the error threshold. The user can thus see whether the error between the synthesized segmented depth map generated via the complementary full depth-of-field image and the actual environment is within an acceptable range.
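The following sketch, assuming a per-pixel depth map and a known reference mask of the actual close-view part, blurs the distant- and middle-view regions to form the synthesized segmented depth map and checks whether the close-view part deviates from the reference by less than an error threshold; the boundary depth, kernel size and threshold are placeholder assumptions.

```python
import cv2
import numpy as np

def segment_and_blur(image, depth_map, near_max_mm=150.0):
    """Form the synthesized segmented depth map: keep close-view pixels sharp
    and blur the distant- and middle-view parts of the complementary full
    depth-of-field image. Boundary depth and kernel size are assumed values."""
    near_mask = (depth_map <= near_max_mm).astype(np.uint8)
    blurred = cv2.GaussianBlur(image, (21, 21), 0)
    mask3 = cv2.merge([near_mask] * 3)            # expand mask to 3 channels
    return np.where(mask3 == 1, image, blurred)

def near_part_acceptable(near_mask, reference_mask, error_threshold=0.05):
    """Judge whether the close-view part matches the known actual close-view
    part: the fraction of mismatched pixels must be below the threshold."""
    mismatch = np.mean(near_mask.astype(bool) ^ reference_mask.astype(bool))
    return mismatch < error_threshold
```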
In step S460, the processing unit compares the known depth-of-field information with a distant-view picture, a middle-view picture and a close-view picture of the synthesized segmented depth map to generate an analysis result, and shows the analysis result on a display (not shown). In this way, it can be determined whether the synthesized segmented depth map generated through the correction and complementation steps performs correctly and clearly in the distant-view, middle-view and close-view pictures, and the analysis result is displayed.
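As a simple sketch of the comparison in step S460 (with assumed, illustrative numbers and tolerance), the measured depths of the distant-view, middle-view and close-view pictures can be compared against the known depth-of-field information to produce a pass/fail analysis result for display.

```python
def analyze_regions(measured, known, tolerance_mm=20.0):
    """Compare the measured depths of the distant-, middle- and close-view
    pictures of the synthesized segmented depth map against the known
    depth-of-field information and return a pass/fail analysis result."""
    return {view: {"measured": measured[view],
                   "known": known[view],
                   "pass": abs(measured[view] - known[view]) <= tolerance_mm}
            for view in known}

analysis_result = analyze_regions(
    measured={"distant": 992.0, "middle": 603.0, "close": 101.5},
    known={"distant": 1000.0, "middle": 600.0, "close": 100.0},
)
```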
Through the above test system and test method, the present invention can capture a single picture in a single light box containing multiple depths of field and thereby obtain a variety of different depth information; this depth information can be used to test the imaging performance of the camera module under test at distant, middle and close views, and to correct the camera module under test.
Finally, it should be noted that the above embodiments are merely illustrative of the technical solutions of the present invention and are not intended to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.