Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method and a system for correcting a planar image, which can effectively eliminate the distortion introduced by an optical system and correct images acquired under non-uniform illumination. Specifically, the invention provides the following technical scheme:
in one aspect, the present invention provides a method for correcting a planar image, the method including:
S1, acquiring basic data of the image acquisition equipment, and measuring the distance y_c from the center of the object plane to the lens; acquiring original distorted image data, and establishing an original image coordinate system with the center of the original distorted image as the origin;
S2, establishing a coordinate system X-Z of the corrected image, taking the center of the corrected image as the origin;
S3, correcting geometric distortion caused by rotation of the X-Z plane around the z-axis and the x-axis; correcting geometric distortion caused by rotation of the X-Z plane about the y-axis; and establishing a relation from a point (X, Z) on the actual plane to a physical space point (x, y, z);
S4, establishing a mapping from the physical space point (x, y, z) to the point (ξ′, η′) of the original distorted image;
S5, determining the pixel point of the original distorted image corresponding to the point (ξ′, η′), and determining the pixel value of each point in the corrected image by using an interpolation algorithm according to the coordinates and pixel values of that pixel point and its adjacent points; based on y_c and the specified pixel size Δx of the corrected image, correcting the image to the required size to obtain a size-corrected image;
S6, obtaining a 2D illumination distribution estimate of the region to be corrected through least squares fitting on the size-corrected image, and determining the original non-uniform illumination intensity p(ξ, η);
S7, calculating the equalized intensity p_ave based on p(ξ, η); and correcting the intensity of each point based on the original non-uniform illumination intensity and the equalized intensity.
Preferably, the basic data of the image acquisition device comprise the length and width of the focal plane and the horizontal field angle of the lens.
It should be noted that, in the method, the order of the geometric distortion correction and the illumination intensity correction may be exchanged: the illumination intensity of the image may be corrected first and the geometric distortion afterwards, i.e. steps S3-S5 and steps S6-S7 may be swapped, and the swapped method shall also be regarded as falling within the scope of the claims of the present invention.
Preferably, in step S2, the corrected image has the same image size as the original distorted image; more preferably, its length and width are the same as those of the focal plane.
Preferably, the relationship between a point (X, Z) on the actual plane and a pixel (I, J) in the corrected image is expressed as I = X/Δx and J = Z/Δx, where Δx is the pixel size; the relationship between a pixel (i, j) in the original distorted image and a point (ξ, η) in the original distorted image is expressed as i = ξ/δ and j = η/δ, where δ is the size of each pixel in the original distorted image.
Preferably, in S3, for the geometric distortion caused by rotation around the z-axis, the point ξ_0 at which the X-axis comes closest to the image acquisition device is found in the original distorted image plane X-Z, and the rotation angle a_z of the X-Z plane about the z-axis is determined based on ξ_0 and y_c; the three-dimensional spatial coordinates (x, y, z) of the rotated point (X, Z) are then determined based on y_c and a_z;
for the geometric distortion caused by rotation around the x-axis, the point η_0 at which the Z-axis comes closest to the image acquisition device is found in the original distorted image plane X-Z, and the rotation angle a_x of the X-Z plane about the x-axis is determined based on η_0 and y_c.
Preferably, the relationship from the point (X′, Z′) to the three-dimensional spatial coordinates (x, y, z) is established:
Preferably, in S3, the geometric distortion caused by rotation around the y-axis is corrected as follows:
where a_y is the angle of rotation about the y-axis.
Preferably, in S4, the mapping relationship between the physical space point (x, y, z) and the point (ξ′, η′) of the original distorted image is:
ξ′ = δαx/(rΔ)
η′ = δαz/(rΔ)
where r = (x² + z²)^(1/2), α = tan⁻¹(r/y), and Δ = tan⁻¹(δ/y_c).
Preferably, in S6, the least squares fitting is performed as follows:
where M is the order of the polynomial, and l and k are the indices of the coefficients in the above summation polynomial and of the corresponding powers of the two variables, k increasing sequentially from 0 to M and l increasing sequentially from 0 to k.
Preferably, in S7, the intensity of each point is corrected by:
I_new(ξ, η) = I_old(ξ, η) · p_ave / p(ξ, η)
where I_new(ξ, η) and I_old(ξ, η) are the corrected image intensity and the original image intensity at the point (ξ, η), respectively.
In another aspect, the present invention further provides a planar image correction system, including:
the image acquisition module is used for acquiring original distorted image data;
the geometric distortion correction module is used for carrying out geometric distortion correction on the original distorted image and obtaining a size correction image;
the intensity correction module is used for correcting the illumination intensity of the image after the size correction;
an image output module for outputting a final corrected image;
The geometric distortion correction module comprises at least: an x-axis rotation correction unit, a y-axis rotation correction unit, a z-axis rotation correction unit, a basic data acquisition unit and a size correction unit;
The basic data acquisition unit is used for acquiring basic data of the image acquisition module, measuring the distance y_c from the center of the object plane to the lens, acquiring original distorted image data from the image acquisition module, and establishing an original image coordinate system with the center of the original distorted image as the origin;
the x-axis rotation correction unit, the y-axis rotation correction unit and the z-axis rotation correction unit are respectively used for correcting geometric distortion caused by rotation around x, y and z axes;
The size correction unit is used for correcting the image to the required size based on the correction results of the x-axis, y-axis and z-axis rotation correction units and the specified pixel size Δx of the corrected image, obtaining a size-corrected image.
Preferably, the intensity correction module comprises at least: a two-dimensional illumination distribution estimation unit and an intensity correction unit;
the two-dimensional illumination distribution estimation unit is used for obtaining 2D illumination distribution estimation of an area to be corrected through least square fitting and determining original non-uniform illumination intensity;
The intensity correction unit is used for correcting the intensity of each point based on the original non-uniform illumination intensity and the equalized intensity.
Preferably, the relationship between a point (X, Z) on the actual plane and a pixel (I, J) in the corrected image is expressed as I = X/Δx and J = Z/Δx, where Δx is the pixel size; the relationship between a pixel (i, j) in the original distorted image and a point (ξ, η) in the original distorted image is expressed as i = ξ/δ and j = η/δ, where δ is the size of each pixel in the original distorted image.
Preferably, for the geometric distortion caused by rotation around the z-axis, the z-axis rotation correction unit finds the point ξ_0 at which the X-axis comes closest to the image acquisition device in the original distorted image plane X-Z, determines the rotation angle a_z of the X-Z plane about the z-axis based on ξ_0 and y_c, and determines the three-dimensional spatial coordinates (x, y, z) of the rotated point (X, Z) based on y_c and a_z;
Preferably, for the geometric distortion caused by rotation around the x-axis, the x-axis rotation correction unit finds the point η_0 at which the Z-axis comes closest to the image acquisition device in the original distorted image plane X-Z, and determines the rotation angle a_x of the X-Z plane about the x-axis based on η_0 and y_c.
Preferably, the relationship from the point (X′, Z′) to the three-dimensional spatial coordinates (x, y, z) is established:
Preferably, the y-axis rotation correction unit corrects the geometric distortion caused by rotation around the y-axis as follows:
where a_y is the angle of rotation about the y-axis.
Preferably, when the size correction unit performs the distorted image correction, the mapping relationship from the physical space point (x, y, z) to the point (ξ′, η′) of the original distorted image is as follows:
ξ′ = δαx/(rΔ)
η′ = δαz/(rΔ)
where r = (x² + z²)^(1/2), α = tan⁻¹(r/y), and Δ = tan⁻¹(δ/y_c).
Preferably, in the two-dimensional illumination distribution estimation unit, the least square fitting is performed in the following manner:
where M is the order of the polynomial, and l and k are the indices of the coefficients and of the corresponding powers of the two variables in the above summation polynomial.
Preferably, in the intensity correction unit, the intensity of each point is corrected by:
I_new(ξ, η) = I_old(ξ, η) · p_ave / p(ξ, η)
where I_new(ξ, η) and I_old(ξ, η) are the corrected image intensity and the original image intensity at the point (ξ, η), respectively.
Compared with the prior art, the technical scheme of the invention has the following advantages:
the geometric distortion of the image caused by the rotation of the object in the three dimensional directions is effectively corrected, the problem that the sizes of corresponding actual sizes of pixels are not uniform due to different photographing distances is effectively adjusted, and meanwhile, the image distortion caused by an infrared optical system and the problem of uneven brightness distribution of the image caused by the nonuniformity of illumination intensity are effectively corrected, so that the problems of distortion or uniformity and the like cannot occur after the image is spliced or synthesized.
DETAILED DESCRIPTION OF EMBODIMENT (S) OF INVENTION
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
Example 1
In the present embodiment, in order to further embody the idea of the present invention, a specific implementation of the method is described by taking a camera, particularly an infrared camera as an example, and it should be noted that although an infrared camera is taken as an example below, the application of the technical solution of the present invention is not limited to the subdivision field of the infrared camera, and should not be interpreted as limiting the scope of the present invention.
The imaging of a camera or similar device follows the rules of projection imaging. In an ideal camera system, with reference to fig. 1 and in the absence of errors (such as lens aberrations) introduced by various components, consider a three-dimensional (3D) Cartesian coordinate system x-y-z (the camera coordinate system) as the physical space for imaging: the camera is located at the origin o(0, 0, 0) and points in the positive y-axis direction, also referred to as the camera axis. Thus, the imaging plane (ξ, η) within the camera is perpendicular to the y-axis, its distance from the origin is y_c, and the intersection of the imaging plane with the y-axis is the center point of the imaging plane. Owing to the form of the formulas used in projection imaging, y_c can take any value without affecting the result; hereafter we assume that the center points of the imaging plane and the actual plane coincide, i.e. y_c is taken as the distance between the center point of the actual plane and the camera. As shown in fig. 1, a point (x, y, z) in physical space is projected to a point (ξ, η) in the image plane according to the projection geometry:
ξ = x·y_c/y (1)
η = z·y_c/y (2)
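As a minimal sketch (not part of the patent text), equations (1) and (2) can be written as a small Python function; the numeric values below are assumed purely for illustration:

```python
# Sketch of the projection equations (1) and (2): a physical-space point
# (x, y, z) maps to image-plane coordinates (xi, eta). The value of y_c
# below is an assumed example, not a value from the specification.

def project(x, y, z, y_c):
    """Project a 3D point onto the imaging plane at distance y_c."""
    xi = x * y_c / y    # equation (1)
    eta = z * y_c / y   # equation (2)
    return xi, eta

# A point lying exactly on the object plane (y == y_c) projects to itself:
print(project(3.0, 100.0, 4.0, 100.0))  # -> (3.0, 4.0)
```

Note that doubling the distance y halves the image coordinates, which is the familiar perspective scaling.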
Generalizing to all points (x, y, z) on one plane in physical space: when all points of the plane are projected onto the image plane according to equations (1) and (2), we obtain an image of the plane. In real space an object typically has six degrees of freedom of motion: three translations along the x, y and z directions and three rotations about the x, y and z axes. For the plane X-Z, if we place it in the coordinate system shown in fig. 1, centered on the y-axis, with its X-axis and Z-axis parallel to the x-axis and z-axis, respectively (i.e. the X-Z plane is perpendicular to the y-axis), four degrees of freedom remain for its position in three-dimensional space: its distance from the camera and the three rotational degrees of freedom. It should be noted that the coordinate system X-Z is attached to the plane, so it moves with the surface (although its origin (X = 0, Z = 0) is fixed), while the coordinate systems x-y-z and ξ-η are fixed in space. In the following we assume that the plane X-Z and the image plane ξ-η initially coincide (i.e. X = ξ and Z = η). Thus, the value of y_c determines the distance between the plane X-Z and the camera, and before any rotation the 3D spatial coordinates of the point (X, Z) are:
x = X (3)
y = y_c (4)
z = Z (5)
when the X-Z plane is changed in four degrees of freedom, and its image is imaged in the camera, we observe that:
(1) rotation about the z-axis causes linear geometric distortion of the image;
(2) similarly, rotation about the x-axis also results in linear geometric distortion of the image;
(3) rotation about the y-axis produces an image that is rotated but without geometric distortion;
(4) a change in distance along the y-axis will only change the image size.
In addition, infrared optical system design usually requires a large field of view, which inevitably causes geometric distortion of the image, including both radial and tangential distortion, with the radial distortion usually much larger than the tangential. The technical solution provided in this embodiment is therefore particularly suitable for eliminating barrel distortion of the optical system; of course, the method can also be applied to other types of image distortion, including error distortion introduced by conventional devices. Furthermore, since an active excitation light source is generally used in infrared nondestructive inspection, for example a single light source pointed at the center of the object surface, each generated image may be superimposed with the illumination intensity variation, causing unevenness in the brightness distribution of the image.
In a specific embodiment, the method proposed by the invention concerns planar imaging involving variations in 3 rotational degrees of freedom and 1 degree of freedom in the distance perpendicular to the imaging plane; all 4 degrees of freedom may cause imaging errors. Since the invention only considers planar imaging, once the three angles and the distance are determined, the dimensions in the x and z directions are uniquely determined. The technical problems addressed by the provided technical scheme comprise the following four aspects: first, correcting the geometric distortion of the image caused by rotation of the object in the three-dimensional directions; second, correcting the non-uniform actual pixel sizes caused by different photographing distances; third, correcting the image distortion due to the infrared optical system; fourth, correcting the uneven image brightness distribution caused by non-uniform illumination intensity.
Before discussing the image geometric distortion correction method, we first define the conditions necessary for seamless stitching of multiple images. First, the corrected image should maintain the orthogonality of straight lines on the plane. Second, line lengths in the corrected image should be linearly proportional to line lengths on the object surface. Fig. 2 schematically shows the format of a square plate image before and after correction. The left image is the actual imaging plane, i.e. the image before correction; it includes geometric distortion due to the changes in the four degrees of freedom as well as distortion due to the optical system, and its coordinate system is the ξ-η coordinate system with the center of the figure as the origin. The right image is a 2D linearly orthogonal image of the actual square plate, i.e. the corrected image, represented by the coordinate system X-Z. This format ultimately allows exact geometric matching and stitching of overlaid images, as schematically illustrated in fig. 3. The only thing missing in fig. 2 is the grey (or colour) distribution, which can be obtained from the original image (left image in fig. 2) once the relationship between the points (X, Z) and (ξ, η) is determined. The following discussion is therefore directed at establishing such a mapping between points of the pre-correction image (input image) and the post-correction image (output image). To address this problem, in one particular embodiment, the method employed by the invention is as follows:
a method for correcting geometric distortion of an image, the method comprising the steps of:
1) acquiring the focal plane size of the thermal imager, of length Imax and width Jmax, and the horizontal field angle α of the lens; the vertical field angle α·Jmax/Imax can then be obtained;
2) measuring the distance y_c from the center of the object plane to the lens of the thermal imager;
3) acquiring original distorted image data, and establishing an original image coordinate system ξ-η with the original image center as the origin, the abscissa as the ξ-axis and the ordinate as the η-axis;
4) establishing a coordinate system X-Z of the corrected image, with the center of the image as the origin. For simplicity, the corrected image takes the same image size as the original image, i.e. length Imax and width Jmax;
5) we denote a pixel in the final corrected image as (I, J); then for a point (X, Z), given a pixel size Δx, I = X/Δx and J = Z/Δx;
6) a pixel in the original image is denoted (i, j), corresponding to a point (ξ, η) in the original image, with i = ξ/δ and j = η/δ, where δ is the size of each pixel in the original image, determined by the following formula:
δ = 2y_c·tan(α/2)/Imax (6)
7) correcting the geometric distortion caused by rotation of the X-Z plane about the z-axis. Fig. 4 shows the plane X-Z after rotation about the z-axis by an angle a_z, and the geometric relationship in the x-y plane (at arbitrary z). From this figure we can easily obtain the 3D spatial coordinates of the rotated point (X, Z):
x = X·cos a_z (7)
y = y_c + X·sin a_z (8)
z = Z (9)
equation (9) indicates that rotation about the Z-axis has no effect on the Z-value.
In fig. 4, we also note a special point in the image plane,
ξ_0 = y_c·tan a_z (10)
which corresponds to the point of the plane X-Z closest to the camera. This point ξ_0 is usually easy to find from the image, see fig. 5.
For a small rotation angle, any surface line parallel to the x-axis becomes a curve; for the line connecting the opposite peak points A and B of the two horizontal curves in the figure, the intersection with the x-axis is ξ_0. The rotation angle a_z of the plane about the z-axis can then be calculated using equation (10);
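The angle recovery via equation (10) and the coordinate mapping of equations (7)-(9) can be sketched as follows; ξ_0 and y_c are assumed example values:

```python
import math

# Sketch of step 7): recover the rotation angle a_z about the z-axis from
# the closest point xi_0 via equation (10), then map a plane point (X, Z)
# into 3D space via equations (7)-(9). xi_0 and y_c are assumed values.

def angle_from_xi0(xi_0, y_c):
    """Invert equation (10): xi_0 = y_c * tan(a_z)."""
    return math.atan(xi_0 / y_c)

def rotate_about_z(X, Z, a_z, y_c):
    """Equations (7)-(9): 3D coordinates of the rotated point (X, Z)."""
    x = X * math.cos(a_z)        # equation (7)
    y = y_c + X * math.sin(a_z)  # equation (8)
    z = Z                        # equation (9): Z is unaffected
    return x, y, z

a_z = angle_from_xi0(xi_0=17.6, y_c=100.0)
print(round(math.degrees(a_z), 1))  # -> 10.0 (degrees)
```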
8) Correcting for geometric distortion caused by rotation of the X-Z plane about the X-axis, see fig. 6;
Similar to equation (5), rotation about the x-axis does not change the value of X, so equations (7) and (10) remain valid. The point η_0 closest to the thermal imager on the z-axis is found as follows: for a small rotation angle, any surface line parallel to the z-axis becomes a curve; for the line connecting the opposite peak points C and D of the two vertical curves in the figure, the intersection with the z-axis is η_0;
9) using the formula η_0 = y_c·tan a_x, the rotation angle a_x of the plane about the x-axis can be calculated;
10) the geometric distortion caused by rotation of the X-Z plane around the x-axis and the z-axis can then be corrected according to the following formula;
thereby obtaining the mapping relationship from the point (X′, Z′) to the physical space point (x, y, z).
11) Correcting for image rotation caused by rotation of the X-Z plane about the y-axis, see fig. 7;
12) geometric distortion caused by rotation about the y-axis can be corrected according to the following equation:
In a preferred embodiment, this may be performed iteratively: in the first pass, a_y is set to 0 and a corrected image is obtained; a_y is then estimated from the deflection angle of that image, and the correction procedure is executed again.
A relationship is now established from a plane point (X, Z) to the point (X′, Z′) and then to the physical space point (x, y, z).
From the above process, the point (X′, Z′) is an intermediate result: formulas (11) and (12) establish the mapping from the plane point (X, Z) to the point (X′, Z′) and then to the physical space point (x, y, z). If there is no rotation about the y-axis, the point (X′, Z′) coincides with the point (X, Z), as shown in formula (12).
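The in-plane rotation itself is standard; since formula (11) appears only as a figure in the original, the following sketch assumes an ordinary rotation of (X, Z) by a_y, consistent with the observation above that rotation about the y-axis rotates the image without distorting it:

```python
import math

# Hedged sketch: formula (11) is not reproduced in this text, so a plain
# in-plane rotation by a_y is assumed here, matching the statement that
# rotation about the y-axis rotates the image without geometric distortion.

def rotate_in_plane(X, Z, a_y):
    """Rotate the plane point (X, Z) by a_y to obtain (X', Z')."""
    Xp = X * math.cos(a_y) - Z * math.sin(a_y)
    Zp = X * math.sin(a_y) + Z * math.cos(a_y)
    return Xp, Zp

# With a_y = 0, as in the suggested first pass, the point is unchanged:
print(rotate_in_plane(2.0, 5.0, 0.0))  # -> (2.0, 5.0)
```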
13) Correcting geometric distortion of the image, as shown in fig. 8;
14) since the camera detector is a point detector relative to the actual plane X-Z, if the imaging surface is taken as a spherical surface centered on the origin, the imaged picture will be distorted, as shown in the schematic diagram of fig. 9. The point on the imaging plane corresponding to the point (x, y, z) on the actual plane is then a new point (ξ′, η′). The new projection formula is:
ξ′ = δαx/(rΔ) (13)
η′ = δαz/(rΔ) (14)
where
r = (x² + z²)^(1/2) (15)
α = tan⁻¹(r/y) (16)
Δ = tan⁻¹(δ/y_c) (17)
From the above formulas we finally obtain the mapping of a point (X, Z) on the actual plane to a point (ξ′, η′) on the imaging plane, i.e. to a point on the original distorted image.
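A sketch of the mapping of equations (13)-(17) in Python; the numeric values are assumed for illustration, and the on-axis case r = 0 (which maps to the image origin) would need separate handling:

```python
import math

# Sketch of equations (13)-(17): map a physical-space point (x, y, z) to
# the distorted-image point (xi', eta'). delta and y_c are assumed example
# values; the on-axis case r == 0 is not handled here.

def to_distorted(x, y, z, delta, y_c):
    r = math.hypot(x, z)              # equation (15): (x^2 + z^2)^(1/2)
    alpha = math.atan(r / y)          # equation (16)
    Delta = math.atan(delta / y_c)    # equation (17)
    xi_p = delta * alpha * x / (r * Delta)    # equation (13)
    eta_p = delta * alpha * z / (r * Delta)   # equation (14)
    return xi_p, eta_p

# In the small-angle limit this reduces to the undistorted projection of
# equations (1) and (2), here approximately (3.0, 4.0):
print(to_distorted(3.0, 1000.0, 4.0, 0.1, 1000.0))
```

The small-angle check is a useful sanity test of the parenthesization δαx/(rΔ): with tan⁻¹(t) ≈ t, the expression collapses to x·y_c/y.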
Fig. 9 is a schematic diagram of the projection distortion correction, in which an intermediate curved surface is introduced to determine the new projected point ξ′ in the x-y plane, where α_x = α·x/r.
15) for each pixel (I, J) in the corrected image, input the specified pixel size Δx of the corrected image and find the corresponding point (X, Z) according to step 5);
16) the point is mapped to the physical space point (x, y, z) according to formula (12) in step 12) and formula (11) in step 10), and then mapped to the point (ξ′, η′) of the original distorted image according to formulas (13)-(17) in step 14), finally establishing the relationship between (X, Z) and (ξ′, η′);
17) according to step 6), the pixel point of the original distorted image corresponding to the point (ξ′, η′) can be found, and the pixel value of the point (I, J) in the corrected image is determined by an interpolation algorithm from the coordinates and pixel values of that pixel point and its adjacent points.
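Step 17) leaves the interpolation algorithm unspecified; bilinear interpolation is one common choice, sketched here on an assumed toy image:

```python
# Bilinear-interpolation sketch for step 17): sample the original image at
# a non-integer position (u, v). The patent does not fix the interpolation
# algorithm; bilinear is assumed here as one common choice.

def bilinear(img, u, v):
    """img: 2D list of rows; (u, v): fractional (row, column) position."""
    i0, j0 = int(u), int(v)
    du, dv = u - i0, v - j0
    return ((1 - du) * (1 - dv) * img[i0][j0]
            + (1 - du) * dv * img[i0][j0 + 1]
            + du * (1 - dv) * img[i0 + 1][j0]
            + du * dv * img[i0 + 1][j0 + 1])

img = [[0.0, 10.0],
       [20.0, 30.0]]
print(bilinear(img, 0.5, 0.5))  # -> 15.0 (average of the four neighbours)
```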
18) correcting the image size: based on the measured value y_c and by specifying the pixel size Δx of the corrected image, the image size is automatically corrected to the desired size by the above steps.
19) Correcting image brightness non-uniformity;
In many cases artificial illumination sources are used in photography or videography. For example, if a single light source is pointed at the center of the surface, the central part of the image shows a higher intensity (or brightness), usually accompanied by a higher contrast (grey-scale map). If we draw a straight line through the center of the grayscale image, the pixel intensity distribution along this line looks like the solid line in fig. 10. Although the variation in pixel intensity is related to the scene in the image, the smooth dashed line in fig. 10 is directly related to the illumination non-uniformity. Since such unevenness is an artifact superimposed on the image, it should be removed so that only the image content remains.
20) selecting one or more rectangular areas of the image for intensity equalization, taking care to avoid areas that should not be corrected, such as defect areas, frames and scenes of interest, and performing the correction according to the following algorithm;
21) first, an estimate of the 2D illumination distribution over the surface of the area to be corrected is needed. As shown in fig. 10, the smooth dashed line (the centerline of the solid curve) is a good approximation of the illumination intensity distribution in one dimension. For a two-dimensional image, the median surface of the image intensity distribution can be obtained by a least squares fit of a two-dimensional polynomial function. This two-dimensional polynomial function is represented as:
where M is the order of the polynomial, and l and k are the indices of the coefficients in the above summation polynomial and of the corresponding powers of the two variables, k increasing sequentially from 0 to M and l increasing sequentially from 0 to k. By employing a classical least squares fitting method, the coefficients a_kl corresponding to a fixed order M (e.g. M = 4) can easily be obtained.
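A hedged sketch of such a fit follows. The exact polynomial of the patent is shown only as a figure, so the triangular form implied by the index description (k = 0..M, l = 0..k) is assumed here, with numpy's least-squares solver standing in for the unspecified "classical" method:

```python
import numpy as np

# Hedged sketch of the least-squares fit of step 21). The polynomial form
# is assumed from the index description in the text (k = 0..M, l = 0..k):
# p(xi, eta) = sum over k, l of a_kl * xi**(k - l) * eta**l.

def basis(xi, eta, M):
    """Triangular polynomial basis terms, in a fixed (k, l) order."""
    return [xi ** (k - l) * eta ** l
            for k in range(M + 1) for l in range(k + 1)]

def fit_illumination(xi, eta, intensity, M=2):
    """Ordinary least-squares fit of the coefficients a_kl."""
    A = np.column_stack(basis(xi, eta, M))
    coeffs, *_ = np.linalg.lstsq(A, intensity, rcond=None)
    return coeffs

def eval_poly(coeffs, xi, eta, M=2):
    return sum(c * t for c, t in zip(coeffs, basis(xi, eta, M)))

# Recover a known quadratic illumination profile from scattered samples:
rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, 200)
eta = rng.uniform(-1.0, 1.0, 200)
p_true = 5.0 - 0.5 * xi**2 - 0.3 * eta**2
coeffs = fit_illumination(xi, eta, p_true, M=2)
print(round(float(eval_poly(coeffs, 0.0, 0.0)), 6))  # -> 5.0
```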
22) once p(ξ, η) is determined, the average value p_ave of p over the entire image is calculated. With p_ave and p(ξ, η) representing the equalized intensity and the original non-uniform illumination intensity, respectively, the intensity of each point in the image can be corrected using the following formula:
I_new(ξ, η) = I_old(ξ, η) · p_ave / p(ξ, η) (19)
where I_new(ξ, η) and I_old(ξ, η) are the corrected image intensity and the original image intensity (or gray scale) at the point (ξ, η), respectively.
23) applying equation (19) to the pixel intensity distribution curve of fig. 10 yields a new intensity curve with more uniform average intensity and contrast, as shown in fig. 11. Equation (19) thus eliminates the effect of illumination non-uniformity in the image. Fig. 11 shows the corrected version of the intensity profile of fig. 10 along a line through the center of the image.
24) for the intensity value I_old(ξ, η) of each valid pixel in the original image, the above steps yield the corresponding new corrected image intensity (or gray scale) I_new(ξ, η).
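The intensity equalization of equation (19) can be sketched in a few lines of numpy; the illumination estimate and scene values below are assumed toy values:

```python
import numpy as np

# Sketch of equation (19): flatten illumination non-uniformity by scaling
# each pixel intensity by p_ave / p(xi, eta), where p is the fitted
# illumination estimate and p_ave its mean over the image.

def correct_intensity(I_old, p):
    p_ave = p.mean()             # equalized intensity p_ave
    return I_old * p_ave / p     # equation (19)

# A uniform scene observed under non-uniform illumination becomes flat:
p = np.array([[0.5, 1.0],
              [1.0, 1.5]])       # assumed illumination estimate, mean 1.0
I_old = 40.0 * p                 # uniform scene modulated by illumination
print(correct_intensity(I_old, p))  # -> every pixel 40.0
```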
Example 2
In yet another embodiment, the present invention further provides a planar image correction system capable of executing the method described in embodiment 1. The described structure is only a preferred configuration; a system structure conventionally modified by a person skilled in the art according to the technical solution disclosed herein shall be considered to fall within the protection scope of the present invention as long as the main functions it performs are the same as those of the disclosed system. Preferably, the system comprises:
the image acquisition module is used for acquiring original distorted image data;
the geometric distortion correction module is used for carrying out geometric distortion correction on the original distorted image and obtaining a size correction image;
the intensity correction module is used for correcting the illumination intensity of the image after the size correction;
an image output module for outputting a final corrected image;
The geometric distortion correction module comprises at least: an x-axis rotation correction unit, a y-axis rotation correction unit, a z-axis rotation correction unit, a basic data acquisition unit and a size correction unit;
The basic data acquisition unit is used for acquiring basic data of the image acquisition module, measuring the distance y_c from the center of the object plane to the lens, acquiring original distorted image data from the image acquisition module, and establishing an original image coordinate system with the center of the original distorted image as the origin;
the x-axis rotation correction unit, the y-axis rotation correction unit and the z-axis rotation correction unit are respectively used for correcting geometric distortion caused by rotation around x, y and z axes;
The size correction unit is used for correcting the image to the required size based on the correction results of the x-axis, y-axis and z-axis rotation correction units and the specified pixel size Δx of the corrected image, obtaining a size-corrected image.
Preferably, the intensity correction module comprises at least: a two-dimensional illumination distribution estimation unit and an intensity correction unit;
the two-dimensional illumination distribution estimation unit is used for obtaining 2D illumination distribution estimation of an area to be corrected through least square fitting and determining original non-uniform illumination intensity;
The intensity correction unit is used for correcting the intensity of each point based on the original non-uniform illumination intensity and the equalized intensity.
Preferably, the relationship between a point (X, Z) on the actual plane and a pixel (I, J) in the corrected image is expressed as I = X/Δx and J = Z/Δx, where Δx is the pixel size; the relationship between a pixel (i, j) in the original distorted image and a point (ξ, η) in the original distorted image is expressed as i = ξ/δ and j = η/δ, where δ is the size of each pixel in the original distorted image.
Preferably, for the geometric distortion caused by rotation around the z-axis, the z-axis rotation correction unit finds the point ξ_0 at which the X-axis comes closest to the image acquisition device in the original distorted image plane X-Z, determines the rotation angle a_z of the X-Z plane about the z-axis based on ξ_0 and y_c, and determines the three-dimensional spatial coordinates (x, y, z) of the rotated point (X, Z) based on y_c and a_z;
Preferably, for the geometric distortion caused by rotation around the x-axis, the x-axis rotation correction unit finds the point η_0 at which the Z-axis comes closest to the image acquisition device in the original distorted image plane X-Z, and determines the rotation angle a_x of the X-Z plane about the x-axis based on η_0 and y_c.
Preferably, the relationship from the point (X′, Z′) to the three-dimensional spatial coordinates (x, y, z) is established:
Preferably, the y-axis rotation correction unit corrects the geometric distortion caused by rotation around the y-axis as follows:
where a_y is the angle of rotation about the y-axis.
Preferably, when the size correction unit performs the distorted image correction, the mapping relationship from the physical space point (x, y, z) to the point (ξ′, η′) of the original distorted image is as follows:
ξ′ = δαx/(rΔ)
η′ = δαz/(rΔ)
where r = (x² + z²)^(1/2), α = tan⁻¹(r/y), and Δ = tan⁻¹(δ/y_c).
Preferably, in the two-dimensional illumination distribution estimation unit, the least square fitting is performed in the following manner:
where M is the order of the polynomial, and l and k are the indices of the coefficients in the above summation polynomial and of the corresponding powers of the two variables, k increasing sequentially from 0 to M and l increasing sequentially from 0 to k.
Preferably, in the intensity correction unit, the intensity of each point is corrected by:
I_new(ξ, η) = I_old(ξ, η) · p_ave / p(ξ, η)
where I_new(ξ, η) and I_old(ξ, η) are the corrected image intensity and the original image intensity at the point (ξ, η), respectively.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.