
CN110738608B - Plane image correction method and system - Google Patents


Info

Publication number
CN110738608B
CN110738608B (application CN201910848287.6A)
Authority
CN
China
Prior art keywords
image
axis
point
plane
correction
Prior art date
Legal status
Active
Application number
CN201910848287.6A
Other languages
Chinese (zh)
Other versions
CN110738608A
Inventor
孙建刚
陶宁
Current Assignee
Capital Normal University
Original Assignee
Capital Normal University
Priority date
Filing date
Publication date
Application filed by Capital Normal University
Publication of CN110738608A
Application granted
Publication of CN110738608B
Status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a plane image correction method and system. The system comprises at least an image acquisition module for acquiring original distorted image data; a geometric distortion correction module for performing geometric distortion correction on the original distorted image to obtain a size-corrected image; an intensity correction module for performing illumination intensity correction on the size-corrected image; and an image output module for outputting the final corrected image. The invention effectively corrects the geometric distortion of the image caused by rotation of the object about the three coordinate axes and the problem of pixels corresponding to non-uniform physical sizes, and at the same time resolves the image distortion introduced by the infrared optical system and the non-uniformity of the illumination intensity. The method is convenient and consumes few resources.

Description

Plane image correction method and system
Technical Field
The invention relates to the field of computer image processing, belongs to the field of plane image correction subdivision, and particularly relates to a method and a system for correcting a plane image based on projection geometric distortion and illumination nonuniformity.
Background
In the field of infrared thermal-imaging nondestructive testing, the object to be photographed is often too large, or the field of view too limited, for the object to fit in a single image, or a higher-resolution image with more detail is required. The object must then be divided into several regions that are imaged separately, and the individual images are finally stitched into one complete image. The stitched result will be inaccurate, however, if the individual images are not properly corrected first. Images taken by a thermal imager follow the rules of projection imaging, which guarantees neither orthogonality nor geometric linearity of the imaged plane, so the projected images may be geometrically distorted. In addition, to collect more radiant energy, infrared optical systems are generally designed with a large relative aperture, which introduces radial distortion into the infrared image. Furthermore, because the illumination on the imaged surface typically has a non-uniform intensity distribution, every captured image is superimposed with this illumination variation, making the brightness distribution of the image uneven. Finally, because the distance from the thermal imager to the inspected object differs between sub-regions, the physical size corresponding to a pixel differs from image to image; this is usually acceptable for a single image, but when several overlapping images are composed into a mosaic, these problems prevent the pieces from matching accurately and leave the intensity non-uniform. In many scientific and engineering applications, however, an accurate mosaic is needed that looks as if it had been imaged in a single exposure under uniform illumination.
In addition, in some special use scenarios such as infrared optical systems, the design usually requires a large system field of view, which inevitably causes geometric distortion of the image, including both radial and tangential distortion, with the radial distortion usually much larger than the tangential. Moreover, an active excitation light source is generally used in infrared nondestructive testing; for example, if a single light source is pointed at the center of the object surface, each generated image is superimposed with this illumination intensity variation, making the brightness distribution of the image uneven.
Given these problems, there is at present no effective method or system that simultaneously corrects image distortion for stitching and non-uniform illumination.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method and a system for correcting a plane image, which can effectively eliminate optical-system distortion and handle images captured under non-uniform illumination. Specifically, the invention provides the following technical scheme:
in one aspect, the present invention provides a method for correcting a planar image, the method including:
s1, acquiring basic data of the image acquisition equipment, measuring the distance y_c from the center of the object plane to the lens, acquiring original distorted image data, and establishing an original image coordinate system with the center of the original distorted image as the origin;
s2, establishing a coordinate system X-Z of the corrected image, and taking the center of the corrected image as an origin;
s3, correcting the geometric distortion caused by rotation of the X-Z plane about the z-axis and the x-axis; correcting the geometric distortion caused by rotation of the X-Z plane about the y-axis; establishing the relation between a point (X, Z) on the actual plane and the physical-space point (x, y, z);
s4, establishing a mapping from the physical-space point (x, y, z) to the point (ξ′, η′) of the original distorted image;
s5, determining the pixel point of the original distorted image corresponding to the point (ξ′, η′), and determining the pixel value of each point in the corrected image by an interpolation algorithm from the coordinates and pixel values of that pixel point and its neighbors; based on y_c and the specified pixel size Δx of the corrected image, correcting the image to the required size to obtain the size-corrected image;
s6, obtaining a 2D illumination distribution estimate of the region to be corrected by least-squares fitting on the size-corrected image, and determining the original non-uniform illumination intensity p(ξ, η);
s7, calculating the equilibrium intensity p_ave based on p(ξ, η); and correcting the intensity of each point based on the original non-uniform illumination intensity and the equilibrium intensity.
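The two-stage structure of steps S1-S7 (geometry first, then illumination) can be sketched as follows. This is a minimal illustrative skeleton, not the patent's implementation: `geometric_correction` is an identity stand-in for the pixel remapping of S3-S5, the polynomial illumination fit of S6 is replaced by a flat (mean) estimate, and all names are hypothetical.

```python
import numpy as np

def geometric_correction(raw, y_c, delta_x):
    # Stand-in for steps S3-S5: the real method remaps pixels through the
    # rotation and projection formulas; here an identity mapping is used.
    return np.asarray(raw, dtype=float)

def intensity_correction(img):
    # Steps S6-S7: divide by an illumination estimate p and rescale by its
    # mean p_ave; a flat fit (the image mean) stands in for the 2D polynomial.
    p = np.full_like(img, img.mean())
    return img * p.mean() / p

def correct_plane_image(raw, y_c, delta_x):
    sized = geometric_correction(raw, y_c, delta_x)  # S1-S5: geometry and size
    return intensity_correction(sized)               # S6-S7: illumination
```

Either stage could be swapped out independently, which matches the note below that S3-S5 and S6-S7 may be exchanged in order.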
Preferably, the basic data of the image acquisition device comprise the focal-plane dimensions (length and width) and the horizontal field angle of the lens.
It should be noted that in the method the steps of correcting geometric distortion and correcting illumination intensity may be performed in either order, i.e. the illumination intensity may be corrected first and the geometric distortion afterwards; steps S3-S5 and steps S6-S7 may be exchanged in order, and the exchanged method shall be regarded as falling within the scope of the claims of the present invention.
Preferably, in step S2, the corrected image and the original distorted image are of the same image size, and more preferably, the length and width are the same as the focal plane.
Preferably, the relationship between a point (X, Z) on the actual plane and a pixel (I, J) in the corrected image is expressed as I = X/Δx, J = Z/Δx, where Δx is the pixel size; the relationship between a pixel (i, j) in the original distorted image and a point (ξ, η) in the original distorted image is expressed as i = ξ/δ, j = η/δ, where δ is the size of each pixel in the original distorted image.
Preferably, in S3, for the geometric distortion caused by rotation about the z-axis, the point ξ0 on the X-axis closest to the image acquisition device is found in the original distorted image plane X-Z, the rotation angle a_z of the X-Z plane about the z-axis is determined from ξ0 and y_c, and the three-dimensional spatial coordinates (x, y, z) of the rotated point (X, Z) are determined from y_c and a_z;
for the geometric distortion caused by rotation about the x-axis, the point η0 on the Z-axis closest to the image acquisition device is found in the original distorted image plane X-Z, and the rotation angle a_x of the X-Z plane about the x-axis is determined from η0 and y_c.
Preferably, the relationship between the point (X′, Z′) and the three-dimensional spatial coordinates (x, y, z) is established as:
x = X′ cos a_z
y = y_c + X′ sin a_z + Z′ sin a_x
z = Z′ cos a_x
preferably, in S3, the geometric distortion caused by rotation about the y-axis is corrected according to:
X′ = X cos a_y - Z sin a_y
Z′ = X sin a_y + Z cos a_y
where a_y is the angle of rotation about the y-axis.
Preferably, in S4, the mapping between the physical-space point (x, y, z) and the point (ξ′, η′) of the original distorted image is:
ξ′ = δαx/(rΔ)
η′ = δαz/(rΔ)
where r = (x^2 + z^2)^(1/2), α = tan^(-1)(r/y), and Δ = tan^(-1)(δ/y_c).
Preferably, in S6, the least-squares fit is performed as:
p(ξ, η) = Σ_{k=0}^{M} Σ_{l=0}^{k} a_kl ξ^(k-l) η^l
where M is the order of the polynomial, and l and k index the coefficients and the corresponding powers of the two variables in the summation, k increasing from 0 to M and l increasing from 0 to k.
Preferably, in S7, the intensity of each point is corrected by:
I_new(ξ, η) = I_old(ξ, η) p_ave / p(ξ, η)
where I_new(ξ, η) and I_old(ξ, η) are the corrected and original image intensities at the point (ξ, η), respectively.
In another aspect, the present invention further provides a planar image correction system, including:
the image acquisition module is used for acquiring original distorted image data;
the geometric distortion correction module is used for carrying out geometric distortion correction on the original distorted image and obtaining a size correction image;
the intensity correction module is used for correcting the illumination intensity of the image after the size correction;
an image output module for outputting a final corrected image;
the geometric distortion correction module comprises at least: an x-axis rotation correction unit, a y-axis rotation correction unit, a z-axis rotation correction unit, a basic data acquisition unit and a size correction unit;
the basic data acquisition unit is used for acquiring basic data of the image acquisition module, measuring the distance y_c from the center of the object plane to the lens, acquiring original distorted image data from the image acquisition module, and establishing an original image coordinate system with the center of the original distorted image as the origin;
the x-axis rotation correction unit, the y-axis rotation correction unit and the z-axis rotation correction unit are respectively used for correcting geometric distortion caused by rotation around x, y and z axes;
and the size correction unit is used for correcting the image to the required size based on the correction results of the x-axis, y-axis and z-axis rotation correction units and the specified pixel size Δx of the corrected image, to obtain the size-corrected image.
Preferably, the intensity correction module comprises at least: a two-dimensional illumination distribution estimation unit and an intensity correction unit;
the two-dimensional illumination distribution estimation unit is used for obtaining 2D illumination distribution estimation of an area to be corrected through least square fitting and determining original non-uniform illumination intensity;
and the intensity correction unit is used for correcting the intensity of each point based on the original non-uniform illumination intensity and the balanced intensity.
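The module layout above (acquisition, geometric correction, intensity correction, output) might be wired together as in the following sketch. The class and method names are illustrative assumptions, not the patent's interfaces; the individual units are passed in as callables so each module stays replaceable.

```python
class PlaneImageCorrectionSystem:
    """Hypothetical skeleton mirroring the module layout of the system."""

    def __init__(self, acquire, geometric_correct, intensity_correct, output):
        self.acquire = acquire                      # image acquisition module
        self.geometric_correct = geometric_correct  # x/y/z rotation + size units
        self.intensity_correct = intensity_correct  # illumination fit + equalize
        self.output = output                        # image output module

    def run(self):
        raw = self.acquire()
        sized = self.geometric_correct(raw)    # geometric distortion -> size-corrected image
        final = self.intensity_correct(sized)  # illumination intensity correction
        return self.output(final)
```

A usage example with trivial stand-in callables: `PlaneImageCorrectionSystem(lambda: img, g, i, lambda x: x).run()`.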
Preferably, the relationship between a point (X, Z) on the actual plane and a pixel (I, J) in the corrected image is expressed as I = X/Δx, J = Z/Δx, where Δx is the pixel size; the relationship between a pixel (i, j) in the original distorted image and a point (ξ, η) in the original distorted image is expressed as i = ξ/δ, j = η/δ, where δ is the size of each pixel in the original distorted image.
Preferably, for the geometric distortion caused by rotation about the z-axis, the z-axis rotation correction unit finds the point ξ0 on the X-axis closest to the image acquisition device in the original distorted image plane X-Z, determines the rotation angle a_z of the X-Z plane about the z-axis from ξ0 and y_c, and determines the three-dimensional spatial coordinates (x, y, z) of the rotated point (X, Z) from y_c and a_z;
preferably, for the geometric distortion caused by rotation about the x-axis, the x-axis rotation correction unit finds the point η0 on the Z-axis closest to the image acquisition device in the original distorted image plane X-Z, and determines the rotation angle a_x of the X-Z plane about the x-axis from η0 and y_c.
Preferably, the relationship between the point (X′, Z′) and the three-dimensional spatial coordinates (x, y, z) is established as:
x = X′ cos a_z
y = y_c + X′ sin a_z + Z′ sin a_x
z = Z′ cos a_x
preferably, the y-axis rotation correction unit corrects the geometric distortion rotated around the y-axis according to the following manner:
Figure BDA0002196028910000062
wherein, ayIs the angle of rotation about the y-axis.
Preferably, when the size correction unit corrects the distorted image, the mapping from the physical-space point (x, y, z) to the point (ξ′, η′) of the original distorted image is:
ξ′ = δαx/(rΔ)
η′ = δαz/(rΔ)
where r = (x^2 + z^2)^(1/2), α = tan^(-1)(r/y), and Δ = tan^(-1)(δ/y_c).
Preferably, in the two-dimensional illumination distribution estimation unit, the least-squares fit is performed as:
p(ξ, η) = Σ_{k=0}^{M} Σ_{l=0}^{k} a_kl ξ^(k-l) η^l
where M is the order of the polynomial, and l and k index the coefficients and the corresponding powers of the two variables in the summation.
Preferably, in the intensity correction unit, the intensity of each point is corrected by:
I_new(ξ, η) = I_old(ξ, η) p_ave / p(ξ, η)
where I_new(ξ, η) and I_old(ξ, η) are the corrected and original image intensities at the point (ξ, η), respectively.
Compared with the prior art, the technical scheme of the invention has the following advantages:
the geometric distortion of the image caused by the rotation of the object in the three dimensional directions is effectively corrected, the problem that the sizes of corresponding actual sizes of pixels are not uniform due to different photographing distances is effectively adjusted, and meanwhile, the image distortion caused by an infrared optical system and the problem of uneven brightness distribution of the image caused by the nonuniformity of illumination intensity are effectively corrected, so that the problems of distortion or uniformity and the like cannot occur after the image is spliced or synthesized.
Drawings
FIG. 1 is a schematic view of a projection imaging system according to an embodiment of the present invention;
FIG. 2 is a comparison of a pre-corrected image and a post-corrected image according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an accurate jigsaw puzzle after geometric distortion correction according to an embodiment of the present invention;
FIG. 4 is an x-y plan view of a plane X-Z of an embodiment of the present invention rotated about the z-axis (both the z-axis and the Z-axis point toward the reader);
FIG. 5 is an imaging plane rotated at an angle about the z-axis according to an embodiment of the present invention;
FIG. 6 is an imaging plane rotated by an angle about the x-axis according to an embodiment of the present invention;
FIG. 7 is an imaging plane rotated by an angle about the y-axis according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of correcting geometric distortion of an image according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of projection distortion correction according to an embodiment of the present invention;
FIG. 10 is a graph of the pixel intensity distribution along a line through the center of a grayscale image according to an embodiment of the present invention;
FIG. 11 is an updated intensity curve for an embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
Example 1
In the present embodiment, to further embody the idea of the invention, a specific implementation of the method is described taking a camera, in particular an infrared camera, as an example. It should be noted that although an infrared camera is used as the example below, the application of the technical solution of the invention is not limited to the infrared-camera subdivision field, and the example should not be interpreted as limiting the scope of the invention.
The imaging of a camera follows the rules of projection imaging. In an ideal camera system (see FIG. 1), without errors introduced by the various components (such as lens aberrations), consider a three-dimensional (3D) Cartesian coordinate system x-y-z (the camera coordinate system) as the physical space for imaging: the camera is located at the origin o(0,0,0) and points in the positive y direction, also referred to as the camera axis. The imaging plane (ξ, η) inside the camera is therefore perpendicular to the y-axis, its distance from the origin is y_c, and the intersection of the imaging plane with the y-axis is the center of the imaging plane. Owing to the particular form of the projection formulas, y_c may take any value without affecting the result; hereafter we assume that the imaging plane coincides with the center of the actual plane, i.e. y_c is taken as the distance between the center of the actual plane and the camera. As shown in FIG. 1, a point (x, y, z) in physical space is projected to a point (ξ, η) in the image plane according to the projection geometry:
ξ = x y_c/y (1)
η = z y_c/y (2)
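Equations (1)-(2) amount to a two-line helper; a minimal sketch (the function name is illustrative):

```python
def project(x, y, z, y_c):
    # Equations (1)-(2): perspective projection of a physical point (x, y, z)
    # onto the image plane at distance y_c from the camera origin.
    return x * y_c / y, z * y_c / y
```

For example, a point twice as far away as the image plane (y = 2 y_c) projects at half its physical offsets.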
generalizing to all points (x, y, z) on one plane in physical space: when all points of the plane are projected onto the image plane using equations (1) and (2), we obtain an image of that plane. In real space an object typically has six degrees of freedom of motion: three translations along the x, y and z directions and three rotations about the x, y and z axes. For the plane X-Z, if we place it in the coordinate system of FIG. 1, centered on the y-axis, with its X-axis and Z-axis parallel to the x-axis and z-axis respectively (i.e. the X-Z plane perpendicular to the y-axis), four degrees of freedom remain for its position in three-dimensional space: its distance from the camera and the three rotations. It should be noted that the coordinate system X-Z is attached to the plane, so it moves with the object surface (although its origin (X = 0, Z = 0) is fixed), while the coordinate systems x-y-z and ξ-η are fixed in space. In the following we assume that the plane X-Z and the image plane ξ-η initially coincide with each other (i.e. X = ξ and Z = η). Thus the value of y_c determines the distance between the plane X-Z and the camera, and before any rotation the 3D spatial coordinates of a point (X, Z) are:
x = X (3)
y = y_c (4)
z = Z (5)
when the X-Z plane is changed in four degrees of freedom, and its image is imaged in the camera, we observe that:
(1) rotation about the z-axis causes linear geometric distortion of the image;
(2) similarly, rotation about the x-axis also results in linear geometric distortion of the image;
(3) rotation about the y-axis produces an image that is rotated but without geometric distortion;
(4) a change in distance along the y-axis will only change the image size.
In addition, since infrared optical system design usually requires a large system field of view, geometric distortion of the image is inevitable; it includes both radial and tangential distortion, with the radial distortion usually much larger than the tangential. The technical solution of this embodiment is therefore particularly suitable for eliminating barrel distortion of the optical system, although the method also applies to other types of image distortion, including error distortion introduced by conventional devices. Furthermore, since an active excitation light source is generally used in infrared nondestructive inspection (for example, a single light source pointed at the center of the object surface), each generated image may be superimposed with the illumination intensity variation, making the brightness distribution of the image uneven.
In a specific embodiment, the planar imaging considered by the method involves three rotational degrees of freedom and one degree of freedom in the distance perpendicular to the imaging plane; all four may introduce imaging errors. Because the invention considers only planar imaging, once the three angles and the distance are determined, the dimensions in the x and z directions are uniquely determined. The technical problems addressed by the invention comprise four aspects: first, correcting the geometric distortion of the image caused by rotation of the object in the three spatial directions; second, correcting the non-uniform physical pixel size caused by different photographing distances; third, correcting the image distortion caused by the infrared optical system; and fourth, correcting the uneven brightness distribution caused by non-uniform illumination intensity.
Before discussing the image geometric distortion correction method, we first define the conditions necessary for seamless stitching of multiple images. First, the corrected image should preserve the orthogonality of straight lines on the plane. Second, line lengths in the corrected image should be linearly proportional to line lengths on the object surface. FIG. 2 schematically shows the format of a square flat image before and after correction. The left image is the actual imaging plane, i.e. the image before correction; it contains the geometric distortion due to the changes in the four degrees of freedom as well as the distortion due to the optical system, and its coordinate system is the ξ-η system with the figure center as origin. The right image is the 2D linearly orthogonal image of the actual square plate, i.e. the corrected image, represented in the coordinate system X-Z. This format ultimately allows exact geometric matching and stitching of overlapping images, as schematically illustrated in FIG. 3. The only thing missing in FIG. 2 is the gray (or color) distribution, which can be obtained from the original image (left image in FIG. 2) once the relationship between the points (X, Z) and (ξ, η) is determined. The following discussion is therefore directed at establishing such a mapping between points of the pre-correction image (input image) and the post-correction image (output image). To address this problem, in one specific embodiment, the method adopted by the invention is as follows:
a method for correcting geometric distortion of an image, the method comprising the steps of:
1) acquiring the length Imax and the width Jmax of the focal-plane size of the thermal imager, and the horizontal field angle α of the lens; the vertical field angle α·Jmax/Imax can then be obtained;
2) measuring the distance y_c from the center of the object plane to the lens of the thermal imager;
3) Acquiring original distorted image data, and establishing an original image coordinate system xi-eta with the original image center as an origin, the abscissa as a xi axis and the ordinate as an eta axis;
4) and establishing a coordinate system X-Z of the corrected image, wherein the center of the image is taken as an origin. For simplicity, the corrected image takes the same image size as the original image, i.e., image dimensions with length Imax, width Jmax;
5) we denote the pixel in the final corrected image as (I, J); then for a point (X, Z), given a pixel size Δx, I = X/Δx and J = Z/Δx;
6) a pixel in the original image is denoted (i, j) and corresponds to a point (ξ, η) in the original image, with i = ξ/δ and j = η/δ, where δ is the size of each pixel in the original image, determined by:
δ = 2 y_c tan(α/2)/Imax (6)
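Reading equation (6) as using tan rather than the printed "tan-1" (an assumption, but the one consistent with the field-of-view geometry: the field at distance y_c has width 2·y_c·tan(α/2), which divided by Imax horizontal pixels gives the pixel size), the computation can be sketched as:

```python
import math

def pixel_size(y_c, alpha, Imax):
    # Equation (6) under the stated reading: width of the field of view at
    # distance y_c, divided by the number of horizontal pixels Imax.
    return 2.0 * y_c * math.tan(alpha / 2.0) / Imax
```

For instance, with a 90-degree horizontal field angle the field width at distance y_c equals 2 y_c, so two pixels across would each cover y_c.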
7) correcting the geometric distortion caused by rotation of the X-Z plane about the z-axis. FIG. 4 shows the geometric relationship in the x-y plane (at arbitrary z) after the plane X-Z has rotated about the z-axis by an angle a_z. From this figure we can easily obtain the 3D spatial coordinates of the rotated point (X, Z):
x = X cos a_z (7)
y = y_c + X sin a_z (8)
z = Z (9)
equation (9) indicates that rotation about the Z-axis has no effect on the Z-value.
In FIG. 4 we also note a special point in the image plane,
ξ0 = y_c tan a_z (10)
which corresponds to the point of the plane X-Z closest to the camera. This point ξ0 is usually easy to find in the image, see FIG. 5.
For a small rotation angle, any straight surface line parallel to the x-axis becomes a curve; for the line connecting the opposing peak points A and B of the two horizontal curves in the figure, the intersection with the x-axis is ξ0. The rotation angle a_z of the plane about the z-axis can then be calculated using equation (10).
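Equation (10) and equations (7)-(9) can be sketched directly; `rotation_angle_z` inverts (10) to recover a_z from the located point ξ0, and `rotate_about_z` applies (7)-(9). The function names are illustrative:

```python
import math

def rotation_angle_z(xi0, y_c):
    # Invert equation (10), xi0 = y_c * tan(a_z), to recover the rotation angle.
    return math.atan(xi0 / y_c)

def rotate_about_z(X, Z, a_z, y_c):
    # Equations (7)-(9): 3D coordinates of the plane point (X, Z) after the
    # X-Z plane rotates about the z-axis by a_z; Z is unchanged (equation (9)).
    return X * math.cos(a_z), y_c + X * math.sin(a_z), Z
```

With a_z = 0 the point stays in the undistorted position of equations (3)-(5).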
8) Correcting for geometric distortion caused by rotation of the X-Z plane about the X-axis, see fig. 6;
similar to equation (5), rotation about the x-axis does not change the value of x, so equations (7) and (10) remain valid. Find the point η0 on the z-axis closest to the thermal imager: for a small rotation angle, any straight surface line parallel to the z-axis becomes a curve, and for the line connecting the opposing peak points C and D of the two vertical curves in the figure, the intersection with the z-axis is η0.
9) using the formula η0 = y_c tan a_x, the rotation angle a_x of the plane about the x-axis can be calculated;
10) the geometric distortion caused by rotation of the X-Z plane about the x-axis and the z-axis can be corrected according to:
x = X′ cos a_z
y = y_c + X′ sin a_z + Z′ sin a_x (11)
z = Z′ cos a_x
thereby obtaining the mapping from the point (X′, Z′) to the physical-space point (x, y, z).
11) Correcting for image rotation caused by rotation of the X-Z plane about the y-axis, see fig. 7;
12) the image rotation caused by rotation about the y-axis can be corrected according to:
X′ = X cos a_y - Z sin a_y
Z′ = X sin a_y + Z cos a_y (12)
in a preferred embodiment this may be performed as follows: on the first execution, set a_y = 0 and obtain a corrected image; then estimate a_y from the deflection angle of that image and run the correction again.
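The in-plane rotation of step 12) is an ordinary 2D rotation; a sketch follows (the sign convention of the rotation is an assumption, since the original equation is an unrendered figure, but a_y = 0 leaving the point unchanged matches the text):

```python
import math

def rotate_about_y(X, Z, a_y):
    # Equation (12) as reconstructed here: in-plane rotation of (X, Z) by a_y.
    # With a_y = 0, (X', Z') coincides with (X, Z), as noted in the text.
    Xp = X * math.cos(a_y) - Z * math.sin(a_y)
    Zp = X * math.sin(a_y) + Z * math.cos(a_y)
    return Xp, Zp
```

The two-pass use described above would call this first with a_y = 0, measure the residual deflection of the resulting image, then call it again with the estimated angle.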
A relationship is now established from a plane point (X, Z) to the point (X′, Z′) and then to the physical-space point (x, y, z).
In this process the point (X′, Z′) is an intermediate result: equations (11) and (12) establish the mapping from the plane point (X, Z) to (X′, Z′) and then to the physical-space point (x, y, z); as equation (12) shows, if there is no rotation about the y-axis the point (X′, Z′) coincides with (X, Z).
13) Correcting geometric distortion of the image, as shown in fig. 8;
14) since the camera detector is a point detector relative to the actual plane X-Z, if the imaging surface is a spherical surface centered on the origin, the resulting image will be distorted, as shown in the schematic of FIG. 9. The point on the imaging plane corresponding to the point (x, y, z) on the actual plane should therefore be a new point (ξ′, η′). The new projection formulas are:
ξ′ = δαx/(rΔ) (13)
η′ = δαz/(rΔ) (14)
where
r = (x^2 + z^2)^(1/2) (15)
α = tan^(-1)(r/y) (16)
Δ = tan^(-1)(δ/y_c) (17)
from the above formula, we can finally get the mapping of the point (X, Z) on the actual plane to the point (ξ ', η') on the imaging plane, i.e. its mapping to the point on the original distorted image.
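Equations (13)-(17) combine into a single mapping; a sketch (the function name is illustrative, and the on-axis case r = 0 is handled by its limit, which sends the point to the image center):

```python
import math

def to_distorted_image(x, y, z, y_c, delta):
    # Map a physical point (x, y, z) to the point (xi', eta') on the
    # (distorted) imaging plane, equations (13)-(17).
    r = math.hypot(x, z)                  # (15) radial distance from the y-axis
    if r == 0.0:
        return 0.0, 0.0                   # on-axis limit: image center
    alpha = math.atan(r / y)              # (16) viewing angle of the point
    Delta = math.atan(delta / y_c)        # (17) angular size of one pixel
    xi = delta * alpha * x / (r * Delta)  # (13)
    eta = delta * alpha * z / (r * Delta) # (14)
    return xi, eta
```

For small angles, tan^(-1)(r/y) ≈ r/y and Δ ≈ δ/y_c, so the mapping reduces to the linear projection ξ ≈ x y_c/y of equation (1).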
FIG. 9 is a schematic diagram of projection distortion correction, with an intermediate curved surface introduced to determine the new projection point ξ′ in the x-y plane, where α_x = α x/r.
15) for each pixel (I, J) in the corrected image, input the specified pixel size Δx of the corrected image and find the corresponding point (X, Z) according to step 5);
16) obtaining the point (x, y, z) mapped into physical space according to equation (12) of step 12) and equation (11) of step 10), and mapping it to the point (ξ′, η′) of the original distorted image according to equations (13)-(17) of step 14), thereby finally establishing the relation between (X, Z) and (ξ′, η′);
17) according to step 6), the pixel point of the original distorted image corresponding to the point (ξ′, η′) can be found, and the pixel value of the point (I, J) in the corrected image is determined by an interpolation algorithm from the coordinates and pixel values of that pixel point and its neighbors.
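The text only specifies "an interpolation algorithm", so the kernel here is an assumption; bilinear interpolation over the four neighboring pixels is a common choice. In this sketch the coordinates are taken to be already converted to fractional array indices (column, row) via the pixel size δ:

```python
import numpy as np

def bilinear_sample(img, xi, eta):
    # Bilinear interpolation of img at fractional indices (xi = column,
    # eta = row), weighting the four surrounding pixels by their distances.
    h, w = img.shape
    x0, y0 = int(np.floor(xi)), int(np.floor(eta))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)   # clamp at the border
    fx, fy = xi - x0, eta - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

The correction loop would call this once per corrected pixel (I, J) at the mapped location (ξ′, η′).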
18) Correct the image size: from the measured value y_c and the specified pixel size Δx of the corrected image, the image size is automatically corrected to the desired size by the above steps.
19) Correcting image brightness non-uniformity;
In many cases artificial illumination sources are used in photography or videography. For example, if a single light source is pointed at the center of the surface, the central part of the image shows a higher intensity (or brightness), usually accompanied by a higher contrast in the grayscale image. If we draw a straight line through the center of the grayscale image, the pixel-intensity distribution along this line looks like the solid line in FIG. 10. Although the variation in pixel intensity is related to the scene content, the smooth dashed line in fig. 10 is directly related to the illumination non-uniformity. Since such unevenness is an artifact superimposed on the image, it should be removed so that only the image content remains.
20) Select one or more rectangular areas of the image for intensity equalization, taking care to avoid areas that should not be altered, such as defect areas, frames and scenes of interest, and perform the correction according to the following algorithm;
21) First, an estimate of the 2D illumination distribution over the surface of the area to be corrected is needed. As shown in fig. 10, the smooth dashed line (the centerline of the solid-curve envelope) is a good one-dimensional approximation of the illumination-intensity distribution. For a two-dimensional image, the median surface of the image-intensity distribution can be obtained by a least-squares fit of a two-dimensional polynomial function, expressed as:
p(ξ, η) = Σ_{k=0}^{M} Σ_{l=0}^{k} a_{kl}·ξ^(k−l)·η^l (18)
where M is the order of the polynomial, and l and k index the coefficients in the summation above and the corresponding powers of the two variables: k increases from 0 to M and l increases from 0 to k. Using a classical least-squares fitting method, the coefficients a_kl corresponding to a fixed value of M (e.g. M = 4) can easily be obtained.
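The least-squares fit above can be sketched with NumPy's `lstsq`; the basis term ξ^(k−l)·η^l is an assumption consistent with the index description (k from 0 to M, l from 0 to k), since the original formula is only reproduced as an image:

```python
import numpy as np

def fit_illumination(img, M=4):
    """Least-squares fit of a two-dimensional polynomial p(xi, eta) to
    the image intensities, giving a smooth estimate of the illumination
    distribution. Returns the fitted surface, same shape as img."""
    h, w = img.shape
    eta, xi = np.mgrid[0:h, 0:w].astype(float)
    # design matrix: one column per coefficient a_kl,
    # k = 0..M, l = 0..k, basis term xi^(k-l) * eta^l
    cols = [(xi ** (k - l) * eta ** l).ravel()
            for k in range(M + 1) for l in range(k + 1)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)
```

For large images or high M, normalizing ξ and η to [−1, 1] before building the design matrix improves numerical conditioning; the sketch omits that for brevity.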
22) Once p (ξ, η) is determined, the average value p of p over the entire image is calculatedave。paveAnd p (ξ, η) represent the equalized intensity and the raw non-uniform illumination intensity, respectively, we can correct the intensity of each point in the image using the following formula:
I_new(ξ, η) = I_old(ξ, η)·p_ave/p(ξ, η) (19)
where I_new(ξ, η) and I_old(ξ, η) are the corrected-image and original-image intensities (or gray levels) at the point (ξ, η), respectively.
23) Applying equation (19) to the pixel-intensity distribution curve of fig. 10 yields a new intensity curve with a more uniform average intensity and contrast, as shown in FIG. 11; equation (19) thus eliminates the effect of illumination non-uniformity in the image. FIG. 11 shows the corrected version of the intensity profile of FIG. 10 along a line through the center of the image.
24) For the intensity value I_old(ξ, η) of each valid pixel point in the original image, the above steps yield the corresponding new corrected image intensity (or gray level) I_new(ξ, η).
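Steps 22)-24) amount to a single per-pixel rescaling once the illumination surface p is known; a sketch, assuming `img` and `p` are same-shaped NumPy arrays:

```python
import numpy as np

def equalize_illumination(img, p):
    """Apply formula (19): scale each pixel by p_ave / p(xi, eta) so the
    fitted illumination surface p becomes flat at its mean value."""
    p_ave = p.mean()
    return img * (p_ave / p)
```

If p was fitted to the image itself, the corrected image has the same mean brightness as the original but with the smooth illumination gradient divided out.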
Example 2
In yet another embodiment, the present invention further provides a planar image correction system capable of executing the method described in embodiment 1. The structure given here is only a preferred configuration; a system structure obtained by a person skilled in the art through conventional modification of the disclosed technical solution should be considered to fall within the protection scope of the present invention as long as its main functions are the same as those of the disclosed system. Preferably, the system comprises:
the image acquisition module is used for acquiring original distorted image data;
the geometric distortion correction module is used for carrying out geometric distortion correction on the original distorted image and obtaining a size correction image;
the intensity correction module is used for correcting the illumination intensity of the image after the size correction;
an image output module for outputting a final corrected image;
the geometric distortion correction module includes at least: the device comprises an x-axis rotation correction unit, a y-axis rotation correction unit, a z-axis rotation correction unit, a basic data acquisition unit and a size correction unit;
the basic data acquisition unit is used for acquiring basic data of the image acquisition module, measuring the distance y_c from the center of the object plane to the lens, acquiring original distorted image data from the image acquisition module, and establishing an original-image coordinate system with the center of the original distorted image as its origin;
the x-axis rotation correction unit, the y-axis rotation correction unit and the z-axis rotation correction unit are respectively used for correcting geometric distortion caused by rotation around x, y and z axes;
and the size correction unit is used for correcting the image to the required size, based on the correction results of the x-axis, y-axis and z-axis rotation correction units and the specified pixel size Δx of the corrected image, to obtain a size-corrected image.
Preferably, the intensity correction module comprises at least: a two-dimensional illumination distribution estimation unit and an intensity correction unit;
the two-dimensional illumination distribution estimation unit is used for obtaining 2D illumination distribution estimation of an area to be corrected through least square fitting and determining original non-uniform illumination intensity;
and the intensity correction unit is used for correcting the intensity of each point based on the original non-uniform illumination intensity and the balanced intensity.
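As a hypothetical skeleton only (the function names and parameters are this sketch's own, not the patent's), the module layout above can be mirrored as a two-stage pipeline: geometric-distortion correction with size correction, followed by illumination-intensity correction:

```python
def correct_planar_image(raw, y_c, delta, delta_x,
                         geometry_correct, intensity_correct):
    """Hypothetical top-level pipeline mirroring the module layout:
    geometric correction (using y_c, the source pixel size delta and
    the target pixel size delta_x) followed by intensity correction."""
    sized = geometry_correct(raw, y_c=y_c, delta=delta, delta_x=delta_x)
    return intensity_correct(sized)
```

Concrete implementations of the two stages would be supplied by the geometric-distortion and intensity-correction modules described above.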
Preferably, the relationship between a point (X, Z) on the actual plane and a pixel (I, J) in the corrected image is expressed as I = X/Δx and J = Z/Δx, where Δx is the pixel size; the relationship between a pixel (i, j) in the original distorted image and a point (ξ, η) in the original distorted image is expressed as i = ξ/δ and j = η/δ, where δ is the size of each pixel in the original distorted image.
Preferably, for geometric distortion caused by rotation about the z-axis, the z-axis rotation correction unit finds the point ξ₀ on the X-axis of the original distorted image plane X-Z closest to the image acquisition device, determines the rotation angle a_z of the X-Z plane about the z-axis from ξ₀ and y_c, and determines the three-dimensional spatial coordinates (x, y, z) of the rotated point (X, Z) from y_c and a_z;
preferably, for geometric distortion caused by rotation about the x-axis, the x-axis rotation correction unit finds the point η₀ on the Z-axis of the original distorted image plane X-Z closest to the image acquisition device and determines the rotation angle a_x of the X-Z plane about the x-axis from η₀ and y_c.
Preferably, the relationship from the point (X′, Z′) to the three-dimensional spatial coordinates (x, y, z) is established as:
x = X′·cos a_z
y = y_c + X′·sin a_z + Z′·sin a_x
z = Z′·cos a_x
preferably, the y-axis rotation correction unit corrects the geometric distortion rotated around the y-axis according to the following manner:
X′ = X·cos a_y − Z·sin a_y
Z′ = X·sin a_y + Z·cos a_y
where a_y is the angle of rotation about the y-axis.
Preferably, when the size correction unit performs the distorted-image correction, the mapping from the physical-space point (x, y, z) to the point (ξ′, η′) of the original distorted image is as follows:
ξ′ = δ·α·x/(r·Δ)
η′ = δ·α·z/(r·Δ)
where r = (x² + z²)^(1/2), α = tan⁻¹(r/y), and Δ = tan⁻¹(δ/y_c).
Preferably, in the two-dimensional illumination distribution estimation unit, the least square fitting is performed in the following manner:
p(ξ, η) = Σ_{k=0}^{M} Σ_{l=0}^{k} a_{kl}·ξ^(k−l)·η^l
where M is the order of the polynomial, and l and k index the coefficients in the summation above and the corresponding powers of the two variables: k increases from 0 to M and l increases from 0 to k.
Preferably, in the intensity correction unit, the intensity of each point is corrected by:
I_new(ξ, η) = I_old(ξ, η)·p_ave/p(ξ, η)
where I_new(ξ, η) and I_old(ξ, η) are the corrected-image and original-image intensities at the point (ξ, η), respectively.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A planar image correction method, characterized in that the method comprises:
s1, acquiring basic data of the image acquisition equipment, measuring the distance y_c from the center of the object plane to the lens, acquiring original distorted image data, and establishing an original-image coordinate system with the center of the original distorted image as the origin;
s2, establishing a coordinate system X-Z of the corrected image, and taking the center of the corrected image as an origin;
s3, correcting geometric distortion caused by rotation of the X-Z plane about the z-axis and the x-axis; correcting geometric distortion caused by rotation of the X-Z plane about the y-axis; establishing a relation between a point (X, Z) on the actual plane and a physical-space point (x, y, z);
s4, establishing a mapping from the physical-space point (x, y, z) to the point (ξ′, η′) of the original distorted image;
s5, determining the pixel point of the original distorted image corresponding to the point (ξ′, η′), and determining the pixel values of the points in the corrected image with an interpolation algorithm from the coordinates and pixel values of that pixel point and its adjacent points; based on y_c and the specified pixel size Δx of the corrected image, correcting the image to the required size to obtain a size-corrected image;
s6, obtaining a 2D illumination-distribution estimate of the region to be corrected by least-squares fitting of the size-corrected image, and determining the original non-uniform illumination intensity p(ξ, η);
s7, calculating the equalized intensity p_ave based on p(ξ, η), and correcting the intensity of each point based on the original non-uniform illumination intensity and the equalized intensity;
in S3, for geometric distortion caused by rotation about the z-axis, the point ξ₀ on the X-axis of the original distorted image plane X-Z closest to the image acquisition equipment is found; the rotation angle a_z of the X-Z plane about the z-axis is determined from ξ₀ and y_c; and the three-dimensional spatial coordinates (x, y, z) of the rotated point (X, Z) are determined from y_c and a_z;
for geometric distortion caused by rotation about the x-axis, the point η₀ on the Z-axis of the original distorted image plane X-Z closest to the image acquisition device is found, and the rotation angle a_x of the X-Z plane about the x-axis is determined from η₀ and y_c;
establishing the relationship from the point (X′, Z′) to the three-dimensional spatial coordinates (x, y, z):
x = X′·cos a_z
y = y_c + X′·sin a_z + Z′·sin a_x
z = Z′·cos a_x
in S3, the geometric distortion of the rotation about the y-axis is corrected in the following manner:
X′ = X·cos a_y − Z·sin a_y
Z′ = X·sin a_y + Z·cos a_y
where a_y is the angle of rotation about the y-axis.
2. The method of claim 1, wherein the basic data of the image capture device includes the length and width of the focal plane and the horizontal field-of-view angle of the lens.
3. The method according to claim 1, wherein in step S2 the corrected image and the original distorted image have the same image size.
4. Method according to claim 1, characterized in that the relationship between a point (X, Z) on the actual plane and a pixel (I, J) in the corrected image is expressed as I = X/Δx and J = Z/Δx, where Δx is the pixel size; the relationship between a pixel (i, j) in the original distorted image and a point (ξ, η) in the original distorted image is expressed as i = ξ/δ and j = η/δ, where δ is the size of each pixel in the original distorted image.
5. The method according to claim 1, wherein in S4 the mapping relation from the physical-space point (x, y, z) to the point (ξ′, η′) of the original distorted image is:
ξ′ = δ·α·x/(r·Δ)
η′ = δ·α·z/(r·Δ)
where r = (x² + z²)^(1/2), α = tan⁻¹(r/y), and Δ = tan⁻¹(δ/y_c).
6. The method of claim 1, wherein in the step S6, the least squares fitting is performed by:
p(ξ, η) = Σ_{k=0}^{M} Σ_{l=0}^{k} a_{kl}·ξ^(k−l)·η^l
where M is the order of the polynomial and l and k index the corresponding powers of the two variables.
7. The method according to claim 1, wherein in S7, the intensity of each point is corrected by:
I_new(ξ, η) = I_old(ξ, η)·p_ave/p(ξ, η)
where I_new(ξ, η) and I_old(ξ, η) are the corrected-image and original-image intensities at the point (ξ, η), respectively.
8. A planar image correction system, the system comprising:
the image acquisition module is used for acquiring original distorted image data;
the geometric distortion correction module is used for carrying out geometric distortion correction on the original distorted image and obtaining a size correction image;
the intensity correction module is used for correcting the illumination intensity of the image after the size correction;
an image output module for outputting a final corrected image;
the geometric distortion correction module includes at least: the device comprises an x-axis rotation correction unit, a y-axis rotation correction unit, a z-axis rotation correction unit, a basic data acquisition unit and a size correction unit;
the basic data acquisition unit is used for acquiring basic data of the image acquisition module, measuring the distance y_c from the center of the object plane to the lens, acquiring original distorted image data from the image acquisition module, and establishing an original-image coordinate system with the center of the original distorted image as its origin;
the x-axis rotation correction unit, the y-axis rotation correction unit and the z-axis rotation correction unit are respectively used for correcting geometric distortion caused by rotation around x, y and z axes;
the size correction unit is used for correcting the image to the required size, based on the correction results of the x-axis, y-axis and z-axis rotation correction units and the specified pixel size Δx of the corrected image, to obtain a size-corrected image;
for geometric distortion caused by rotation about the z-axis, the point ξ₀ on the X-axis of the original distorted image plane X-Z closest to the image acquisition equipment is found; the rotation angle a_z of the X-Z plane about the z-axis is determined from ξ₀ and y_c; and the three-dimensional spatial coordinates (x, y, z) of the rotated point (X, Z) are determined from y_c and a_z;
for geometric distortion caused by rotation about the x-axis, the point η₀ on the Z-axis of the original distorted image plane X-Z closest to the image acquisition device is found, and the rotation angle a_x of the X-Z plane about the x-axis is determined from η₀ and y_c;
For geometric distortion of rotation about the y-axis, correction is made according to the following:
X′ = X·cos a_y − Z·sin a_y
Z′ = X·sin a_y + Z·cos a_y
where a_y is the angle of rotation about the y-axis;
establishing the relationship from the point (X′, Z′) to the three-dimensional spatial coordinates (x, y, z) as:
x = X′·cos a_z
y = y_c + X′·sin a_z + Z′·sin a_x
z = Z′·cos a_x
9. the system of claim 8, wherein the intensity correction module comprises at least: a two-dimensional illumination distribution estimation unit and an intensity correction unit;
the two-dimensional illumination distribution estimation unit is used for obtaining 2D illumination distribution estimation of an area to be corrected through least square fitting and determining original non-uniform illumination intensity;
and the intensity correction unit is used for correcting the intensity of each point based on the original non-uniform illumination intensity and the balanced intensity.
CN201910848287.6A 2019-05-27 2019-09-09 Plane image correction method and system Active CN110738608B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019104469321 2019-05-27
CN201910446932 2019-05-27

Publications (2)

Publication Number Publication Date
CN110738608A CN110738608A (en) 2020-01-31
CN110738608B true CN110738608B (en) 2022-02-25

Family

ID=69268078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910848287.6A Active CN110738608B (en) 2019-05-27 2019-09-09 Plane image correction method and system

Country Status (1)

Country Link
CN (1) CN110738608B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598784B (en) * 2020-04-23 2022-08-02 中国科学院上海技术物理研究所 Image rotation correction method based on 45-degree mirror wide-width multi-element parallel scanning imaging
US11910089B2 (en) * 2020-07-15 2024-02-20 Corephotonics Lid. Point of view aberrations correction in a scanning folded camera
CN112053369B (en) * 2020-08-27 2024-07-19 西安迪威码半导体有限公司 Fusion algorithm for low-delay image de-distortion and barrel mapping
CN112288649B (en) * 2020-10-27 2024-07-19 江苏安狮智能技术有限公司 Image correction method and device for perspective imaging distortion of cylindrical object
CN114511598A (en) * 2021-01-30 2022-05-17 威海威高骨科手术机器人有限公司 X-ray image inspection and correction method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102088569A (en) * 2010-10-13 2011-06-08 首都师范大学 Sequence image splicing method and system of low-altitude unmanned vehicle
CN104997529A (en) * 2015-06-30 2015-10-28 大连理工大学 Method for correcting cone beam CT system geometric distortion based on symmetrically repetitive template
CN106525004A (en) * 2016-11-09 2017-03-22 人加智能机器人技术(北京)有限公司 Binocular stereo vision system and depth measuring method
CN107105209A (en) * 2017-05-22 2017-08-29 长春华懋科技有限公司 Projected image geometric distortion automatic correction system and its bearing calibration
CN108227348A (en) * 2018-01-24 2018-06-29 长春华懋科技有限公司 Geometric distortion auto-correction method based on high-precision vision holder
CN108510445A (en) * 2018-03-30 2018-09-07 长沙全度影像科技有限公司 A kind of Panorama Mosaic method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on geometric distortion correction methods for CCD optical systems; Huang Xiping et al.; Chinese Journal of Medical Instrumentation; 2016-06-29; pp. 1-3 *
Correction method for geometric artifacts in non-closed CT scan data; Yu Ping et al.; Acta Optica Sinica; 2015-06-30; pp. 111-119 *

Also Published As

Publication number Publication date
CN110738608A (en) 2020-01-31

Similar Documents

Publication Publication Date Title
CN110738608B (en) Plane image correction method and system
CN106875339B (en) Fisheye image splicing method based on strip-shaped calibration plate
CN108489395B (en) Vision measurement system structural parameters calibration and affine coordinate system construction method and system
CN107248178B (en) Fisheye camera calibration method based on distortion parameters
CN110099267B (en) Trapezoidal correction system, method and projector
CN105096329B (en) Method for accurately correcting image distortion of ultra-wide-angle camera
CN113160339B (en) Projector calibration method based on Molaque law
CN104778694B (en) A kind of parametrization automatic geometric correction method shown towards multi-projection system
Douxchamps et al. High-accuracy and robust localization of large control markers for geometric camera calibration
CN111667536A (en) Parameter calibration method based on zoom camera depth estimation
CN103994732B (en) A kind of method for three-dimensional measurement based on fringe projection
CN109285195B (en) Monocular projection system pixel-by-pixel distortion correction method based on large-size target and application thereof
CN114359405A (en) Calibration method of off-axis Samm 3D line laser camera
CN110345921A (en) Stereoscopic fields of view vision measurement and vertical axial aberration and axial aberration bearing calibration and system
CN109474814A (en) Two-dimensional calibration method, projector and the calibration system of projector
JP2011086111A (en) Imaging apparatus calibration method and image synthesis device
CN111047651B (en) Method for correcting distorted image
Yu et al. Calibration for camera–projector pairs using spheres
CN116625258A (en) Chain spacing measuring system and chain spacing measuring method
JP7489253B2 (en) Depth map generating device and program thereof, and depth map generating system
CN110443856A (en) A kind of 3D structure optical mode group scaling method, storage medium, electronic equipment
CN115661226B (en) Three-dimensional measuring method of mirror surface object, computer readable storage medium
Wu et al. Unsupervised texture reconstruction method using bidirectional similarity function for 3-D measurements
Von Gioi et al. Towards high-precision lens distortion correction
Ai et al. A method for correcting non-linear geometric distortion in ultra-wide-angle imaging system

Legal Events

Date Code Title Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant