CN118710564B - Image correction method and electronic equipment - Google Patents
Image correction method and electronic equipment
- Publication number
- CN118710564B (application CN202410876796.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- point
- points
- offset
- image block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/80: Geometric correction (under G06T5/00 Image enhancement or restoration)
- G06T3/60: Rotation of whole images or parts thereof (under G06T3/00 Geometric image transformations in the plane of the image)
- G06T7/11: Region-based segmentation (under G06T7/10 Segmentation; Edge detection, G06T7/00 Image analysis)
- G06T7/50: Depth or shape recovery (under G06T7/00 Image analysis)
- G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration (under G06T7/00 Image analysis)
- G06T2207/20021: Dividing image into blocks, subimages or windows (under G06T2207/20 Special algorithmic details, G06T2207/00 Indexing scheme for image analysis or image enhancement)

All of the above codes fall under G (Physics), G06 (Computing or calculating; counting), G06T (Image data processing or generation, in general).
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The application discloses an image correction method and an electronic device. The method includes: obtaining a reference image and an image to be corrected; selecting a plurality of source control points in the image to be corrected and a plurality of corresponding target points in the reference image; determining, according to the source control points and their corresponding target points, at least one mapping function that represents the mapping of pixel points in the image to be corrected to corresponding pixel points in the reference image; determining, according to the mapping function, the offset corresponding to each pixel point in the image to be corrected; and correcting the image to be corrected according to these offsets to obtain a corrected image. Because the at least one mapping function is determined from a plurality of pixel-point pairs rather than from a mapping between the planes of the two images, an image to be corrected that contains depth parallax can be corrected accurately.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image correction method and an electronic device.
Background
In application scenarios that require accurate analysis and feature extraction of images (for example, processing medical or satellite images), image quality directly determines the accuracy and efficiency of subsequent processing. During actual shooting, problems such as the surface machining quality of the camera lens or the distribution of adhesives may deform the acquired images to different degrees, affecting the extraction of image features. The acquired images therefore need to be corrected.
At present, a common image correction method is homography transformation. It performs geometric correction by establishing a mapping between the plane of the image to be corrected and the plane of the reference image, and mapping the former plane onto the latter.
However, homography transformation assumes a simple perspective relation between the two planes. This assumption no longer holds when the image to be corrected contains depth information, i.e., when different objects lie at different distances. An image with depth parallax therefore often cannot be corrected accurately by homography transformation.
Disclosure of Invention
In order to overcome the defects in the prior art, the application provides an image correction method and an electronic device, wherein at least one mapping function representing the mapping of pixel points in an image to be corrected to corresponding pixel points in a reference image is determined according to a plurality of source control points and a plurality of target points. Because the at least one mapping function can be determined based on a plurality of pixel-point pairs, accurate depth information of the image to be corrected can be preserved, relative to correction based on a mapping between the planes of the two images, so that an image to be corrected with depth parallax can be corrected accurately.
In order to solve the above problems, the present application provides the following technical solutions:
in a first aspect, an embodiment of the present application provides an image correction method, including:
acquiring a reference image and an image to be corrected;
Selecting a plurality of source control points in the image to be corrected and a plurality of target points in the reference image, wherein the source control points are in one-to-one correspondence with the target points;
Determining at least one mapping function representing a mapping relationship of pixel points in the image to be corrected to corresponding pixel points in the reference image according to the plurality of source control points and the plurality of target points corresponding to the plurality of source control points one by one;
Determining the offset corresponding to each pixel point in the image to be corrected according to the mapping function;
And correcting the image to be corrected according to the offset corresponding to each pixel point in the image to be corrected, so as to obtain a corrected image.
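Read as a whole, the five steps above form a simple pipeline. The following sketch illustrates the data flow with a deliberately simplified translation-only mapping (an assumption made here for brevity; the embodiments below use a per-block rotation-plus-translation mapping). All function names are illustrative, not from the patent.

```python
import numpy as np

def estimate_translation(src, tgt):
    # Step S300 under a translation-only mapping p' = p + t (a simplifying
    # assumption; the embodiments use rotation plus translation per block).
    # The least-squares t is the mean of the point-pair differences.
    return (tgt - src).mean(axis=0)

def per_pixel_offsets(shape, t):
    # Step S400: with one global translation, every pixel shares the offset t.
    h, w = shape
    return np.broadcast_to(t, (h, w, 2)).copy()

# Steps S100/S200 stand-ins: three matched (source, target) point pairs,
# all displaced by (2, -1).
src = np.array([[10.0, 10.0], [30.0, 15.0], [22.0, 40.0]])
tgt = src + np.array([2.0, -1.0])

t = estimate_translation(src, tgt)       # S300
offsets = per_pixel_offsets((4, 4), t)   # S400
corrected = src + t                      # S500: move points by their offsets
```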
In some embodiments, the reference image and the image to be corrected include feature patterns, and the selecting the plurality of source control points in the image to be corrected and the plurality of target points in the reference image includes:
selecting the plurality of source control points from the image to be corrected;
And selecting the target points in the reference image, which correspond to the source control points one by one, based on the characteristic pattern in the reference image, the characteristic pattern in the image to be corrected and the source control points.
In some embodiments, the selecting a plurality of source control points in the image to be corrected and a plurality of target points in the reference image includes:
dividing the image to be corrected into a plurality of first image blocks;
randomly selecting at least one source control point from each first image block to obtain a plurality of source control points;
Dividing the reference image into a plurality of second image blocks corresponding to the first image blocks;
And selecting a corresponding target point in the second image block corresponding to the first image block for the source control point in each first image block.
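The block-division-and-selection scheme of these steps can be sketched as follows; the helper names and the 3 x 3 grid (matching Fig. 3) are illustrative, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def divide_into_blocks(h, w, rows, cols):
    """Split an h x w image into rows*cols equal rectangular blocks (step
    S210); returns (y0, y1, x0, x1) bounds per block. For simplicity this
    assumes h and w divide evenly by the grid size."""
    bh, bw = h // rows, w // cols
    return [(r * bh, (r + 1) * bh, c * bw, (c + 1) * bw)
            for r in range(rows) for c in range(cols)]

def pick_source_points(blocks, n_per_block=1):
    """Randomly draw n_per_block (x, y) control points inside each first
    image block (step S220)."""
    pts = []
    for (y0, y1, x0, x1) in blocks:
        for _ in range(n_per_block):
            pts.append((int(rng.integers(x0, x1)), int(rng.integers(y0, y1))))
    return pts

blocks = divide_into_blocks(90, 90, 3, 3)   # 9 blocks, as in Fig. 3
points = pick_source_points(blocks)
```

The reference image would be divided by the same rule (step S230), so that each second image block occupies the same bounds as its first image block.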
In some embodiments, the at least one mapping function includes a plurality of image block mapping functions, and the determining at least one mapping function representing a mapping relationship of a pixel point in the image to be corrected to a corresponding pixel point in the reference image according to the plurality of source control points and the plurality of target points corresponding to the plurality of source control points one to one includes:
Constructing a pending image block mapping function representing the mapping relation between each first image block and the corresponding second image block to obtain a plurality of pending image block mapping functions, wherein the pending image block mapping functions comprise pending rotation matrixes and translation matrixes;
constructing a corresponding cost function according to the coordinates of all source control points in each first image block, target points corresponding to all source control points in each first image block and a to-be-determined image block mapping function corresponding to each first image block;
And determining a rotation matrix and a translation matrix in the undetermined image block mapping function with the minimum value of the cost function according to the cost function, the coordinates of all source control points in each first image block and the coordinates of target points corresponding to all the source control points, and obtaining a final image block mapping function corresponding to each first image block.
In some embodiments, the constructing a corresponding cost function according to the coordinates of all source control points in each of the first image blocks, the target points corresponding to all the source control points in each of the first image blocks, and the pending image block mapping function corresponding to each of the first image blocks includes:
Calculating the distance from each source control point to a preset calculation point according to the coordinates of the source control point and the coordinates of the preset calculation point in each first image block;
determining a weight coefficient of the source control point in each first image block according to the distance from each source control point to the preset calculation point and the smoothed coefficient;
and constructing a corresponding cost function according to the coordinates of all source control points in each first image block, target points corresponding to all source control points in each first image block, weight coefficients of all source control points in each first image block and a pending image block mapping function corresponding to each first image block.
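The patent does not specify the exact weight formula, only that each weight depends on the distance from the source control point to the preset calculation point and on a smoothing coefficient. A Gaussian kernel is one common choice and is assumed in this sketch:

```python
import numpy as np

def control_point_weights(src_pts, calc_pt, sigma):
    """Weight each source control point by its distance to a preset
    calculation point, smoothed by coefficient sigma. The Gaussian form
    is an assumption; the embodiment only states that the weight is
    derived from the distance and a smoothing coefficient."""
    d = np.linalg.norm(np.asarray(src_pts, float) - np.asarray(calc_pt, float),
                       axis=1)
    w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))  # closer points weigh more
    return w / w.sum()                          # normalize to sum to 1

# three control points at distances 0, 5 and 10 from the calculation point
w = control_point_weights([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]],
                          [0.0, 0.0], sigma=5.0)
```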
In some embodiments, the first image block has at least one corresponding calculation point, the at least one calculation point includes a representative point of the first image block, the offset includes an offset on a first coordinate axis and an offset on a second coordinate axis, and the determining, according to the mapping function, the offset corresponding to each pixel point in the image to be corrected includes:
Determining coordinates of mapping points corresponding to representative points of each first image block according to a final image block mapping function corresponding to each first image block;
Subtracting the coordinates on the first coordinate axis of the representative point from the coordinates on the first coordinate axis of the mapping point corresponding to the representative point to obtain the offset on the first coordinate axis corresponding to the representative point;
Subtracting the coordinates on the second coordinate axis of the representative point from the coordinates on the second coordinate axis of the mapping point corresponding to the representative point to obtain the offset on the second coordinate axis corresponding to the representative point;
Obtaining the offset corresponding to all calculation points corresponding to all the first image blocks according to the offset corresponding to the representative point of each first image block;
And calculating the offset corresponding to other pixel points except the plurality of calculation points in the image to be corrected according to the offset corresponding to all calculation points corresponding to each first image block.
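The representative-point offsets described above reduce to two coordinate-wise subtractions after applying the block's final mapping. A minimal sketch, assuming the rigid mapping p' = R * p + t introduced later in the description:

```python
import numpy as np

def representative_offset(R, t, rep_pt):
    """Offset of a block's representative point: map it through the block's
    final mapping p' = R @ p + t, then subtract the original coordinates
    axis by axis (the two subtractions of the embodiment above)."""
    p = np.asarray(rep_pt, float)
    return R @ p + t - p   # (offset on first axis, offset on second axis)

# with an identity rotation, the offset is simply the translation
t = np.array([1.5, -0.5])
off_id = representative_offset(np.eye(2), t, [10.0, 20.0])

# with a small rotation, the offset also depends on the point's position
theta = np.deg2rad(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
off_rot = representative_offset(R, t, [10.0, 20.0])
```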
In some embodiments, the first image block includes a first calculation point, a second calculation point, a third calculation point, and a fourth calculation point, and calculating offsets corresponding to pixel points in the image to be corrected, except for a plurality of calculation points, according to offsets corresponding to all calculation points corresponding to each first image block includes:
Determining the corresponding offset of each pixel point except the representative point of the first image block in the first image block according to the coordinates of the first calculation point corresponding to each pixel point and the corresponding first offset, the coordinates of the second calculation point and the corresponding second offset, the coordinates of the third calculation point and the corresponding third offset, the coordinates of the fourth calculation point and the corresponding fourth offset and the coordinates of each pixel point;
and obtaining the offset corresponding to the pixel points except the plurality of calculation points in the image to be corrected based on the offset corresponding to each pixel point in the first image block.
In some embodiments, the determining the offset corresponding to each pixel point in the first image block except the representative point of the first image block according to the coordinates and the corresponding first offset of the first calculation point corresponding to the first image block, the coordinates and the corresponding second offset of the second calculation point, the coordinates and the corresponding third offset of the third calculation point, the coordinates and the corresponding fourth offset of the fourth calculation point, and the coordinates of each pixel point includes:
Determining a first weight, a second weight, a third weight and a fourth weight based on the coordinates of each pixel point, the coordinates of the first calculation point corresponding to the first image block where each pixel point is located, the coordinates of the second calculation point, the coordinates of the third calculation point and the coordinates of the fourth calculation point;
the sum of the first weight multiplied by the first offset, the second weight multiplied by the second offset, the third weight multiplied by the third offset, and the fourth weight multiplied by the fourth offset is taken as the corresponding offset of each of the pixel points in the first image block except the representative point of the first image block.
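When the four calculation points are the corners of the block, the weighted sum above amounts to bilinear interpolation. The bilinear choice of weights is an assumption here, since the embodiment only states that four weights are derived from the coordinates:

```python
import numpy as np

def interpolate_offset(p, corners, corner_offsets):
    """Blend the four calculation-point offsets with bilinear weights
    (one plausible realization of the first-to-fourth weights above).
    corners = (x0, y0, x1, y1) bounds the pixel p = (x, y)."""
    x, y = p
    x0, y0, x1, y1 = corners
    u = (x - x0) / (x1 - x0)
    v = (y - y0) / (y1 - y0)
    w = np.array([(1 - u) * (1 - v),   # first calculation point  (x0, y0)
                  u * (1 - v),         # second calculation point (x1, y0)
                  (1 - u) * v,         # third calculation point  (x0, y1)
                  u * v])              # fourth calculation point (x1, y1)
    return w @ np.asarray(corner_offsets, float)

offs = [[1.0, 0.0], [3.0, 0.0], [1.0, 2.0], [3.0, 2.0]]
center = interpolate_offset((5.0, 5.0), (0.0, 0.0, 10.0, 10.0), offs)
corner = interpolate_offset((0.0, 0.0), (0.0, 0.0, 10.0, 10.0), offs)
```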
In some embodiments, the offset includes an offset on a first coordinate axis and an offset on a second coordinate axis, and the determining, according to the mapping function, an offset corresponding to each pixel point in the image to be corrected includes:
Determining a mapping point corresponding to each pixel point according to the mapping function;
Subtracting the coordinates on the first coordinate axis of each pixel point from the coordinates on the first coordinate axis of the mapping point corresponding to each pixel point to obtain the offset on the first coordinate axis corresponding to each pixel point;
And subtracting the coordinates on the second coordinate axis of each pixel point from the coordinates on the second coordinate axis of the mapping point corresponding to each pixel point, to obtain the offset on the second coordinate axis corresponding to each pixel point.
In a second aspect, an embodiment of the present application provides an electronic device, including:
At least one processor, and
A memory communicatively coupled to the at least one processor, wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image correction method as described in the first aspect.
The application provides an image correction method and electronic equipment, wherein at least one mapping function representing the mapping relation of pixel points in an image to be corrected to corresponding pixel points in a reference image is determined according to a plurality of source control points and a plurality of target points, the at least one mapping function can be determined based on a plurality of pixel point pairs, and accurate depth information of the image to be corrected can be reserved relative to a mode of correcting based on the mapping relation between planes of two images, so that the image to be corrected with depth parallax can be accurately corrected.
Drawings
Fig. 1 is a flowchart of an image correction method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a refinement flow of step S200 in fig. 1.
Fig. 3 is a schematic diagram of a plurality of first image blocks obtained by dividing an image to be corrected.
Fig. 4 is a schematic diagram of a refinement procedure of step S300 in fig. 1.
Fig. 5 is a schematic diagram of a refinement flow of step S400 in fig. 1.
Fig. 6A is a schematic diagram of a second image before correction provided by an embodiment of the present application.
Fig. 6B is a schematic diagram of a corrected second image provided by an embodiment of the present application.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The image correction method provided by the present application will be specifically described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of an image correction method according to an embodiment of the application. As shown in fig. 1, the image correction method includes steps S100 to S500.
And S100, acquiring a reference image and an image to be corrected.
In some embodiments, a reference image capturing device is used for capturing an image of a target object at a fixed position to obtain a first image, and an image capturing device to be corrected is used for capturing an image of the target object at the fixed position to obtain a second image.
Alternatively, the reference image capturing apparatus and the image capturing apparatus to be corrected are of the same model.
Optionally, the first image and the second image are the same size.
Alternatively, the target object may be any object, such as a denture model or a cup, or the like.
Optionally, the target object is a calibration plate. In this way, it is possible to facilitate the subsequent selection of the target point in the reference image corresponding to the source control point in the image to be corrected according to the characteristic pattern of the calibration plate in the image.
In some embodiments, the first image is a reference image and the second image is an image to be corrected, the first image and the second image each being a single channel image, the single channel image comprising only one channel component image.
In some embodiments, the first image and the second image are both multichannel images.
Optionally, the multi-channel image comprises a plurality of channel component images.
Optionally, the multi-channel image is an RGB (red, green, blue) image, which includes a red channel component image, a green channel component image, and a blue channel component image.
Optionally, the single channel image is any one of a red channel component image, a green channel component image, and a blue channel component image in the RGB image.
Optionally, the multi-channel image is an HSV (hue, saturation, value) image, which includes a hue channel component image, a saturation channel component image, and a brightness channel component image.
Optionally, the single channel image is any one of a hue channel component image, a saturation channel component image, and a brightness channel component image in the HSV image.
In some embodiments, the reference image is one channel component image in the first image and the image to be corrected is a channel component image in the second image corresponding to the channel component of the reference image. For example, the reference image is a red channel component image in the first image, and the image to be corrected is a red channel component image in the second image.
In some embodiments, a plurality of channel component images are acquired channel by channel through a dichroic prism of the image capturing apparatus to be corrected, and then combined to obtain the final acquired image. However, the channel component images acquired by this apparatus may be deformed to different degrees, so the combined image may exhibit chromatic dispersion, affecting subsequent image processing.
In some embodiments, the image correction method of the present application is used to correct each channel component image in the second image, and a plurality of corrected channel component images are combined to obtain a corrected second image. In this way, the corrected second image can be kept free from chromatic dispersion, thereby improving the accuracy of subsequent image processing.
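The per-channel correction-and-merge scheme can be sketched as below; `correct_channel` stands in for the full correction method of this application, and the 1-pixel shift used as the per-channel deformation is purely illustrative.

```python
import numpy as np

def correct_multichannel(image, correct_channel):
    """Correct each channel component image independently, then merge
    the corrected channels (the dispersion-avoidance scheme above).
    correct_channel is a placeholder for this application's per-channel
    correction."""
    channels = [correct_channel(image[..., c]) for c in range(image.shape[-1])]
    return np.stack(channels, axis=-1)

def undo_shift(ch):
    # stand-in correction: undo a known 1-pixel horizontal shift
    return np.roll(ch, -1, axis=1)

rgb = np.arange(2 * 4 * 3, dtype=float).reshape(2, 4, 3)
out = correct_multichannel(rgb, undo_shift)
```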
Step S200 of selecting a plurality of source control points in the image to be corrected and a plurality of target points in the reference image.
Wherein the source control points are in one-to-one correspondence with the target points.
Alternatively, the target point is regarded as the pixel point corresponding to the same real-world location as the source control point.
For example, the position of the source control point on the first calibration plate when the image to be corrected is acquired is regarded as the same as the position of the target point on the second calibration plate when the reference image is acquired.
In some embodiments, the reference image and the image to be corrected include a feature pattern.
Alternatively, the feature pattern refers to a salient visual element in the image that can be used to identify and distinguish different objects.
Optionally, the feature pattern is a pattern on a calibration plate.
Optionally, step S200 includes the following steps.
(1) And selecting a plurality of source control points in the image to be corrected.
Optionally, a plurality of source control points are randomly selected in the image to be corrected. In this way, the point pairs of the plurality of source control points and the target point can be uniformly distributed, so that the at least one mapping function determined later better represents the mapping relationship of the pixel point in the image to be corrected to the corresponding pixel point in the reference image.
Optionally, the number of source control points is 3 or more, for example 3, 4, 5, 6, 10, 16, 32, 64, 100, or 1000.
(2) And selecting a plurality of target points in the reference image, which correspond to the source control points one by one, based on the characteristic pattern in the reference image, the characteristic pattern in the image to be corrected and the source control points.
Alternatively, according to a first feature position of the source control point in the feature pattern in the image to be corrected, a pixel point at a position corresponding to the first feature position in the feature pattern is selected in the reference image as the target point corresponding to the source control point.
It should be noted that, because the image acquired by each image capturing apparatus may be deformed to a different degree, the target point is not necessarily the pixel point whose coordinates in the reference image equal those of the source control point, but rather the pixel point with the same features as the source control point. For example, if the source control point is located at the center of a circular pattern, the center pixel point of the corresponding circular pattern in the reference image is selected as the target point corresponding to the source control point.
In some embodiments, feature detection is performed on the image to be corrected and the reference image, and the matched feature point pair is taken as a point pair of the source control point and the target point. The point pair of the source control point and the target point, i.e. the pixel point pair.
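One common way to obtain such matched feature point pairs is nearest-neighbor descriptor matching with a ratio test; the sketch below assumes descriptors (e.g. from ORB or SIFT) have already been extracted, and uses tiny synthetic descriptors for illustration.

```python
import numpy as np

def match_point_pairs(desc_a, pts_a, desc_b, pts_b, ratio=0.8):
    """Pair feature points by nearest-neighbor descriptor distance with a
    ratio test (Lowe's criterion) -- one plausible realization of the
    'matched feature point pairs' above; descriptor extraction itself is
    assumed done."""
    pairs = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:   # keep only unambiguous matches
            pairs.append((pts_a[i], pts_b[j]))
    return pairs

desc_a = np.array([[0.0, 0.0], [10.0, 10.0]])
desc_b = np.array([[0.1, 0.0], [10.0, 9.9], [50.0, 50.0]])
pairs = match_point_pairs(desc_a, [(1, 1), (5, 5)],
                          desc_b, [(1, 2), (5, 6), (9, 9)])
```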
In some embodiments, the image to be corrected and the reference image are each divided into a plurality of image blocks of the same size, and then a plurality of source control points in the image to be corrected and a plurality of target points in the reference image are selected. In this way, the point pairs of the plurality of source control points and the target point can be further distributed uniformly, so that the at least one mapping function determined later can be further better used for representing the mapping relation of the pixel points in the image to be corrected to the corresponding pixel points in the reference image.
Referring to fig. 2, fig. 2 is a schematic diagram of a refinement flow of step S200 in fig. 1. As shown in fig. 2, step S200 includes steps S210 to S240.
Step S210, dividing the image to be corrected into a plurality of first image blocks.
Optionally, the sizes of the plurality of first image blocks are the same.
Optionally, the number of the plurality of first image blocks is 3 or more, for example 3, 4, 5, 6, 10, 16, 32 or 64, etc.
Referring to fig. 3, fig. 3 is a schematic diagram of a plurality of first image blocks obtained by dividing an image to be corrected. As shown in fig. 3, in some embodiments, the image 1 to be corrected is divided into 9 first image blocks, the 9 first image blocks including block 1, block 2, block 3, block 4, block 5, block 6, block 7, block 8, and block 9.
Step S220, randomly selecting at least one source control point in each first image block to obtain a plurality of source control points.
Step S230, dividing the reference image into a plurality of second image blocks corresponding to the first image blocks.
Optionally, the partitioning rule of the second image block is the same as the partitioning rule of the first image block.
Optionally, the size of the second image block is the same as the size of the first image block.
Optionally, the number of second image blocks is the same as the number of first image blocks.
In this way, each second image block can be conveniently matched to the position of its corresponding first image block, so that the corresponding target point can be conveniently selected in the second image block corresponding to each first image block.
Step S240, selecting a corresponding target point in a second image block corresponding to the first image block for the source control point in each first image block.
Optionally, each pixel point in a second image block has a corresponding pixel point in the corresponding first image block, and the two have the same coordinates; for example, the pixel point with coordinates (1, 1) in a second image block corresponds to the pixel point with coordinates (1, 1) in the first image block.
Optionally, the target point in the reference image corresponding to the source control point is selected based on the feature pattern in the reference image, the feature pattern in the image to be corrected and the source control point, the method being as described above.
And step S300, determining at least one mapping function representing the mapping relation of the pixel points in the image to be corrected to the corresponding pixel points in the reference image according to the source control points and the target points corresponding to the source control points one by one.
In some embodiments, the mapping function is a linear transformation function or a nonlinear transformation function.
In some embodiments, the image blocks are not divided in the image to be corrected and the reference image, and the number of mapping functions is 1.
By the method, one mapping function can be determined based on the pixel point pairs, and accurate depth information of the image to be corrected can be reserved relative to a mode of correcting based on the mapping relation between planes of two images, so that the image to be corrected with depth parallax can be accurately corrected.
In some embodiments, the image to be corrected and the reference image are each divided into a plurality of image blocks, each image block having a corresponding mapping function, the number of mapping functions being the same as the number of first image blocks. By means of the method, each first image block of the image to be corrected can be corrected respectively, namely, the image to be corrected is discretized and then corrected, depth information of the image to be corrected can be better reserved in the correction process, and therefore the image to be corrected with depth parallax can be corrected more accurately.
Referring to fig. 4, fig. 4 is a schematic diagram of a refinement flow of step S300 in fig. 1. As shown in fig. 4, step S300 includes steps S310 to S330.
And step S310, constructing a pending image block mapping function representing the mapping relation between each first image block and the corresponding second image block to obtain a plurality of pending image block mapping functions.
Wherein the pending image block mapping function comprises a pending rotation matrix and a translation matrix.
In some embodiments, the mapping relationship of each first image block and the corresponding second image block is considered a rigid body transformation relationship.
In some embodiments, the formula of the pending image block mapping function is:

p′ = pR + T,

wherein p′ represents the mapping point of the source control point, p represents the source control point, R represents the rotation matrix, and T represents the translation matrix.
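As a minimal sketch of such a rigid (rotation-plus-translation) mapping, the following Python function applies it to a single point. The column-vector convention, the function name, and the angle parameterization of R are illustrative assumptions, not part of the original text:

```python
import numpy as np

def rigid_map(p, theta, t):
    """Apply a pending image-block mapping: rotate the point p by angle
    theta (the rotation matrix R) and then translate by t (the matrix T)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ np.asarray(p, dtype=float) + np.asarray(t, dtype=float)
```

For example, with a zero rotation angle the mapping reduces to a pure translation of the source control point.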
Step S320, constructing a corresponding cost function according to the coordinates of all source control points in each first image block, target points corresponding to all source control points in each first image block and the undetermined image block mapping function corresponding to each first image block.
In some embodiments, step S320 includes the following steps.
(1) And calculating the distance from each source control point to the preset calculation point according to the coordinates of the source control point in each first image block and the coordinates of the preset calculation point.
Optionally, the preset calculation point is any point in the image to be corrected.
Optionally, the preset calculation point is a representative point of the first image block where the source control point is located.
Optionally, each first image block has a representative point in the first image block.
Optionally, the representative point of the first image block is any one of four corner points of the first image block.
Optionally, the representative point of the first image block is any one of an upper left corner, a lower left corner, an upper right corner, and a lower right corner of the first image block.
As shown in fig. 3, alternatively, the representative point of block 1 is point a.
Optionally, the representative point of each first image block is located at the same position within the first image block, e.g. the representative point of each first image block is the upper left corner of the first image block.
As shown in fig. 3, alternatively, when the representative point of block 1 is point A, the representative point of block 2 is point E, the representative point of block 3 is point J, and the representative points of the other blocks follow by analogy.
(2) And determining the weight coefficient of the source control point in each first image block according to the distance from each source control point to the preset calculation point and the smoothing coefficient.
In some embodiments, the formula for calculating the weight coefficient of the source control point is:

w_i = 1 / d_i^(2α),

wherein w_i represents the weight of the i-th source control point, p_i represents the i-th source control point, v represents the preset calculation point, d_i represents the distance from the i-th source control point to the preset calculation point, and α represents the smoothing coefficient.
(3) And constructing a corresponding cost function according to the coordinates of all source control points in each first image block, the target points corresponding to all source control points in each first image block, the weight coefficients of all source control points in each first image block and the mapping function of the undetermined image block corresponding to each first image block.
In some embodiments, an initial cost function is constructed from the point pairs of source control points and target points in the first image block, where the initial cost function is formulated as:

E = Σ_i w_i · ‖p′_i − q_i‖²,

wherein p′_i represents the mapping point of the i-th source control point, and q_i represents the target point corresponding to the i-th source control point. p′_i and q_i are row vectors.
According to the formula of the pending image block mapping function, p′_i = p_i R + T, the cost function becomes E = Σ_i w_i · ‖p_i R + T − q_i‖².
In the above way, the mapping function maps each source control point p_i directly toward its corresponding target point q_i in a one-to-one manner while remaining smooth and consistent, so that the image to be corrected can then be accurately corrected by the mapping function.
To simplify the solving process, note that when the initial cost function takes an extremum, its first derivative is 0. Taking the partial derivative of the initial cost function with respect to T and setting it to 0, that is, ∂E/∂T = Σ_i 2w_i (p_i R + T − q_i) = 0, and solving gives T = q* − p*R, wherein p* = (Σ_i w_i p_i)/(Σ_i w_i) and q* = (Σ_i w_i q_i)/(Σ_i w_i) are the weight centers. Eliminating T in the pending image block mapping function gives p′ = (p − p*)R + q*, and the initial cost function reduces to E = Σ_i w_i · ‖p̂_i R − q̂_i‖², wherein p̂_i = p_i − p* and q̂_i = q_i − q*.
In some embodiments, the initial cost function is vectorized according to point pairs of all source control points and target points in the first image block, resulting in a final cost function.
Optionally, the formula of the final cost function is:

E = ‖W^(1/2) (P̂R − Q̂)‖²_F,

wherein P̂ stacks the centered source control points p̂_i as rows, Q̂ stacks the centered target points q̂_i as rows, W = diag(w_1, …, w_n), and P̂, Q̂ and W are all matrices.
Referring back to fig. 4, step S330 is to determine a rotation matrix and a translation matrix in the pending image block mapping function that minimizes the value of the cost function according to the cost function, the coordinates of all the source control points in each first image block, and the coordinates of the target points corresponding to all the source control points, so as to obtain a final image block mapping function corresponding to each first image block.
In some embodiments, minimizing the value of the final cost function requires solving a least-squares problem under the rigid-body constraint. Since R of a rigid-body transformation is an orthogonal matrix satisfying RᵀR = I, R can be written in terms of two column vectors r_1 and r_2 = r_1⊥, wherein ⊥ is the two-dimensional perpendicular operator such that (x, y)⊥ = (−y, x). Substituting this into the final cost function and expanding, the minimizer is the rotation by the angle θ with cos θ = a/μ and sin θ = b/μ, wherein a = Σ_i w_i (p̂_i · q̂_i), b = Σ_i w_i (p̂_i⊥ · q̂_i), μ = √(a² + b²), and i is the sequence number of the source control point.
The formula for the final image block mapping function of the first image block is:

p′ = (p − p*)R + q*,

wherein R is the rotation matrix solved above and T = q* − p*R.
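Assuming the reconstruction above is the standard weighted rigid fit, a closed-form solver might look like the sketch below (column-vector convention for clarity; the function name and argument layout are illustrative):

```python
import numpy as np

def fit_rigid(p, q, w):
    """Minimize sum_i w_i * ||R p_i + T - q_i||^2 over rotations R and
    translations T in 2-D: translate both point sets to their weight
    centers, then recover the rotation angle from the weighted dot and
    perpendicular-dot products."""
    p, q, w = (np.asarray(a, dtype=float) for a in (p, q, w))
    p_star = (w[:, None] * p).sum(0) / w.sum()  # weight center of sources
    q_star = (w[:, None] * q).sum(0) / w.sum()  # weight center of targets
    ph, qh = p - p_star, q - q_star             # centered points
    a = (w * (ph * qh).sum(1)).sum()                # sum w_i (p̂_i · q̂_i)
    perp = np.stack([-ph[:, 1], ph[:, 0]], axis=1)  # p̂_i⊥ = (-y, x)
    b = (w * (perp * qh).sum(1)).sum()              # sum w_i (p̂_i⊥ · q̂_i)
    mu = np.hypot(a, b)
    c, s = a / mu, b / mu                           # cos θ, sin θ
    R = np.array([[c, -s], [s, c]])
    T = q_star - R @ p_star
    return R, T
```

With equal weights and targets produced by an exact rigid motion, the solver recovers that motion; with noisy or non-rigid correspondences it returns the weighted best rigid approximation.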
Referring back to fig. 1, step S400 is to determine an offset corresponding to each pixel in the image to be corrected according to the mapping function.
In some embodiments, the first image block has a corresponding at least one computation point.
Optionally, the number of at least one calculation point is 1,2, 3, 4 or 8, etc.
In some embodiments, the at least one computation point corresponding to the first image block includes a representative point of the first image block.
In some embodiments, the at least one computing point further comprises a representative point of other first image blocks adjacent to the current first image block.
In some embodiments, the offset includes an offset on a first coordinate axis and an offset on a second coordinate axis.
Optionally, the first coordinate axis is perpendicular to the second coordinate axis.
Optionally, the first coordinate axis is one of an X axis or a Y axis, and the second coordinate axis is the other of the X axis or the Y axis different from the first coordinate axis.
Referring to fig. 5, fig. 5 is a schematic diagram of a refinement process of step S400 in fig. 1. As shown in fig. 5, in some embodiments, step S400 includes steps S410 through S450.
Step S410, determining coordinates of mapping points corresponding to representative points of each first image block according to the final image block mapping function corresponding to each first image block.
As described above, each first image block has a corresponding final image block mapping function.
In some embodiments, the coordinates of the representative points are substituted into the final image block mapping function of the current first image block to obtain the coordinates of the mapping points corresponding to the representative points.
As shown in fig. 3, for example, a point a is a representative point of the block 1, and coordinates of the point a are substituted into a final image block mapping function of the block 1 to obtain coordinates of a mapping point corresponding to the point a. For example, the point E is a representative point of the block 2, and the coordinates of the point E are substituted into the final image block mapping function of the block 2 to obtain the coordinates of the mapping point corresponding to the point E. And substituting the coordinates of the point F into a final image block mapping function of the block 4 to obtain the coordinates of the mapping point corresponding to the point F. And substituting the coordinates of the point G into the final image block mapping function of the block 5 to obtain the coordinates of the mapping point corresponding to the point G. The coordinates of the mapping points corresponding to the other representative points are calculated as described above.
And S420, subtracting the coordinates on the first coordinate axis of the representative point from the coordinates on the first coordinate axis of the mapping point corresponding to the representative point to obtain the offset on the first coordinate axis corresponding to the representative point.
And S430, subtracting the coordinates on the second coordinate axis of the representative point from the coordinates on the second coordinate axis of the mapping point corresponding to the representative point to obtain the offset on the second coordinate axis corresponding to the representative point.
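Steps S410 through S430 can be sketched together as follows; `block_map` stands in for a block's final image block mapping function and is an assumed callable, not an interface from the original text:

```python
import numpy as np

def representative_point_offsets(rep_points, block_map):
    """Map each representative point with its block's final mapping
    function (S410), then take the per-axis difference, mapping point
    minus representative point, as the offset (S420/S430)."""
    rep = np.asarray(rep_points, dtype=float)
    mapped = np.array([block_map(p) for p in rep])
    return mapped - rep  # column 0: first-axis offset, column 1: second-axis
```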
And S440, obtaining the offset corresponding to all the calculation points corresponding to all the first image blocks according to the offset corresponding to the representative point of each first image block.
In some embodiments, the number of computing points corresponding to the first image block is 4, and the computing points corresponding to the first image block include a first computing point, a second computing point, a third computing point, and a fourth computing point.
Optionally, taking the representative point of the current first image block as the first calculation point of the current first image block, and selecting the representative points of a plurality of first image blocks adjacent to the current first image block as the second calculation point, the third calculation point and the fourth calculation point respectively.
In some embodiments, the first, second, third, and fourth computing points of the current first image block satisfy that all pixel points within the current first image block lie within (including the sides of) a rectangle made up of the first, second, third, and fourth computing points.
As shown in fig. 3, alternatively, the representative point of the block 1 is a point a, the representative point of the block 2 is a point E, the representative point of the block 4 is a point F, and the representative point of the block 5 is a point G, where the point a is a first calculation point of the block 1, the selected point E is a second calculation point of the block 1, the point F is a third calculation point of the block 1, and the point G is a fourth calculation point of the block 1.
In some embodiments, the first, second, third, and fourth computing points of the current first image block satisfy that a row or column of pixel points within the current first image block are located on sides of a rectangle made up of the first, second, third, and fourth computing points.
As shown in fig. 3, alternatively, the representative point of the block 7 is point O, where point O is the first calculation point of the block 7, the selected point F is the second calculation point of the block 7, point G is the third calculation point of the block 7, and point P is the fourth calculation point of the block 7.
As shown in fig. 3, alternatively, the representative point of the block 8 is a point P, the representative point of the block 6 is a point K, and the representative point of the block 9 is a point L, where the point P is a first calculation point of the block 8, the selected point G is a second calculation point of the block 8, the point K is a third calculation point of the block 8, and the point L is a fourth calculation point of the block 8.
As shown in fig. 3, alternatively, the representative point of the block 3 is a point J, where the point J is a first calculation point of the block 3, the selected point E is a second calculation point of the block 3, the point G is a third calculation point of the block 3, and the point K is a fourth calculation point of the block 3.
As shown in fig. 3, alternatively, the representative point of the block 6 is a point K, the representative point of the block 8 is a point P, and the representative point of the block 9 is a point L, where the point K is a first calculation point of the block 6, the selected point G is a second calculation point of the block 6, the point P is a third calculation point of the block 6, and the point L is a fourth calculation point of the block 6.
In some embodiments, the first image block corresponds to 1 calculation point, and the representative point of the first image block is taken as the 1 calculation point corresponding to the first image block.
As shown in fig. 3, alternatively, the representative point of the block 9 is point L, and point L is taken as the calculation point of the block 9.
The offset corresponding to the representative point of each first image block has been obtained above, and thus the offsets corresponding to all the calculation points corresponding to all the first image blocks can be obtained.
And S450, calculating the offset corresponding to other pixel points except a plurality of calculation points in the image to be corrected according to the offset corresponding to all calculation points corresponding to each first image block.
In some embodiments, an interpolation method is adopted to calculate the offset corresponding to the pixel points except for the plurality of calculation points in the image to be corrected according to the offset corresponding to all calculation points corresponding to each first image block.
Alternatively, the interpolation method includes a bilinear interpolation method, a linear interpolation method, a nearest neighbor interpolation method, a bicubic interpolation method, and the like.
As described above, in some embodiments, the first image block includes a first computation point, a second computation point, a third computation point, and a fourth computation point.
In some embodiments, the offset for each pixel in the first image block other than the representative point of the first image block is determined according to the coordinates of the first calculated point and the corresponding first offset for the first image block where each pixel is located, the coordinates of the second calculated point and the corresponding second offset, the coordinates of the third calculated point and the corresponding third offset, the coordinates of the fourth calculated point and the corresponding fourth offset, and the coordinates of each pixel.
Optionally, the interpolation method is a bilinear interpolation method.
In some embodiments, when the first calculation point, the second calculation point, the third calculation point and the fourth calculation point of the first image block satisfy the condition that all pixel points in the current first image block are located within the rectangle formed by the four calculation points (including the edges of the rectangle), a bilinear interpolation method is adopted to calculate the offsets corresponding to the pixel points other than the representative point in the current first image block according to the offsets corresponding to all the calculation points of the current first image block.
As shown in fig. 3, the offset corresponding to the pixel points except the calculated points in the block 1, the block 2, the block 4 and the block 5 is optionally calculated by using a bilinear interpolation method.
In some embodiments, when the interpolation method is a bilinear interpolation method, step S450 includes the following steps.
(1) And determining the first weight, the second weight, the third weight and the fourth weight based on the coordinates of each pixel point, the coordinates of the first calculation point corresponding to the first image block where each pixel point is located, the coordinates of the second calculation point, the coordinates of the third calculation point and the coordinates of the fourth calculation point.
Optionally, the coordinates of the current pixel point are expressed as (x, y), the coordinates of the first calculation point corresponding to the first image block where the current pixel point is located are expressed as (x_1, y_1), the coordinates of the second calculation point are expressed as (x_2, y_2), the coordinates of the third calculation point are expressed as (x_3, y_3), and the coordinates of the fourth calculation point are expressed as (x_4, y_4). It will be appreciated that the coordinate representation of each calculation point is merely exemplary and does not limit the position or order of each calculation point.

Since the four calculation points form an axis-aligned rectangle (see step S440), for example with the first calculation point at the upper left, the second at the upper right, the third at the lower left and the fourth at the lower right, optionally, the calculation formula of the first weight is:

w_1 = (x_2 − x)(y_3 − y) / ((x_2 − x_1)(y_3 − y_1)).

Optionally, the calculation formula of the second weight is:

w_2 = (x − x_1)(y_3 − y) / ((x_2 − x_1)(y_3 − y_1)).

Optionally, the calculation formula of the third weight is:

w_3 = (x_2 − x)(y − y_1) / ((x_2 − x_1)(y_3 − y_1)).

Optionally, the calculation formula of the fourth weight is:

w_4 = (x − x_1)(y − y_1) / ((x_2 − x_1)(y_3 − y_1)).
(2) The sum of the first weight multiplied by the first offset, the second weight multiplied by the second offset, the third weight multiplied by the third offset, and the fourth weight multiplied by the fourth offset is taken as the corresponding offset for each pixel point in the first image block except for the representative point of the first image block.
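Assuming the four calculation points are the corners of an axis-aligned rectangle, the bilinear weighting of per-corner offsets can be sketched with a symmetric weight form that is order-independent (an assumption equivalent to the four weight formulas when the points are rectangle corners):

```python
import numpy as np

def bilinear_offset(point, corners, corner_offsets):
    """Bilinearly interpolate the offsets given at four calculation
    points (corners of an axis-aligned rectangle) to `point`: each
    corner's weight shrinks linearly with the pixel's distance from it
    along each axis, and the result is the weight-offset sum."""
    c = np.asarray(corners, dtype=float)
    o = np.asarray(corner_offsets, dtype=float)
    p = np.asarray(point, dtype=float)
    span = c.max(0) - c.min(0)  # rectangle width and height
    w = np.prod(1.0 - np.abs(p - c) / span, axis=1)
    return (w[:, None] * o).sum(0)
```

At a corner the interpolation reproduces that corner's offset exactly, so the field is continuous across block boundaries that share calculation points.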
In some embodiments, the offset corresponding to the pixel point in the image to be corrected except for the plurality of calculation points is obtained based on the offset corresponding to each pixel point in the first image block.
In some embodiments, when the first calculation point, the second calculation point, the third calculation point and the fourth calculation point of the first image block satisfy the condition that a row or a column of pixel points in the current first image block is located on an edge of the rectangle formed by the four calculation points, a bilinear interpolation method is first adopted to calculate the offsets corresponding to the pixels other than the calculation points in that row or column of the current first image block according to the offsets corresponding to all the calculation points of the current first image block, and then the offsets corresponding to the other pixel points in the current first image block are calculated. At this time, step S450 includes the following steps.
(1) And calculating the offset corresponding to the pixel points except the calculated point in one row or one column of pixel points in the first image block by adopting the bilinear interpolation method.
As shown in fig. 3, alternatively, the offset amounts corresponding to the pixels except for the point J in the column of the pixels where the point J and the point R of the block 3 are located are calculated using the bilinear interpolation method as described above.
As shown in fig. 3, alternatively, the offset amounts corresponding to the pixels except for the point K in the column of the pixels where the point K and the point M of the block 6 are located are calculated using the bilinear interpolation method as described above.
As shown in fig. 3, alternatively, the offset amounts corresponding to the pixels except for the point O in the row of the pixels where the point O and the point T of the block 7 are located are calculated using the bilinear interpolation method as described above. As shown in fig. 3, alternatively, the offset amounts corresponding to the pixels except for the point P in the line of pixels where the point P and the point S of the block 8 are located are calculated using the bilinear interpolation method as described above.
(2) And calculating the offset corresponding to other pixel points in the first image block according to the obtained offset of one row or one column of pixel points in the first image block by adopting a linear interpolation method.
Alternatively, the coordinates of the current pixel point are expressed as (i, j), wherein i represents the i-th row and j represents the j-th column.
In some embodiments, for each column of pixels of block 3 or block 6 for which no offset has been calculated, the calculation formula of the offset of the current pixel point is:

Δ(i, j) = 2Δ(i, j−1) − Δ(i, j−2),

wherein Δ(i, j) represents the offset corresponding to the current pixel point, comprising the offset of the current pixel point on the x-axis and the offset on the y-axis, Δ(i, j−1) represents the offset corresponding to the pixel point of the i-th row and (j−1)-th column, and Δ(i, j−2) represents the offset corresponding to the pixel point of the i-th row and (j−2)-th column. At this time, the current pixel point does not belong to the pixel points whose offsets have been obtained, and its column number is greater than those of the pixel points whose offsets have been obtained.
In some embodiments, for each row of pixels of block 7 or block 8 for which no offset has been calculated, the calculation formula of the offset of the current pixel point is:

Δ(i, j) = 2Δ(i−1, j) − Δ(i−2, j),

wherein Δ(i−1, j) represents the offset corresponding to the pixel point of the (i−1)-th row and j-th column, and Δ(i−2, j) represents the offset corresponding to the pixel point of the (i−2)-th row and j-th column. At this time, the current pixel point does not belong to the pixel points whose offsets have been obtained, and its row number is greater than those of the pixel points whose offsets have been obtained.
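If the formulas for the boundary blocks are read as two-point linear extrapolation from the nearest known offsets, which is an assumption about the garbled originals, a sketch is:

```python
def extrapolate_offset(prev1, prev2):
    """Linearly extrapolate an offset from the two nearest known ones
    along a row or column: offset = 2*offset(nearest) - offset(second
    nearest), i.e. the known pair's trend continued one step further."""
    return tuple(2 * a - b for a, b in zip(prev1, prev2))
```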
In some embodiments, when the first image block has only the representative point of the current first image block as the calculation point, the offset corresponding to the pixel point in the image to be corrected except for the plurality of calculation points is obtained based on the offset corresponding to each pixel point in the first image block, and step S450 includes the following steps.
(1) And determining the offset corresponding to the pixel points in other first image blocks adjacent to the current first image block.
As shown in fig. 3, block 9 has only the representative point L of block 9 as a calculation point, and the offsets corresponding to the pixel points in block 6 and block 8 adjacent to block 9 are determined first according to the method described above.
(2) And determining the offset corresponding to the pixel point in the current first image block according to the offsets corresponding to the pixel points in other first image blocks adjacent to the current first image block.
In some embodiments, for the pixel points in block 9 other than point L, the calculation formula of the offset of the current pixel point is:

Δ(i, j) = Δ(i, j−1) + Δ(i−1, j) − Δ(i−1, j−1),

wherein Δ(i, j−1) represents the offset corresponding to the pixel point of the i-th row and (j−1)-th column, Δ(i−1, j) represents the offset corresponding to the pixel point of the (i−1)-th row and j-th column, and Δ(i−1, j−1) represents the offset corresponding to the pixel point of the (i−1)-th row and (j−1)-th column. At this time, the row number of the current pixel point is greater than that of point L, and the column number of the current pixel point is greater than that of point L.
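Reading the corner-block formula as the three-neighbour combination above (an assumption about the garbled original), a sketch is:

```python
def corner_block_offset(left, up, up_left):
    """Offset for a pixel in the corner block: combine the left and
    upper neighbours' offsets and subtract the diagonal neighbour's,
    so row-wise and column-wise trends are both continued."""
    return tuple(l + u - ul for l, u, ul in zip(left, up, up_left))
```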
By adopting the above method, first, each pixel point need not be substituted into the mapping function to calculate its offset, so the amount of calculation can be reduced. Second, because each first image block has a corresponding mapping function, calculating the offsets corresponding to the pixels other than the plurality of calculation points in the image to be corrected according to the offsets corresponding to all the calculation points links the offsets of the pixels across the plurality of first image blocks, so that these offsets transition smoothly, improving the visual effect of the corrected image, i.e., making the corrected image look more natural.
In some embodiments, when the pixel point is not a representative point of the current first image block and is a corner point of the image to be corrected, substituting the coordinates of the pixel point into a final image block mapping function of the first image block where the pixel point is located, to obtain coordinates of a mapping point corresponding to the pixel point.
As shown in fig. 3, optionally, the point H is not a representative point of the block 9, and is a corner point of the image 1 to be corrected, and the coordinates of the point H are substituted into the final image block mapping function of the block 9, so as to obtain the coordinates of the corresponding mapping point of the point H.
In some embodiments, the image to be corrected may not be divided into the plurality of first image blocks, and step S400 includes the following steps.
(1) And determining a mapping point corresponding to each pixel point according to the mapping function.
Optionally, substituting each pixel point into a final mapping function corresponding to the first image block where the pixel point is located, to obtain coordinates of a mapping point corresponding to each pixel point.
(2) Subtracting the coordinates on the first coordinate axis of each pixel point from the coordinates on the first coordinate axis of the mapping point corresponding to each pixel point to obtain the offset on the first coordinate axis corresponding to each pixel point.
(3) Subtracting the coordinates on the second coordinate axis of each pixel point from the coordinates on the second coordinate axis of the mapping point corresponding to each pixel point to obtain the offset on the second coordinate axis corresponding to each pixel point.
And S500, correcting the image to be corrected according to the offset corresponding to each pixel point in the image to be corrected, so as to obtain a corrected image.
In some embodiments, the value of each pixel in the image to be corrected is added to the value of the corresponding offset to obtain the corrected image.
Optionally, adding the coordinate value of each pixel point in the image to be corrected on the first coordinate axis by the offset on the first coordinate axis, and adding the coordinate value on the second coordinate axis by the offset on the second coordinate axis to obtain the value of each pixel point after correction, thereby obtaining the corrected image.
In some embodiments, the image to be corrected is represented by a matrix of pixels, the offset corresponding to a plurality of pixels forms an offset matrix, and the corrected image is obtained by adding the offset matrix to the image to be corrected.
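A rough sketch of applying a per-pixel offset field to an image follows. Nearest-neighbour backward sampling is an assumption here: the original describes adding offsets to pixel coordinates, and production code would typically use interpolated remapping instead of rounding:

```python
import numpy as np

def apply_offsets(image, dx, dy):
    """Warp `image` by per-pixel offset matrices dx, dy: each output
    pixel (r, c) is read from source location (r + dy, c + dx), rounded
    to the nearest pixel and clipped to the image bounds."""
    h, w = image.shape[:2]
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_r = np.clip(np.rint(rows + dy).astype(int), 0, h - 1)
    src_c = np.clip(np.rint(cols + dx).astype(int), 0, w - 1)
    return image[src_r, src_c]
```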
As described above, the image correction method of the present application can be used to correct each channel component image in the second image, and combine the plurality of corrected channel component images to obtain the corrected second image, so that the corrected second image is free from chromatic dispersion, thereby improving the accuracy of subsequent image processing.
Referring to fig. 6A and 6B, fig. 6A is a schematic diagram of a second image before correction according to an embodiment of the application. Fig. 6B is a schematic diagram of a corrected second image provided by an embodiment of the present application. As shown in fig. 6A, the second image before correction exhibits chromatic dispersion (shown as shading in fig. 6A) because the multiple channel component images are not aligned. As shown in fig. 6B, the corrected second image is very clear and has no dispersion.
In summary, the image correction method provided by the embodiment of the application has the following advantages:
1. By determining at least one mapping function representing a mapping relationship of a pixel point in an image to be corrected to a corresponding pixel point in a reference image from a plurality of source control points and a plurality of target points, it is possible to determine at least one mapping function based on a plurality of pixel point pairs, and to preserve accurate depth information of the image to be corrected relative to a manner of correction based on a mapping relationship between planes in which the two images lie, so that the image to be corrected having depth parallax can be accurately corrected.
2. By dividing the image to be corrected and the reference image into a plurality of image blocks, the point pairs of the plurality of source control points and the target points can be uniformly distributed, so that at least one mapping function determined later better represents the mapping relation of the pixel points in the image to be corrected to the corresponding pixel points in the reference image.
3. By dividing the image to be corrected and the reference image into a plurality of image blocks, each image block has a corresponding mapping function, each first image block of the image to be corrected can be corrected respectively, namely, the image to be corrected is discretized and then corrected, depth information of the image to be corrected can be better reserved in the correction process, and therefore the image to be corrected with depth parallax can be corrected more accurately.
4. Calculating the offsets corresponding to the pixel points other than the plurality of calculation points in the image to be corrected according to the offsets corresponding to all the calculation points of each first image block has two advantages. First, not every pixel point has to be substituted into a mapping function to calculate its offset, which reduces the amount of calculation. Second, since each first image block has its own corresponding mapping function, computing the remaining offsets from the offsets of all the calculation points links the offsets of the pixels across the plurality of first image blocks, so that these offsets transition smoothly and the visual effect of the corrected image is improved, i.e., the corrected image looks more natural.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the application. As shown in fig. 7, the electronic device 400 includes one or more processors 410 and a memory 420, one processor 410 being illustrated in fig. 7.
In some embodiments, the processor 410 and the memory 420 may be connected by a bus or by other means; connection by a bus is taken as the example in fig. 7.
In some embodiments, a processor 410 is used to acquire a reference image and an image to be corrected;
selecting a plurality of source control points in an image to be corrected and a plurality of target points in a reference image, wherein the source control points correspond to the target points one by one;
determining at least one mapping function representing a mapping relationship of pixel points in an image to be corrected to corresponding pixel points in a reference image according to a plurality of source control points and a plurality of target points corresponding to the source control points one by one;
determining the offset corresponding to each pixel point in the image to be corrected according to the mapping function;
And correcting the image to be corrected according to the offset corresponding to each pixel point in the image to be corrected, so as to obtain a corrected image.
In some embodiments, the memory 420 is used as a non-volatile computer readable storage medium for storing non-volatile software programs, non-volatile computer executable programs, and modules, such as program instructions/modules for the image correction method in the embodiments of the present application. The processor 410 executes various functional applications and data processing of the electronic device 400 by running non-volatile software programs, instructions and modules stored in the memory 420, i.e., implements the image correction method of the above-described method embodiments.
In some embodiments, the memory 420 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to the use of the electronic device, and the like. In addition, the memory 420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 420 optionally includes memory remotely located relative to the processor 410; such remote memory may be connected to the electronic device 400 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In some implementations, one or more modules are stored in memory 420 that, when executed by one or more processors 410, perform the image correction method in any of the method embodiments described above, e.g., perform method steps S100 through S500 in fig. 1 described above.
In summary, the present application provides an image correction method and an electronic device. At least one mapping function representing a mapping relationship between pixel points in an image to be corrected and corresponding pixel points in a reference image is determined according to a plurality of source control points and a plurality of target points, so that the at least one mapping function can be determined based on a plurality of pixel point pairs. Compared with correction based on a mapping relationship between the planes in which the two images lie, this approach preserves accurate depth information of the image to be corrected, and can therefore accurately correct an image to be corrected that has depth parallax.
It should be noted that the above embodiments are merely intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the above embodiments, those skilled in the art will understand that the technical solutions described in the above embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (7)
1. An image correction method, comprising:
acquiring a reference image and an image to be corrected;
selecting a plurality of source control points in the image to be corrected and a plurality of target points in the reference image, wherein the plurality of source control points are in one-to-one correspondence with the plurality of target points;
The selecting a plurality of source control points in the image to be corrected and a plurality of target points in the reference image comprises:
dividing the image to be corrected into a plurality of first image blocks;
randomly selecting at least one source control point from each first image block to obtain a plurality of source control points;
dividing the reference image into a plurality of second image blocks corresponding to the plurality of first image blocks;
selecting, for the source control point in each first image block, a corresponding target point in the second image block corresponding to that first image block;
determining at least one mapping function representing a mapping relationship of pixel points in the image to be corrected to corresponding pixel points in the reference image according to the plurality of source control points and the plurality of target points corresponding to the plurality of source control points one to one;
The at least one mapping function includes a plurality of image block mapping functions, and the determining at least one mapping function representing a mapping relationship of a pixel point in the image to be corrected to a corresponding pixel point in the reference image according to the plurality of source control points and the plurality of target points corresponding to the plurality of source control points one to one includes:
constructing a pending image block mapping function representing the mapping relationship between each first image block and the corresponding second image block, to obtain a plurality of pending image block mapping functions, wherein each pending image block mapping function comprises a pending rotation matrix and a pending translation matrix;
constructing a corresponding cost function according to the coordinates of all source control points in each first image block, the target points corresponding to all source control points in each first image block, and the pending image block mapping function corresponding to each first image block;
determining, according to the cost function, the coordinates of all source control points in each first image block and the coordinates of the target points corresponding to all the source control points, the rotation matrix and the translation matrix in the pending image block mapping function that minimize the value of the cost function, to obtain a final image block mapping function corresponding to each first image block;
determining the offset corresponding to each pixel point in the image to be corrected according to the mapping function;
the first image block has at least one corresponding calculation point, the at least one calculation point includes a representative point of the first image block, the offset includes an offset on a first coordinate axis and an offset on a second coordinate axis, and the determining, according to the mapping function, an offset corresponding to each pixel point in the image to be corrected includes:
determining coordinates of a mapping point corresponding to the representative point of each first image block according to the final image block mapping function corresponding to each first image block;
subtracting the coordinates on the first coordinate axis of the representative point from the coordinates on the first coordinate axis of the mapping point corresponding to the representative point, to obtain the offset on the first coordinate axis corresponding to the representative point;
subtracting the coordinates on the second coordinate axis of the representative point from the coordinates on the second coordinate axis of the mapping point corresponding to the representative point, to obtain the offset on the second coordinate axis corresponding to the representative point;
obtaining the offsets corresponding to all calculation points corresponding to all the first image blocks according to the offset corresponding to the representative point of each first image block;
calculating the offsets corresponding to the pixel points in the image to be corrected other than the plurality of calculation points according to the offsets corresponding to all calculation points corresponding to each first image block;
and correcting the image to be corrected according to the offset corresponding to each pixel point in the image to be corrected, to obtain a corrected image.
2. The image correction method according to claim 1, wherein the reference image and the image to be corrected include feature patterns, the selecting a plurality of source control points in the image to be corrected and a plurality of target points in the reference image includes:
selecting the plurality of source control points from the image to be corrected;
and selecting the target points in the reference image that correspond to the source control points one by one, based on the characteristic pattern in the reference image, the characteristic pattern in the image to be corrected, and the plurality of source control points.
3. The image correction method according to claim 1, wherein
the constructing a corresponding cost function according to the coordinates of all source control points in each first image block, the target points corresponding to all source control points in each first image block, and the pending image block mapping function corresponding to each first image block comprises:
calculating the distance from each source control point to a preset calculation point according to the coordinates of the source control point and the coordinates of the preset calculation point in each first image block;
determining a weight coefficient of each source control point in each first image block according to the distance from the source control point to the preset calculation point and a smoothing coefficient;
and constructing a corresponding cost function according to the coordinates of all source control points in each first image block, target points corresponding to all source control points in each first image block, weight coefficients of all source control points in each first image block and a pending image block mapping function corresponding to each first image block.
4. The image correction method according to claim 1, wherein the first image block includes a first calculation point, a second calculation point, a third calculation point, and a fourth calculation point, the calculating offset amounts corresponding to other pixel points in the image to be corrected than the plurality of calculation points according to offset amounts corresponding to all calculation points corresponding to each of the first image block includes:
determining the offset corresponding to each pixel point in the first image block other than the representative point of the first image block, according to the coordinates of each pixel point and, for the first image block where that pixel point is located, the coordinates of the first calculation point and the corresponding first offset, the coordinates of the second calculation point and the corresponding second offset, the coordinates of the third calculation point and the corresponding third offset, and the coordinates of the fourth calculation point and the corresponding fourth offset;
and obtaining the offset corresponding to the pixel points except the plurality of calculation points in the image to be corrected based on the offset corresponding to each pixel point in the first image block.
5. The image correction method according to claim 4, wherein the determining the offset amount corresponding to each of the pixel points in the first image block other than the representative point of the first image block based on the coordinates and the corresponding first offset amount of the first calculation point corresponding to the first image block where each of the pixel points is located, the coordinates and the corresponding second offset amount of the second calculation point, the coordinates and the corresponding third offset amount of the third calculation point, the coordinates and the corresponding fourth offset amount of the fourth calculation point, and the coordinates of each of the pixel points includes:
determining a first weight, a second weight, a third weight and a fourth weight based on the coordinates of each pixel point, the coordinates of the first calculation point corresponding to the first image block where each pixel point is located, the coordinates of the second calculation point, the coordinates of the third calculation point and the coordinates of the fourth calculation point;
the sum of the first weight multiplied by the first offset, the second weight multiplied by the second offset, the third weight multiplied by the third offset, and the fourth weight multiplied by the fourth offset is taken as the corresponding offset of each of the pixel points in the first image block except the representative point of the first image block.
6. The image correction method according to claim 1, wherein the offset includes an offset on a first coordinate axis and an offset on a second coordinate axis, and the determining the offset corresponding to each pixel point in the image to be corrected according to the mapping function includes:
determining a mapping point corresponding to each pixel point according to the mapping function;
subtracting the coordinates on the first coordinate axis of each pixel point from the coordinates on the first coordinate axis of the mapping point corresponding to each pixel point, to obtain the offset on the first coordinate axis corresponding to each pixel point;
and subtracting the coordinates on the second coordinate axis of each pixel point from the coordinates on the second coordinate axis of the mapping point corresponding to each pixel point, to obtain the offset on the second coordinate axis corresponding to each pixel point.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image correction method of any one of claims 1 to 6.
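The per-block cost function of claims 1 and 3, a weighted least-squares fit of a pending rotation matrix and translation matrix over control-point pairs, has a well-known closed-form minimizer via singular value decomposition (the weighted Kabsch/Procrustes method). The sketch below shows that standard technique under assumed names; it is a generic illustration, not the patent's specific implementation, and the weights would come from the distance-based weighting of claim 3.

```python
# Weighted Kabsch/Procrustes sketch: find the 2D rotation R and
# translation t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2.
import numpy as np

def fit_rotation_translation(src, dst, w):
    """src, dst: (N, 2) arrays of source control points and target points.
    w: (N,) non-negative weights. Returns (R, t), R a proper rotation."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    w = np.asarray(w, float)
    w = w / w.sum()
    mu_s = w @ src                     # weighted centroid of sources
    mu_d = w @ dst                     # weighted centroid of targets
    # Weighted cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

On noise-free point pairs generated by a known rotation and translation, this recovers the transform exactly up to floating-point error; with weights, points close to the block's preset calculation point dominate the fit.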
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410876796.0A CN118710564B (en) | 2024-07-02 | 2024-07-02 | Image correction method and electronic equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118710564A (en) | 2024-09-27 |
| CN118710564B (en) | 2025-08-08 |
Family
ID=92811044
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410876796.0A Active CN118710564B (en) | 2024-07-02 | 2024-07-02 | Image correction method and electronic equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118710564B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107146208A (en) * | 2017-04-27 | 2017-09-08 | 扬州大学 | Restoration Method of Local Deformation of Incomplete Model Based on Thin Plate Spline Basis Function Optimization |
| CN114693532A (en) * | 2020-12-28 | 2022-07-01 | 富泰华工业(深圳)有限公司 | Image correction method and related equipment |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110610465B (en) * | 2019-08-26 | 2022-05-17 | Oppo广东移动通信有限公司 | Image correction method and device, electronic equipment and computer readable storage medium |
| CN112243518A (en) * | 2019-08-29 | 2021-01-19 | 深圳市大疆创新科技有限公司 | Method, device and computer storage medium for acquiring depth map |
| CN115131224B (en) * | 2021-03-29 | 2025-02-28 | 北京小米移动软件有限公司 | Image correction method and device, electronic device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |