Disclosure of Invention
In view of this, the present invention provides an image synthesis method and an image synthesis device, so as to solve the problem in the prior art that a high-depth-of-field image that is sharp at every pixel point cannot be obtained quickly and effectively.
The invention provides an image synthesis method, which comprises the following steps: A. respectively obtaining the brightness information of each pixel point in each original image; B. extracting high-frequency components of corresponding pixel points from the brightness information of the pixel points; C. determining the value of the pixel point at the corresponding position in the synthesized image according to the high-frequency component of each pixel point in each original image; the step C comprises the following steps: C11, obtaining a high-frequency component intensity value of each pixel point in each original image; C12, comparing the high-frequency component intensity values of the pixel points at the same positions in different original images, and determining, for each position, the original image with the maximum high-frequency component intensity value of the pixel point at that position; and C13, determining the value of the pixel point at the corresponding position in the synthesized image according to the original image with the maximum high-frequency component intensity value at each position.
The step C13 includes: directly taking the value of the pixel point at the corresponding position in the original image with the maximum high-frequency component intensity value of the pixel point at each position as the value of the pixel point at the corresponding position in the synthesized image; or setting a mask plane with the same size as the original image, setting the value of the corresponding position point in the mask plane according to the original image with the maximum high-frequency component intensity value of each position pixel point, performing smooth filtering on the mask plane after the high-frequency component intensity values of all the pixel points in each original image are compared to obtain a smooth plane, setting a threshold according to the values of all the points on the smooth plane, comparing the value of each point in the smooth plane with the threshold, and taking the value of the corresponding position pixel point in the original image as the value of the corresponding position pixel point in the synthesized image according to the comparison result.
The present invention provides another image synthesis method, which comprises the following steps: A. respectively obtaining the brightness information of each pixel point in each original image; B. extracting high-frequency components of corresponding pixel points from the brightness information of the pixel points; C. determining the value of the pixel point at the corresponding position in the synthesized image according to the high-frequency component of each pixel point in each original image; the step C comprises the following steps: calculating the high-frequency component intensity value of each pixel point in each original image, calculating the weighting coefficient of each pixel point according to the high-frequency component intensity values of the pixel points at the same position in each original image, and obtaining the value of each pixel point in the synthesized image according to the weighting coefficients; wherein, among the N original images with the size of m × n, the weighting coefficient k_i(p, q) of the pixel point at the position (p, q) in the ith original image is:
k_i(p, q) = abs_edge_i(p, q) / Σ_{i=1}^{N} abs_edge_i(p, q)
wherein abs_edge_i(p, q) is the high-frequency component intensity value of the pixel point at the position (p, q) in the ith original image, i is a positive integer from 1 to N, p is a positive integer from 1 to m, and q is a positive integer from 1 to n.
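The weighting method above can be sketched as follows. This is a minimal illustration in Python with NumPy, assuming the originals and their intensity maps are already available as arrays; the `eps` guard against an all-zero denominator is an added assumption, since the formula above does not cover that case:

```python
import numpy as np

def weighted_synthesis(originals, abs_edges, eps=1e-12):
    """Weighting method: fuse N original images pixel by pixel.

    originals : list of N arrays, shape (m, n, 3) -- RGB values
    abs_edges : list of N arrays, shape (m, n)    -- high-frequency
                component intensity values abs_edge_i(p, q)
    """
    edges = np.stack(abs_edges)                   # (N, m, n)
    total = edges.sum(axis=0) + eps               # sum over i = 1..N
    k = edges / total                             # weighting coefficients k_i(p, q)
    imgs = np.stack([o.astype(np.float64) for o in originals])  # (N, m, n, 3)
    # Weight each image's RGB values by its per-pixel coefficient and sum.
    return (k[..., None] * imgs).sum(axis=0)
```

For instance, at a pixel where the two intensity values are 3 and 1, the first image receives weight k1 = 3/4 = 0.75 at that pixel.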
The invention provides another image synthesis method, which comprises the following steps: A. respectively obtaining the brightness information of each pixel point in each original image; B. extracting high-frequency components of corresponding pixel points from the brightness information of the pixel points; C. determining the value of the pixel point at the corresponding position in the synthesized image according to the high-frequency component of each pixel point in each original image; the step C comprises the following steps: c21, taking two original images; c22, synthesizing the two images into an intermediate synthetic image; c23, judging whether all the original images are synthesized, if so, taking the obtained intermediate synthetic image as a final synthetic image, otherwise, continuing to execute C24 and continuing the next iteration process; c24, selecting the intermediate synthetic image and an original image which is not subjected to the synthetic operation, and returning to execute C22; wherein the step C22 includes: obtaining a high-frequency component intensity value of each pixel point in the two images; determining an image with the maximum high-frequency component intensity value of the pixel points at the same position in the two images, and determining the value of the pixel point at the corresponding position in the intermediate synthetic image according to the image with the maximum high-frequency component intensity value; or calculating the weighting coefficient of each pixel point according to the high-frequency component intensity values of the pixel points at the same positions in the two images, and obtaining the value of each pixel point of the intermediate synthetic image according to the weighting coefficient.
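The iterative procedure C21-C24 can be sketched as below, using the comparison variant of step C22. It is illustrative only: for simplicity the intermediate image's intensity map is carried forward as the elementwise maximum, whereas the method as described recomputes it from the intermediate image's brightness information at each iteration:

```python
import numpy as np

def fuse_two(img_a, edge_a, img_b, edge_b):
    """C22 (comparison variant): at each pixel, keep the value from the
    image whose high-frequency component intensity value is larger."""
    pick_a = edge_a >= edge_b
    out_img = np.where(pick_a[..., None], img_a, img_b)
    # Simplifying assumption: reuse the max intensity for the next round.
    out_edge = np.maximum(edge_a, edge_b)
    return out_img, out_edge

def iterative_synthesis(originals, abs_edges):
    """C21-C24: fold all originals pairwise into a final composite."""
    img, edge = originals[0], abs_edges[0]
    for nxt_img, nxt_edge in zip(originals[1:], abs_edges[1:]):
        img, edge = fuse_two(img, edge, nxt_img, nxt_edge)
    return img
```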
In any of the above image synthesis methods, when extracting the high-frequency component of the corresponding pixel from the luminance information of the pixel in step B, the method further includes: extracting low-frequency components of pixel points in an original image; the step C is further followed by: and determining the value of the pixel point at the corresponding position in the low-frequency synthetic image according to the values of the pixel points with the low-frequency components at the same positions in all the original images, and processing the low-frequency synthetic image and the synthetic image to obtain the synthetic image containing the high-frequency components and the low-frequency components.
The above determining the value of the pixel point at the corresponding position in the low-frequency synthesized image includes: taking the weighted average of the values of the pixel points with low-frequency components at the corresponding position in all the original images as the value of the pixel point at the corresponding position in the low-frequency synthesized image; or taking the value of the pixel point at the corresponding position in one of the original images that has a low-frequency component at that position as the value of the pixel point at the corresponding position in the low-frequency synthesized image.
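The first alternative (the weighted average) can be sketched as follows, assuming the low-frequency components of the N originals are stacked into one array; equal weights reduce it to a plain average:

```python
import numpy as np

def low_frequency_synthesis(low_freq_values, weights=None):
    """Weighted average of the low-frequency components at each position.

    low_freq_values : array of shape (N, m, n)
    weights         : optional length-N sequence; defaults to equal weights.
    """
    vals = np.asarray(low_freq_values, dtype=np.float64)
    if weights is None:
        weights = np.full(vals.shape[0], 1.0 / vals.shape[0])
    w = np.asarray(weights, dtype=np.float64)
    # Contract the image axis: result has shape (m, n).
    return np.tensordot(w, vals, axes=1)
```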
An image synthesizing apparatus according to the present invention includes: the system comprises a brightness information extraction unit, a high-frequency component extraction unit and a high-frequency synthesis unit, wherein the brightness information extraction unit is used for extracting the brightness information of each pixel point in each original image; the high-frequency component extraction unit is used for obtaining the high-frequency component of each pixel point in each original image according to the brightness information of each pixel point in each original image; the high-frequency synthesis unit is used for determining the value of the pixel point at the corresponding position in the synthesized image according to the high-frequency component of each pixel point in each original image; the high-frequency synthesis unit comprises a high-frequency component intensity value calculation unit, a comparison unit and an assignment unit, wherein the high-frequency component intensity value calculation unit is used for calculating a high-frequency component intensity value of a corresponding pixel point according to the high-frequency component of each pixel point in each original image, the comparison unit is used for comparing the high-frequency component intensity values of the pixel points at the same positions in different original images to determine an original image with the maximum high-frequency component intensity value of each pixel point, and the assignment unit is used for determining the value of the pixel point at the corresponding position in the synthesized image according to the original image with the maximum high-frequency component intensity value of each pixel point.
In the image synthesis device, the assignment unit comprises a mask plane setting unit, a smoothing filtering unit and a threshold value comparison unit, wherein the mask plane setting unit is used for setting a mask plane with the same size as that of an original image, and determining the value of a corresponding position point on the mask plane according to the original image with the maximum high-frequency component intensity value of each position pixel point; the smoothing filtering unit is used for performing smoothing filtering on the mask plane to obtain a smooth plane after the comparison of the high-frequency component intensity values of all the pixel points in each original image is finished; the threshold comparison unit is used for setting a threshold according to the values of all the points on the smooth plane, comparing the value of each point on the smooth plane with the threshold, and taking the value of the pixel point at the corresponding position of the corresponding original image as the value of the pixel point at the corresponding position in the synthesized image according to the comparison result.
Another image synthesizing apparatus according to the present invention includes: the system comprises a brightness information extraction unit, a high-frequency component extraction unit and a high-frequency synthesis unit, wherein the brightness information extraction unit is used for extracting the brightness information of each pixel point in each original image; the high-frequency component extraction unit is used for obtaining the high-frequency component of each pixel point in each original image according to the brightness information of each pixel point in each original image; the high-frequency synthesis unit is used for determining the value of the pixel point at the corresponding position in the synthesized image according to the high-frequency component of each pixel point in each original image; the high-frequency synthesis unit comprises a high-frequency component intensity value calculation unit, a weighting coefficient calculation unit and a weighting unit, wherein the high-frequency component intensity value calculation unit is used for calculating a high-frequency component intensity value of a corresponding pixel point according to the high-frequency component of each pixel point in each original image, the weighting coefficient calculation unit is used for calculating a weighting coefficient of each pixel point in each original image according to the high-frequency component intensity value of each pixel point in each original image, and the weighting unit is used for weighting and synthesizing the values of the pixel points of all the original images at the same position into the value of the pixel point at the corresponding position in the synthesized image according to the weighting coefficient of each pixel point in each original image.
The present invention provides still another image synthesizing apparatus, comprising: a brightness information extraction unit, a high-frequency component extraction unit and a high-frequency synthesis unit, wherein the brightness information extraction unit is used for extracting the brightness information of each pixel point in each original image; the high-frequency component extraction unit is used for obtaining the high-frequency component of each pixel point in each original image according to the brightness information of each pixel point in each original image; the high-frequency synthesis unit is used for determining the value of the pixel point at the corresponding position in the synthesized image according to the high-frequency component of each pixel point in each original image; the high-frequency synthesis unit includes a first high-frequency component intensity value calculation unit, a first storage unit, a third storage unit, a two-image synthesis unit, a second storage unit, a second brightness information extraction unit, a second high-frequency component extraction unit, a second high-frequency component intensity value calculation unit, and a counting unit.
The first high-frequency component intensity value calculation unit is used for calculating a high-frequency component intensity value of a corresponding pixel point according to the high-frequency component of each pixel point in each original image; the first storage unit is used for storing the high-frequency component intensity value of each pixel point in each original image, sequentially numbering the high-frequency component intensity values by taking the original image as a unit, and providing the high-frequency component intensity value of each pixel point in the original image with the number corresponding to the current count value when receiving the current count value provided by the counting unit; the third storage unit is used for storing each original image, numbering each original image in sequence, and providing the original image with the number corresponding to the current count value when receiving the current count value provided by the counting unit; the two-image synthesis unit is used for determining the value of the pixel point at the corresponding position in the intermediate synthesis image according to the high-frequency component intensity value of each pixel point in the two images and providing a counting notification message; the second storage unit is used for storing the intermediate synthetic image, providing the intermediate synthetic image according to the current counting value provided by the counting unit and outputting the intermediate synthetic image as a final synthetic image according to the output notification message provided by the counting unit; the second brightness information extraction unit is used for extracting the brightness information of each pixel point in the intermediate synthetic image; the second high-frequency component extraction unit is used for obtaining the high-frequency component of each pixel point according to the brightness information of each pixel point in the intermediate synthetic image; 
the second high-frequency component intensity value calculation unit is used for calculating the high-frequency component intensity value of each pixel point according to the high-frequency component of each pixel point in the intermediate synthetic image; and the counting unit is used for adding 1 to the counting value according to the counting notification message to serve as a current counting value, judging whether all the original images are synthesized or not according to the current counting value, if so, sending an output notification message to the second storage unit, and otherwise, respectively providing the current counting value to the first storage unit and the third storage unit.
Still another image synthesizing apparatus provided by the present invention includes: a first storage unit, a second storage unit, a brightness information extraction unit, a high-frequency component extraction unit, a high-frequency component intensity value calculation unit, a two-image synthesis unit and a counting unit, wherein the first storage unit is used for storing each original image, sequentially numbering the original images and providing the original image corresponding to the current count value according to the current count value provided by the counting unit; the second storage unit is used for storing the intermediate synthetic image, directly providing the intermediate synthetic image to the brightness information extraction unit, and outputting the intermediate synthetic image as a final synthetic image according to the output notification message provided by the counting unit; the brightness information extraction unit is used for extracting the brightness information of each pixel point in the image; the high-frequency component extraction unit is used for obtaining the high-frequency component of each pixel point according to the brightness information of each pixel point in the image; the high-frequency component intensity value calculation unit is used for calculating the high-frequency component intensity value of each pixel point according to the high-frequency component of each pixel point in the image; the two-image synthesis unit is used for determining the value of the corresponding pixel point in an intermediate synthetic image according to the high-frequency component intensity value of each pixel point in the two images, providing the intermediate synthetic image and providing a counting notification message; and the counting unit is used for adding 1 to the count value according to the counting notification message to obtain a current count value, judging whether all the original images have been synthesized according to the current count value, if so, sending an output notification message to the second storage unit, and otherwise, respectively providing the current count value to the first storage unit and the second storage unit.
Any of the image synthesizing apparatuses described above further includes: the low-frequency component extraction unit is connected with the brightness information extraction unit or the high-frequency component extraction unit and is used for acquiring the low-frequency components of the pixel points in each original image and selecting the pixel points with the low-frequency components at the same position in different original images; the low-frequency synthesis unit is connected with the low-frequency component extraction unit and used for determining the value of the pixel point at the corresponding position in the low-frequency synthesized image according to the values of the pixel points at the same position in all the original images with the low-frequency component; the synthesis unit is connected with the low-frequency synthesis unit and the high-frequency synthesis unit or the two-image synthesis unit and is used for processing the synthesis image and the low-frequency synthesis image to obtain a synthesis image containing high-frequency components and low-frequency components.
The invention uses an image synthesis scheme based on pixel points to expand the depth of field. In the scheme provided by the invention, the brightness information of each pixel point in each original image is respectively obtained, the high-frequency component of the corresponding pixel point is extracted from the brightness information of the pixel point, and the value of the pixel point at the corresponding position in the synthesized image is determined according to the high-frequency component of each pixel point in each original image. In the invention, calculating the high-frequency component intensity value of each pixel point in the original image involves only simple operations, and determining the value of each pixel point in the synthesized image requires only simple comparisons or elementary arithmetic, so compared with the prior art, the scheme provided by the invention greatly reduces the amount of computation in the image synthesis process, and the required synthesized image can be obtained conveniently and quickly. Because the scheme provided by the invention operates on each pixel point, the details of each pixel point of the synthesized image obtained by the scheme are clearer than in the prior art.
In addition, in the method of the invention, the values of the corresponding pixel points in the synthesized image are not generated from the high-frequency and low-frequency components of each pixel point in the original images; instead, the values of the pixel points at the corresponding positions in the original images are directly used as the values of the pixel points at the corresponding positions in the synthesized image, or the values of the pixel points at the corresponding positions in the original images are weighted and averaged to obtain the values of the pixel points at the corresponding positions in the synthesized image. Therefore, compared with the prior art, the method of the invention does not cause the details of the synthesized image to deviate from the details of the original images as a result of performing operations such as wavelet transformation on the original images.
In summary, the scheme provided by the invention can quickly and effectively obtain the synthetic image with high depth of field.
Detailed Description
The image synthesis method provided by the invention comprises the steps of firstly obtaining the brightness information of each pixel point of each original image in a plurality of original images; then extracting high-frequency components from the brightness information of the pixel points; and determining the value of the pixel point at the corresponding position in the synthesized image according to the high-frequency component of each pixel point.
The method is suitable for all monochrome images and color images, and for monochrome images, the value of a pixel point refers to the gray value of the pixel point in the monochrome images; for a color image, the value of a pixel point refers to the Red, Green, Blue (RGB) value of the pixel point in the color image. The implementation steps of the method provided by the present invention are specifically described below by taking a color image with a size of m × n as an example, where m and n are both positive integers.
Please refer to fig. 1, which is a flowchart illustrating image synthesis according to the present invention, and the specific implementation steps are as follows:
S100: respectively obtaining the brightness information Y(p, q) of each pixel point in each original image from the plurality of original images, wherein p is any positive integer from 1 to m, and q is any positive integer from 1 to n.
Here, since the RGB color space is ideal for hardware implementation, the original image is represented in the RGB color space. When the human eye observes a colored object, hue, saturation and brightness can be used to describe it. The human eye is more sensitive to luminance than to chrominance, and luminance is a key parameter describing color perception, so the luminance information of an image is also an important parameter characterizing the image. However, since brightness is only a subjective description in the RGB color space, the RGB color space is often converted into the luminance-chrominance (YUV) space; because luminance and chrominance are separated in the YUV space, the luminance information of the pixel points of the original image can be extracted in the YUV space.
Wherein, Y (p, q) is the weighted sum of the RGB components of the pixels at the corresponding positions of the original image, that is:
Y(p,q)=a1*R(p,q)+a2*G(p,q)+a3*B(p,q)
wherein, a1, a2, and a3 are weighting coefficients of Red (Red, R), Green (Green, G), and Blue (Blue, B) components of pixels at corresponding positions in the original image, respectively.
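The conversion can be sketched as below. The text leaves a1, a2 and a3 unspecified; the ITU-R BT.601 luma coefficients used here are a common choice, assumed for illustration, not a requirement of the method:

```python
import numpy as np

# Assumed weighting coefficients: the ITU-R BT.601 luma weights.
A1, A2, A3 = 0.299, 0.587, 0.114

def luminance(rgb):
    """Y(p, q) = a1*R(p, q) + a2*G(p, q) + a3*B(p, q) for an (m, n, 3) image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return A1 * r + A2 * g + A3 * b
```

Since the three coefficients sum to 1.0, a pure white pixel (255, 255, 255) maps to Y = 255.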
S101: since the high-frequency component of an image represents its detail, that is, information such as edges and other sharp changes in the image, extracting the high-frequency component of each pixel point from its brightness information Y(p, q) is equivalent to extracting the detail information of the image at the corresponding pixel point.
The extraction of the high-frequency components of the pixel points can be realized through a high-pass filter, and the high-frequency components of all the pixel points are obtained after the original image passes through the high-pass filter with a set threshold value.
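As one possible sketch, a 3 × 3 Laplacian kernel can serve as the high-pass filter; the method does not prescribe a particular kernel, so this choice (and the replicated-edge border handling) is an assumption:

```python
import numpy as np

# A 3x3 Laplacian kernel -- one common high-pass filter (assumed here).
LAPLACIAN = np.array([[ 0.0, -1.0,  0.0],
                      [-1.0,  4.0, -1.0],
                      [ 0.0, -1.0,  0.0]])

def high_frequency(y):
    """Correlate the luminance plane Y with a high-pass kernel
    (the kernel is symmetric, so this equals convolution).
    Border pixels are handled by edge replication."""
    padded = np.pad(y, 1, mode="edge")
    m, n = y.shape
    out = np.zeros((m, n), dtype=np.float64)
    for dp in range(3):
        for dq in range(3):
            out += LAPLACIAN[dp, dq] * padded[dp:dp + m, dq:dq + n]
    return out
```

A constant region produces a zero response, so only edges and other sharp changes survive.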
S102: and determining the RGB value of the pixel point at the corresponding position in the synthesized image according to the high-frequency component of each pixel point in each original image.
The above S102 can be implemented in three ways: a comparison method, a weighting method, and an iterative method based on the comparison method or the weighting method. The three implementation manners of S102 are described in detail below.
Please refer to fig. 2, which is a flowchart illustrating image synthesis using a comparison method according to the present invention, including:
S200: respectively taking the absolute value of the high-frequency component of each pixel point of each original image to obtain the high-frequency component intensity value of each pixel point, wherein the magnitude of the high-frequency component intensity value represents the degree of sharpness of the pixel point at the corresponding position in the original image. Here, the high-frequency component intensity value of the pixel point at the position (p, q) in the original image is represented by abs_edge(p, q).
S201: and comparing the high-frequency component intensity values of the pixel points at the same positions in different original images, and determining the original image with the maximum high-frequency component intensity value of the pixel point at the position aiming at each position.
S202: and determining the RGB value of the pixel point at the corresponding position in the synthesized image according to the original image with the maximum high-frequency component intensity value of each pixel point at the position.
The above S202 has two implementation manners, and the first implementation manner is: and directly taking the RGB value of the pixel point at the corresponding position of the original image with the maximum high-frequency component intensity value of the pixel point at each position as the RGB value of the pixel point at the corresponding position of the synthetic image.
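The first implementation of S202 amounts to a per-pixel argmax over the intensity maps; a minimal sketch:

```python
import numpy as np

def compare_synthesis(originals, abs_edges):
    """First implementation of S202: at each position, copy the RGB value
    from the original image whose high-frequency component intensity
    value is largest there."""
    edges = np.stack(abs_edges)        # (N, m, n)
    best = edges.argmax(axis=0)        # index of the sharpest image per pixel
    imgs = np.stack(originals)         # (N, m, n, 3)
    m, n = best.shape
    pp, qq = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
    return imgs[best, pp, qq]          # gather the winning RGB values
```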
Fig. 3 is a flowchart of a second implementation manner of S202, including:
S301: setting a mask plane with the same size as the original image, where mask denotes the mask plane; for each specific position, the value of the point at the corresponding position on the mask is set according to the original image with the maximum high-frequency component intensity value of the pixel point at that position.
For example, suppose there are N original images, and abs_edge(p, q) represents the high-frequency component intensity value of the pixel point at the position (p, q) in any one of the original images. If abs_edge(p, q) is maximum in the ith original image, the value mask(p, q) of the point at the position (p, q) in the mask may be set to i, so that different original images correspond to different values of mask(p, q).
S302: after the abs_edge(p, q) values of the pixel points at all positions in the original images have been compared, performing smoothing filtering on the mask to obtain a smooth plane, which is represented by mask_smooth; a threshold is set according to the values of all mask_smooth(p, q), and N-1 thresholds are required for N original images.
Wherein the smoothing filtering may be implemented by a normalized smoothing filter. The reason for performing the smoothing filtering is to eliminate noise left over from the original image when the high-frequency components of the pixel points are extracted: noise is also extracted as a high-frequency component in S101, which would lead to inaccurate results in the subsequent S201. In the process of smoothing filtering, the value of a point on the mask is determined by jointly using the mask(p, q) values of the current point and the points around it, which improves the reliability of the final result. For example, for a 5 × 5 smoothing window containing 24 points with value 1 and only a center point with value 2, the value 2 of the center point is considered unreliable, and the output of the center point after smoothing filtering is close to 1, i.e., the output value at the center of the smoothing window is 26/25 = 1.04.
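The 5 × 5 window example above can be checked with a normalized box filter, whose output at the window centre is simply the mean of all values in the window:

```python
import numpy as np

def box_smooth_value(window):
    """Output of a normalized smoothing (box) filter at the window centre:
    the mean of all values in the window."""
    return window.sum() / window.size

# The example from the text: a 5x5 window of 1s whose centre is 2.
mask_window = np.ones((5, 5))
mask_window[2, 2] = 2.0
# 24 ones plus the centre value 2 give 26, so the smoothed centre is 26/25.
```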
S303: comparing the value of each mask_smooth(p, q) in the smooth plane with the set threshold, and, according to the comparison result, taking the RGB value of the pixel point at the corresponding position (p, q) in the corresponding original image as the RGB value of the pixel point at the corresponding position in the synthesized image.
The following describes the second implementation of the comparison method in detail by taking an example of the composition of two original images and three original images.
Please refer to fig. 4, which is a flowchart of synthesizing two images by the comparison method in the present invention. The method specifically comprises the following steps:
S400: judging whether an uncompared abs_edge1(p, q) is larger than abs_edge2(p, q); if so, executing S401, otherwise executing S402.
Wherein abs_edge1(p, q) and abs_edge2(p, q) represent the high-frequency component intensity values of the pixel points at the same position in the original image 1 and the original image 2, respectively.
S401: setting the value of the corresponding position point of the mask, for example, setting mask(p, q) = 1.
S402: setting the value of the corresponding position point of the mask, for example, setting mask(p, q) = 2.
S403: judging whether the high-frequency component intensity values of the pixel points at the same positions in the original images have all been compared, namely whether all abs_edge1(p, q) and abs_edge2(p, q) have been compared; if so, continuing to execute S404, otherwise executing S420.
S404: performing smoothing filtering on the mask to obtain mask_smooth; setting the threshold Th according to the values of all mask_smooth(p, q), and then executing S405. For example, when mask(p, q) is set to 1 or 2, since the values of all points on mask_smooth after smoothing filtering are close to either 1 or 2, the threshold Th may be set to 1.5.
S420: selecting abs_edge1(p, q) and abs_edge2(p, q) corresponding to a pixel point at a same position (p, q) that has not been compared, and returning to execute S400.
S405: judging whether a mask_smooth(p, q) not yet compared with the threshold is smaller than Th; if so, executing S406, otherwise executing S407.
S406:C(p,q)=C1(p, q), namely, the RGB values of the pixel points at the corresponding positions in the original image 1 are used as the RGB values of the pixel points at the corresponding positions in the synthesized image, and then S408 is executed.
S407:C(p,q)=C2(p, q), namely, the RGB values of the pixel points at the corresponding positions in the original image 2 are used as the RGB values of the pixel points at the corresponding positions in the synthesized image.
Here C(p, q), C1(p, q) and C2(p, q) denote the RGB values of the pixel at the same position in the synthesized image, original image 1 and original image 2, respectively.
S408: judge whether the values of all points on mask_smooth have been compared with the threshold; if so, the operation ends, otherwise execute S409.
S409: select a mask_smooth(p, q) that has not yet been compared, and return to S405.
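The two-image comparison flow of S400-S409 can be sketched as follows. This is a minimal illustration with NumPy; the 3×3 normalized box filter and all function names are choices of this sketch, not requirements of the method:

```python
import numpy as np

def box_smooth(mask, k=3):
    """Normalized k x k box filter -- one possible smoothing filter for S404."""
    pad = k // 2
    padded = np.pad(mask.astype(float), pad, mode="edge")
    out = np.zeros(mask.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

def synthesize_two(img1, img2, abs_edge1, abs_edge2, th=1.5):
    """Comparison method for two images.
    img1, img2:           H x W x 3 RGB arrays
    abs_edge1, abs_edge2: H x W high-frequency intensity values
    """
    # S400-S402: mask(p,q) = 1 where image 1 has the stronger
    # high-frequency component, 2 otherwise
    mask = np.where(abs_edge1 > abs_edge2, 1.0, 2.0)
    # S404: smooth the mask; its values now lie between 1 and 2, so Th = 1.5
    mask_smooth = box_smooth(mask)
    # S405-S407: pick each pixel from image 1 or image 2
    return np.where((mask_smooth < th)[..., None], img1, img2)
```

For a left-focused and a right-focused pair, the smoothed mask makes the transition between the two source regions follow the local majority rather than flip pixel by pixel.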
Referring to FIG. 5, which is a flowchart of synthesizing three images by the comparison method in the present invention, the method comprises the following steps:
S500: compare the high-frequency component intensity values of the pixels at the same position in the three original images, that is, compare abs_edge1(p, q), abs_edge2(p, q) and abs_edge3(p, q) for the same (p, q), and find the original image whose pixel at that position has the maximum high-frequency component intensity value.
S501: set the values of the corresponding position points on the mask according to the comparison result; for example, if the comparison in S500 determines that the maximum high-frequency component intensity value at a certain position belongs to the ith original image, where i is 1, 2 or 3, then mask(p, q) may be set to i.
S502: judge whether the high-frequency component intensity values of the pixels at all the same positions in the three original images have been compared; if so, continue with S503, otherwise execute S520.
S503: smooth-filter the mask to obtain mask_smooth, set two thresholds Th1 and Th2 according to the values of all mask_smooth(p, q), and then execute S504. For example, when mask(p, q) = i, the values of all points on mask_smooth after smoothing filtering lie between 1 and 3, so Th1 may be set to 1.5 and Th2 to 2.5.
S520: select a position (p, q) whose abs_edge1(p, q), abs_edge2(p, q) and abs_edge3(p, q) have not yet been compared, and return to S500.
S504: for a mask_smooth(p, q) that has not yet been compared with the thresholds, judge whether mask_smooth(p, q) is smaller than Th1; if so, execute S505, otherwise execute S506.
S505: C(p, q) = C1(p, q), that is, the RGB value of the pixel at the corresponding position in original image 1 is taken as the RGB value of the pixel at the corresponding position in the synthesized image; then execute S509.
S506: judge whether mask_smooth(p, q) is larger than Th2; if so, execute S507, otherwise execute S508.
S507: C(p, q) = C3(p, q), that is, the RGB value of the pixel at the corresponding position in original image 3 is taken as the RGB value of the pixel at the corresponding position in the synthesized image; then execute S509.
S508: C(p, q) = C2(p, q), that is, the RGB value of the pixel at the corresponding position in original image 2 is taken as the RGB value of the pixel at the corresponding position in the synthesized image.
S509: judge whether the values of all points on mask_smooth have been compared with the thresholds; if so, the operation ends, otherwise execute S510.
S510: select a mask_smooth(p, q) that has not yet been compared, and return to S504.
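The three-image flow of S500-S510 can be sketched in the same style; again the 3×3 box smoothing and the function names are illustrative assumptions of this sketch:

```python
import numpy as np

def synthesize_three(imgs, abs_edges, th1=1.5, th2=2.5):
    """Comparison method for three images (sketch of S500-S510).
    imgs:      list of three H x W x 3 RGB arrays
    abs_edges: list of three H x W high-frequency intensity arrays
    """
    # S500-S501: mask(p,q) = index i (1, 2 or 3) of the image whose
    # pixel at (p,q) has the largest high-frequency intensity value
    mask = np.argmax(np.stack(abs_edges), axis=0) + 1.0
    # S503: 3x3 normalized box smoothing (one possible filter choice)
    h, w = mask.shape
    padded = np.pad(mask, 1, mode="edge")
    mask_smooth = sum(padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)) / 9.0
    # S504-S508: Th1 and Th2 split the smoothed mask into three bands,
    # selecting image 1, image 2 or image 3 per pixel
    choice = np.where(mask_smooth < th1, 0,
                      np.where(mask_smooth > th2, 2, 1))
    rows, cols = np.indices((h, w))
    return np.stack(imgs)[choice, rows, cols]
```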
Fig. 6 is a flowchart of image synthesis using the weighting method, comprising:
S600: calculate the high-frequency component intensity value of each pixel in each original image; for example, for N original images of size m × n, the high-frequency component intensity value of the pixel at position (p, q) in the ith original image is abs_edge_i(p, q).
S601: calculate the weighting coefficient of each pixel according to the high-frequency component intensity values of the pixels at the same position in all the original images.
For N original images, the weighting coefficient k_i(p, q) of the pixel at position (p, q) in the ith original image is:
    k_i(p, q) = abs_edge_i(p, q) / Σ_{j=1}^{N} abs_edge_j(p, q)
where i is any positive integer from 1 to N, and k_i(p, q) satisfies:
    Σ_{i=1}^{N} k_i(p, q) = 1,
This normalization is performed so that the overall brightness of the final synthesized image remains consistent with that of the original images.
S602: synthesize the original images into the synthesized image according to the weighting coefficients, where the pixel at each position of the synthesized image satisfies:
    C(p, q) = Σ_{i=1}^{N} k_i(p, q) * C_i(p, q)
where C(p, q) denotes the RGB value of the pixel at position (p, q) in the synthesized image and C_i(p, q) denotes the RGB value of the pixel at position (p, q) in the ith original image; the RGB value of each pixel of the synthesized image is therefore the weighted average of the RGB values of the pixels at the corresponding positions of all the original images. Since the weighting coefficient k_i(p, q) represents the proportion of the high-frequency component of the pixel at the corresponding position in each original image, that is, the amount of detail information carried by that pixel, the contribution of each original image's pixel to the corresponding pixel of the synthesized image can be determined accordingly, and a clearer synthesized image with a large depth of field is obtained.
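The two formulas above can be sketched directly with NumPy; the small eps term and the function name are assumptions of this sketch, guarding the denominator at positions where no image has any high-frequency energy (the method itself routes such positions through the separate low-frequency handling described later in this document):

```python
import numpy as np

def weighted_synthesis(imgs, abs_edges, eps=1e-12):
    """Weighting method (S600-S602).
    imgs:      N arrays of shape H x W x 3 (RGB values C_i(p, q))
    abs_edges: N arrays of shape H x W (abs_edge_i(p, q))
    """
    e = np.stack(abs_edges).astype(float)          # (N, H, W)
    k = e / (e.sum(axis=0, keepdims=True) + eps)   # k_i(p, q), sums to 1
    c = np.stack(imgs).astype(float)               # (N, H, W, 3)
    return (k[..., None] * c).sum(axis=0)          # C(p, q)
```

A pixel where one image carries three times the high-frequency energy of the other receives weights 0.75 and 0.25, so its detail dominates the result.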
Fig. 7 is a flowchart of image synthesis using the iterative method according to the present invention; the specific implementation steps are as follows.
S700: two original images are arbitrarily selected.
S701: synthesizing the two images into an intermediate synthesized image according to the comparison method or the weighting method for the two images;
S702: judge whether all the original images have been synthesized; if so, take the obtained intermediate synthesized image as the final synthesized image, otherwise continue with S703 for the next iteration.
S703: take the intermediate synthesized image obtained in the previous iteration, arbitrarily select an original image that has not yet taken part in the synthesis, and return to S701.
Please refer to fig. 8, which is a schematic diagram of image synthesis using the iterative method according to the present invention. For N original images, the iteration proceeds as follows: original image 1 and original image 2 are synthesized according to the comparison method or the weighting method for two images to obtain the intermediate synthesized image New_1; New_1 and original image 3 are synthesized in the same way to obtain New_2; New_2 and original image 4 are synthesized to obtain New_3; and so on, until the intermediate synthesized image New_N-2 and original image N are synthesized to obtain New_N-1, which is the final synthesized image.
The iteration rule is formulated as:
C_New_1 = C_1 + C_2
C_New_(i-1) = C_New_(i-2) + C_i
where C_i denotes the ith original image, C_New_(i-1) denotes the intermediate synthesized image obtained after i-1 synthesis operations, and likewise C_New_(i-2) denotes the one obtained after i-2 synthesis operations, with i any integer from 3 to N; the "+" here stands for the two-image synthesis operation (comparison or weighting method), not pixel-wise addition. The final synthesized image is obtained after N-1 synthesis operations on the N original images. The beneficial effects of this method are: the iteration logic is simple, the memory occupied during calculation is very small, and the whole iteration process needs only a single processing scheme for synthesizing two images, so the method has strong scalability.
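The iteration rule amounts to folding any two-image synthesis routine over the image list; a minimal sketch (the function name is illustrative, and combine_two stands for either the comparison or the weighting method):

```python
def iterative_synthesis(images, combine_two):
    """Iterative scheme of Fig. 7/8: fold a two-image synthesis
    routine over N images, performing N-1 synthesis operations."""
    intermediate = combine_two(images[0], images[1])   # New_1
    for img in images[2:]:                             # New_2 ... New_(N-1)
        intermediate = combine_two(intermediate, img)
    return intermediate
```

Because each step consumes one intermediate image and one original, memory stays constant regardless of N, which is the scalability benefit the text notes.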
It should be noted that, to express the method of the present invention more clearly, all the above formulas are written in terms of a single pixel value. In an actual computation, for example for original images of size m × n, the above scheme only requires simple magnitude comparisons or elementary arithmetic on several m × n matrices in a computer, without complicated calculation. Compared with the prior art, the method is therefore simpler and easier to implement, and greatly reduces the amount of computation in the image synthesis process.
The above synthesis is based on the high-frequency components of the pixels. If the pixels at a given position have no high-frequency component in any original image, that is, that position carries only low-frequency components in every original image, this small amount of low-frequency content can be extracted while extracting the high-frequency components. For example, when extracting the high-frequency components, the frequency components below the cut-off of the high-pass filter, which would otherwise be filtered out, may additionally be stored; for a position at which every original image has only components to be filtered out, the component stored for that position is the low-frequency component.
The value of the pixel at the corresponding position in the low-frequency synthesized image is then determined from the values of the pixels at that position in all the original images that have only low-frequency components there. Two specific processing methods can be adopted: (1) directly take the weighted average of the RGB values of the pixels at that position in the original images, the weighting coefficients being 1/N for N original images, so that the RGB value of the pixel at the corresponding position in the final synthesized image is the average of the RGB values of the pixels at the corresponding positions of all the original images; or (2) take the RGB value of the pixel at that position in any one original image as the RGB value of the pixel at the corresponding position in the synthesized image. Finally, the low-frequency synthesized image and the synthesized image obtained in the previous step are processed together to obtain a synthesized image containing both high-frequency and low-frequency components.
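The two low-frequency options can be sketched as follows; the low_only_mask argument marking the "low-frequency-only" positions and the function name are assumptions of this sketch:

```python
import numpy as np

def merge_low_frequency(composite, imgs, low_only_mask, mode="average"):
    """Fill positions where every original image has only low-frequency
    content (low_only_mask True) into the synthesized image.
    mode "average": weighted average with coefficients 1/N (option 1)
    mode "any":     copy from one arbitrary original image (option 2)
    """
    stacked = np.stack([im.astype(float) for im in imgs])
    if mode == "average":
        fill = stacked.mean(axis=0)   # 1/N weight per original image
    else:
        fill = stacked[0]             # any single original image
    return np.where(low_only_mask[..., None], fill, composite)
```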
Fig. 9 is a schematic structural diagram of a first image synthesis apparatus according to the present invention, as shown in fig. 9, the apparatus includes: a luminance information extraction unit, a high-frequency component extraction unit, and a high-frequency synthesis unit. The function of each unit is specifically described below.
The brightness information extraction unit is used for extracting the brightness information of each pixel point in each original image and providing the brightness information of each pixel point in each original image to the high-frequency component extraction unit.
The high-frequency component extraction unit is used for obtaining the high-frequency component of each pixel point in each original image according to the brightness information of each pixel point in each original image and providing the high-frequency component to the high-frequency synthesis unit.
The high-frequency synthesis unit is used for determining the value of the pixel point at the corresponding position in the synthetic image according to the high-frequency component of each pixel point in each original image, namely synthesizing the value of the corresponding pixel point in the original image into the value of the corresponding pixel point in the synthetic image.
Depending on how the value of each pixel in the synthesized image is determined from the high-frequency components of the pixels in the original images, the high-frequency synthesis unit may comprise different sub-units; its possible structures are described below in turn with reference to the drawings.
Fig. 10 is a schematic structural diagram of the high-frequency synthesis unit in an image synthesis apparatus of the present invention, which includes: a high-frequency component intensity value calculation unit, a comparison unit and an assignment unit.
The high-frequency component intensity value calculation unit is used for calculating the high-frequency component intensity value of the corresponding pixel point according to the high-frequency component of each pixel point in each original image and providing the high-frequency component intensity value of each pixel point in each original image to the comparison unit.
The comparison unit is used for comparing the high-frequency component intensity values of the pixel points at the same positions in different original images, determining the original image with the maximum high-frequency component intensity value of each pixel point at the position and informing the assignment unit.
And the assignment unit is used for determining the value of the pixel point at the corresponding position in the synthesized image according to the original image with the maximum high-frequency component intensity value of each pixel point at the position. The assignment unit can directly use the value of the pixel point at the corresponding position of the original image with the maximum high-frequency component intensity value of the pixel point at each position as the value of the pixel point at the corresponding position of the synthesized image.
In addition, the assignment unit may perform further processing according to the original image having the maximum high-frequency component intensity value at each position, and finally obtain the value of the pixel at the corresponding position of the synthesized image. As shown in fig. 11, the assignment unit includes: a mask plane setting unit, a smoothing filter unit and a threshold comparison unit. The specific functions of each unit are as follows:
the mask plane setting unit is used for setting a mask plane with the same size as the original image, determining the value of a point at a corresponding position on the mask plane according to the original image with the maximum high-frequency component intensity value of each position pixel point, and providing the mask plane with the assigned values of all the positions to the smoothing filtering unit.
For example, if there are N original images and, for a given position, the comparison unit determines that the maximum high-frequency component intensity value at that position belongs to the ith original image, the mask plane setting unit may set the value of the corresponding point of the mask plane to i. For the points at all positions, the correspondence between the determined original image and the value set on the mask plane must be one-to-one, that is, different determined original images yield different values at the corresponding points of the mask plane.
The smoothing filtering unit is used for setting a threshold according to the values of the points at all positions of the mask plane, performing smoothing filtering on the received mask plane, and providing the set threshold and the smoothing plane obtained after filtering to the threshold comparison unit. The smoothing filter unit may be implemented by a normalized smoothing filter. The smoothing filtering can eliminate the noise left in the original image when the high-frequency components of the pixel points are extracted.
And the threshold comparison unit is used for comparing the value of each position point on the smooth plane with the threshold value, and taking the value of the corresponding position pixel point of the corresponding original image as the value of the corresponding position pixel point in the synthesized image according to the comparison result.
Fig. 12 is a schematic structural diagram of the high-frequency synthesis unit in the first image synthesis apparatus according to the present invention, which includes: a high-frequency component intensity value calculation unit, a weighting coefficient calculation unit and a weighting unit. Each unit functions as follows:
the high-frequency component intensity value calculation unit is used for calculating the high-frequency component intensity value of the corresponding pixel point according to the high-frequency component of each pixel point in each original image and providing the high-frequency component intensity value of each pixel point in each original image to the weighting coefficient calculation unit.
The weighting coefficient calculation unit is used for calculating the weighting coefficient of each pixel point in each original image according to the high-frequency component intensity value of each pixel point in each original image and providing the weighting coefficient to the weighting unit.
And the weighting unit is used for weighting and synthesizing the values of the pixel points at the same positions of all the original images into the value of the pixel point at the corresponding position in the synthesized image according to the weighting coefficient of each pixel point in each original image.
Fig. 13 is a schematic structural diagram of the high-frequency synthesis unit in the first image synthesis apparatus of the present invention, which includes a first high-frequency component intensity value calculation unit, a first storage unit, a third storage unit, a two-image synthesis unit, a second storage unit, a second luminance information extraction unit, a second high-frequency component extraction unit, a second high-frequency component intensity value calculation unit, and a counting unit. The units function as follows:
the first high-frequency component intensity value calculation unit is used for calculating the high-frequency component intensity value of the corresponding pixel point according to the high-frequency component of each pixel point in each original image and providing the high-frequency component intensity value of each pixel point in each original image to the first storage unit.
The first storage unit is used for storing the high-frequency component intensity value of each pixel point in each original image and numbering the high-frequency component intensity values in sequence by taking the original image as a unit; in addition, the first storage unit is further configured to select a high-frequency component intensity value of each pixel point in the original image corresponding to the current count value provided by the counting unit, and provide the high-frequency component intensity value of each pixel point in the original image to the two-image synthesizing unit. For example, if the current count value provided by the counting unit is i, the first storage unit provides the stored high-frequency component intensity value of each pixel point in the ith original image to the two-image synthesizing unit.
The third storage unit is used for storing each original image and sequentially numbering each original image, and is also used for selecting the original image with the number corresponding to the current counting value according to the current counting value provided by the counting unit and providing the original image for the two-image synthesis unit.
The numbering order of the original images in the third storage unit is consistent with the numbering order, in the first storage unit, of the high-frequency component intensity values of the pixels of the same original images. For example, if the current count value provided by the counting unit is i, the first storage unit provides the stored high-frequency component intensity values of the pixels of the ith original image to the two-image synthesis unit, and the third storage unit provides the stored ith original image to the two-image synthesis unit.
The two-image synthesis unit is used for determining the values of the pixels at the corresponding positions in the intermediate synthesized image according to the received high-frequency component intensity values of the pixels in two images; the specific processing may follow the comparison method or the weighting method for two images. The obtained intermediate synthesized image is provided to the second storage unit, and after obtaining it the two-image synthesis unit also sends a count notification message to the counting unit. The first synthesis operation of the two-image synthesis unit is performed on two original images; each subsequent synthesis operation is performed on one original image and one intermediate synthesized image.
In an initial situation, the first storage unit may directly provide the high-frequency component intensity values of the pixel points in the first original image and the second original image to the two-image synthesizing unit, and the two-image synthesizing unit synthesizes the two corresponding images into the first intermediate synthesized image according to a comparison method or a weighting method for the two images.
The second storage unit is used for storing the intermediate synthetic image and respectively providing the stored intermediate synthetic image to the second brightness information extraction unit and the two-image synthesis unit according to the current counting value provided by the counting unit; further, the second storage unit is also configured to output the intermediate composite image as a final composite image according to the output notification message provided by the counting unit.
The second brightness information extraction unit is used for extracting the brightness information of each pixel point in the intermediate synthetic image and providing the brightness information of each pixel point in the intermediate synthetic image to the second high-frequency component extraction unit.
The second high-frequency component extraction unit is used for obtaining the high-frequency component of each pixel point according to the brightness information of each pixel point in the intermediate synthetic image and providing the high-frequency component to the second high-frequency component intensity value calculation unit.
The second high-frequency component intensity value calculation unit is used for calculating the high-frequency component intensity value of each pixel point according to the high-frequency component of each pixel point in the intermediate synthetic image and providing the high-frequency component intensity value of each pixel point in the intermediate synthetic image to the two-image synthesis unit.
And the counting unit is used for adding 1 to the counting value as a current counting value when receiving the counting notification message, judging whether all the original images are synthesized according to the current counting value, if so, sending an output notification message to the second storage unit, and otherwise, respectively providing the current counting value to the first storage unit and the third storage unit.
The counting unit may be implemented by a counter, for example, the initial value of the counter may be set to 2, the counter may be incremented by 1 when receiving the count notification message, for example, for N original images, the current count value of the counter is i before the count notification message, when receiving the count notification message, the count value of the counter becomes i +1, and i +1 is taken as the current count value. And judging whether all the original images are synthesized or not according to the current count value, namely judging whether the current count value is greater than N or not, if so, indicating that all the original images participate in image synthesis, and sending an output notification message to a second storage unit, otherwise, respectively providing the current count value to a first storage unit and a third storage unit by a counting unit.
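The counter-driven control flow described above can be sketched as follows; the function name is illustrative, and combine_two stands for the two-image synthesis unit (its inputs are simplified here to the images themselves):

```python
def run_counting_unit(images, combine_two):
    """Counter starts at 2 after the first two originals are combined;
    each count notification increments it; when the count exceeds N,
    the intermediate image is output as the final synthesized image."""
    n = len(images)
    count = 2
    intermediate = combine_two(images[0], images[1])  # initial case
    while True:
        count += 1                        # count notification message
        if count > n:                     # all originals synthesized
            return intermediate           # output notification
        intermediate = combine_two(intermediate, images[count - 1])
```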
As shown in fig. 14, which is a schematic structural diagram of a second image synthesis apparatus according to the present invention, the apparatus includes: a first storage unit, a second storage unit, a luminance information extraction unit, a high-frequency component extraction unit, a high-frequency component intensity value calculation unit, a two-image synthesis unit and a counting unit. Each unit functions as follows:
the first storage unit is used for storing each original image and numbering each original image in sequence; in addition, the first storage unit is used for selecting an original image corresponding to the current count value according to the current count value provided by the counting unit and providing the original image to the two-image synthesizing unit. For example, if the current count value provided by the counting unit is i, the first storage unit provides the stored ith original image to the two-image synthesizing unit.
The second storage unit is used for storing the intermediate synthetic image currently provided by the two-image synthesis unit and directly providing the intermediate synthetic image to the brightness information extraction unit when receiving the current counting value provided by the counting unit; further, the second storage unit is also configured to output the intermediate composite image as a final composite image according to the output notification message provided by the counting unit.
The brightness information extraction unit is used for extracting the brightness information of each pixel point in the image and providing the brightness information of each pixel point in the image to the high-frequency component extraction unit.
The high-frequency component extraction unit is used for obtaining the high-frequency component of each pixel point according to the brightness information of each pixel point in the image and providing the high-frequency component to the high-frequency component intensity value calculation unit.
The high-frequency component intensity value calculation unit is used for calculating the high-frequency component intensity value of each pixel point according to the high-frequency component of each pixel point in the image and providing the high-frequency component intensity value to the two-image synthesis unit.
The images processed in the luminance information extraction unit are the original image and the intermediate composite image, respectively, so that the images processed in the high-frequency component extraction unit and the high-frequency component intensity value calculation unit are also the original image and the intermediate composite image, respectively.
The two-image synthesis unit is used for determining the values of the pixels at the corresponding positions in the intermediate synthesized image according to the received high-frequency component intensity values of the pixels in two images; the specific processing may follow the comparison method or the weighting method for two images. The intermediate synthesized image is provided to the second storage unit, and after obtaining it the two-image synthesis unit also sends a count notification message to the counting unit.
In the initial case, the first storage unit may directly provide the first original image and the second original image to the two-image synthesis unit, and the two-image synthesis unit synthesizes the two images into the first intermediate synthesized image according to the comparison method or the weighting method for two images.
And the counting unit is used for adding 1 to the counting value as a current counting value when receiving the counting notification message, judging whether all the original images are synthesized according to the current counting value, if so, sending an output notification message to the second storage unit, and otherwise, respectively providing the current counting value to the first storage unit and the second storage unit.
The counting unit may be implemented by a counter, for example, the initial value of the counter may be set to 2, the counter may be incremented by 1 when receiving the count notification message, for example, for N original images, the current count value of the counter is i before the count notification message, when receiving the count notification message, the count value of the counter becomes i +1, and i +1 is taken as the current count value. And judging whether all the original images are synthesized or not according to the current count value, namely judging whether the current count value is greater than N or not, if so, indicating that all the original images participate in image synthesis, and at the moment, sending an output notification message to a second storage unit by a counting unit, otherwise, respectively providing the current count value to a first storage unit and a second storage unit by the counting unit.
Any of the image synthesis apparatuses described above may further include a low-frequency component extraction unit, a low-frequency synthesis unit and a synthesis unit. Referring to fig. 15, the high-frequency image synthesis apparatus there is the apparatus shown in fig. 9 or fig. 14, and each additional unit functions as follows:
The low-frequency component extraction unit is used for obtaining the low-frequency components of the pixels in each original image, selecting the pixels at positions where every original image has only low-frequency components, and providing their values to the low-frequency synthesis unit. The low-frequency components of the pixels in each original image can be obtained in roughly two ways. In one way, the luminance information extraction unit further provides the luminance information of each pixel in each original image to the low-frequency component extraction unit, which then obtains the frequency components of the pixels below a set threshold according to that luminance information; the threshold may be set as needed, for example to the cut-off of the high-pass filter used when extracting the high-frequency components. In the other way, the high-frequency component extraction unit further provides the filtered-out frequency components to the low-frequency component extraction unit, and these are the low-frequency components of the pixels in each original image that the unit finally obtains.
The low-frequency synthesis unit is used for determining the value of the pixel point at the corresponding position in the low-frequency synthesis image according to the values of the pixel points at the position of all the original images with the same position and low-frequency components, and providing the low-frequency synthesis image for the synthesis unit.
The low-frequency synthesis unit can directly take the value of the pixel point at the corresponding position in any original image as the value of the pixel point at the corresponding position in the low-frequency synthesis image; or taking the weighted average value of the values of the pixel points at the corresponding positions of all the original images as the value of the pixel point at the corresponding position of the low-frequency synthetic image.
The synthesis unit is used for processing the received synthesized image and low-frequency synthesized image to obtain a synthesized image containing both high-frequency and low-frequency components. The synthesized image received by the synthesis unit may come from the high-frequency synthesis unit in fig. 9 or the two-image synthesis unit in fig. 14; depending on the specific structure of the high-frequency synthesis unit, it may specifically come from the assignment unit in fig. 10, the threshold comparison unit in fig. 11, the weighting unit in fig. 12, or the two-image synthesis unit in fig. 13.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.