CN111768355A - Method for enhancing image of refrigeration type infrared sensor - Google Patents
Publication number: CN111768355A
Application number: CN202010506218.XA
Authority: CN (China)
Legal status: Granted
Classifications
- G06T5/00 — Image enhancement or restoration
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T2207/10048 — Infrared image
- G06T2207/20021 — Dividing image into blocks, subimages or windows
- G06T2207/20048 — Transform domain processing
- G06T2207/20056 — Discrete and fast Fourier transform [DFT, FFT]
Abstract
The invention discloses a method for enhancing an image of a refrigeration type infrared sensor. The histogram of the original image is first corrected through the distribution characteristics of pixel regions, then corrected again with an adaptive plateau histogram; a cumulative probability distribution function is computed from the corrected histogram and used to equalize the original image. The equalized image is then filtered into layers by a layered filtering template generated from a Gaussian spatial confidence template and a low-pass spatial-domain template, separating a base-layer image from a detail-layer image; the detail layer is further amplified to obtain the final enhanced image. The invention preserves the overall characteristics of the image while magnifying local features and retaining detail; it is suitable for a variety of scenes and shows excellent detail rendering in each; and it has high efficiency and low complexity, is easy to implement in software and hardware, and is particularly suitable for resource-constrained embedded devices.
Description
Technical Field
The invention belongs to the technical field of infrared image detail enhancement, and particularly relates to a method for enhancing an image of a refrigeration type infrared sensor.
Background
Images collected by refrigeration type infrared detectors typically have low contrast, weak detail, and strong noise, and infrared image enhancement technology was developed to address these characteristics; it has achieved extensive success over the years, with increasingly rich implementations. Histogram-based enhancement algorithms include HE (histogram equalization), PHE (plateau histogram equalization), APHE (adaptive plateau histogram equalization), and CLAHE (contrast-limited adaptive histogram equalization). These algorithms allocate contrast according to the probability characteristics of the histogram; subsequent improvements limit the contrast to prevent over-enhancement and detail loss, and CLAHE applies local contrast enhancement to counter detail loss, but although detail increases, the overall brightness characteristics of the image are lost. The histogram equalization algorithm adopted by the invention (adaptive plateau histogram equalization with spatial block-distribution correction) takes the characteristics of spatial distribution into account, suppressing uniform strong backgrounds while preserving local details. Because spatial distribution is considered, the method adapts well to many scenes, whether the background is monotonous (e.g., sky) or texture-rich (e.g., ground): the overall contrast characteristics are preserved, detail features are amplified as much as possible, and the overall visual impression to the human eye is clearly improved.
Owing to the basic nature of histogram enhancement, even when contrast is enhanced the ability to amplify edge and contour detail remains limited, so image detail enhancement algorithms based on the frequency domain and the spatial domain have been proposed in succession. Spatial-domain methods include improved unsharp masking, Retinex, HF (homomorphic filtering), and mathematical morphology; frequency-domain methods include the Fourier transform, wavelet transform, and contourlet transform. The biggest obstacle to applying these methods in small devices is their computational and space complexity: wavelet and contourlet transforms have great advantages for contour detail enhancement, but their algorithmic and space complexity severely limits their use in such equipment. Spatial-domain methods based on filtering and layering have modest complexity but limited performance.
Disclosure of Invention
The invention aims to provide a method for enhancing an image of a refrigeration type infrared sensor, which solves the problems of excessive contrast enhancement and detail loss found in prior-art infrared image enhancement techniques.
The technical scheme adopted by the invention is that a method for enhancing the image of a refrigeration type infrared sensor is implemented according to the following steps:
step 1, counting a histogram of the whole image;
step 2, correcting a block distribution histogram;
step 3, correcting the histogram of the self-adaptive platform;
step 4, image equalization;
and 5, enhancing image details.
The present invention is also characterized in that,
the step 1 is as follows:
step 1.1, defining x as the abscissa variable of the image, with range 1~N, where N is the maximum abscissa of the image; defining y as the ordinate variable, with range 1~M, where M is the maximum ordinate; defining image as the original image, with image(x, y) denoting the pixel value at abscissa x and ordinate y, with value range 0 to 2^PBW − 1, where PBW is the pixel bit width; defining pix as a pixel value variable ranging from 0 to 2^PBW − 1; and defining F(pix) as the number of occurrences of pixel value pix in the original image;
step 1.2, let x = 1 and y = 1, and initialize F(pix) = 0 for all pix in the range 0 to 2^PBW − 1;
step 1.3, executing F(image(x, y)) = F(image(x, y)) + 1;
step 1.4, judging whether y equals M: if y = M, execute y = 1 and skip to step 1.5; otherwise execute y = y + 1 and skip to step 1.3;
step 1.5, judging whether x equals N: if x = N, skip to step 1.6; otherwise execute x = x + 1 and skip to step 1.3;
step 1.6, the statistics of F(pix) are complete; proceed to the next operation.
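The counting loop of steps 1.1 to 1.6 can be sketched as follows (a minimal Python sketch; the function name and the 8-bit PBW are illustrative assumptions, not from the patent):

```python
import numpy as np

PBW = 8  # pixel bit width (assumed 8-bit for the example)

def count_histogram(image):
    """F(pix): number of occurrences of each pixel value in the image."""
    F = np.zeros(2 ** PBW, dtype=np.int64)
    for pix in image.ravel():      # the patent's double loop over x and y
        F[pix] += 1
    return F

img = np.array([[0, 1], [1, 255]], dtype=np.uint8)
F = count_histogram(img)
# F[1] == 2, F[0] == 1, F[255] == 1
```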
The step 2 is as follows:
step 2.1, partitioning the image:
dividing the original image into equal blocks, where Wblock denotes the number of divisions along the ordinate and Hblock the number of divisions along the abscissa; the original image is divided into Hblock × Wblock blocks, each with ordinate size m = M/Wblock and abscissa size n = N/Hblock;
step 2.2, counting block distribution:
step 2.2.1, defining B(pix) as a block statistic giving the number of blocks in which pixel value pix occurs, with pix ranging from 0 to 2^PBW − 1; all B(pix) are initialized to 0, and B(pix) ranges from 0 to Hblock × Wblock. Defining hblock as the abscissa block variable, ranging from 1 to Hblock, and wblock as the ordinate block variable, ranging from 1 to Wblock; defining xblock as the abscissa variable of pixels within one block, ranging from 1 to n, and yblock as the ordinate variable of pixels within one block, ranging from 1 to m; skip to step 2.2.2;
step 2.2.2, first let pix = 0, hblock = 1, wblock = 1, xblock = 1, yblock = 1; skip to step 2.2.3;
step 2.2.3, referring to the F(pix) counted in step 1: if F(pix) > 0, skip to step 2.2.5; if F(pix) = 0, skip to step 2.2.4;
step 2.2.4, judging whether pix equals 2^PBW − 1: if so, skip to step 2.2.10; otherwise execute pix = pix + 1 and skip to step 2.2.3;
step 2.2.5, take the block in row hblock and column wblock and extract the pixel value image((hblock − 1) × n + xblock, (wblock − 1) × m + yblock) at abscissa xblock and ordinate yblock within that block; if this pixel value equals pix, execute B(pix) = B(pix) + 1, xblock = 1, yblock = 1 and skip to step 2.2.8; otherwise skip to step 2.2.6;
step 2.2.6, judging whether yblock equals m: if so, execute yblock = 1 and skip to step 2.2.7; otherwise execute yblock = yblock + 1 and skip to step 2.2.5;
step 2.2.7, judging whether xblock equals n: if so, execute xblock = 1 and skip to step 2.2.8; otherwise execute xblock = xblock + 1 and skip to step 2.2.5;
step 2.2.8, judging whether wblock equals Wblock: if so, execute wblock = 1 and skip to step 2.2.9; otherwise execute wblock = wblock + 1 and skip to step 2.2.5;
step 2.2.9, judging whether hblock equals Hblock: if so, execute hblock = 1 and skip to step 2.2.4; otherwise execute hblock = hblock + 1 and skip to step 2.2.5;
step 2.2.10, block counting is complete; B(pix) is the counted number of blocks for each pixel value;
step 2.3, correcting the histogram in step 1:
generating a new histogram by the formula N(pix) = F(pix)/B(pix), where N(pix) is the modified histogram, traversing all effective pixel values of pix, i.e., pix ∈ (0 to 2^PBW − 1) with B(pix) > 0.
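Steps 2.1 to 2.3 amount to dividing each histogram count by the number of blocks in which that value occurs. A sketch, with illustrative names and a vectorized block scan in place of the patent's per-pixel loops:

```python
import numpy as np

def block_corrected_histogram(image, hblocks, wblocks, pbw=8):
    """N(pix) = F(pix) / B(pix): histogram corrected by block distribution."""
    M, N = image.shape                       # M rows (ordinate), N columns (abscissa)
    F = np.bincount(image.ravel(), minlength=2 ** pbw).astype(np.float64)
    B = np.zeros(2 ** pbw, dtype=np.float64)
    m, n = M // wblocks, N // hblocks        # block sizes along each axis
    for i in range(wblocks):
        for j in range(hblocks):
            block = image[i * m:(i + 1) * m, j * n:(j + 1) * n]
            B[np.unique(block)] += 1         # each block counts once per value it contains
    Nhist = np.zeros_like(F)
    valid = B > 0                            # only values that appear somewhere
    Nhist[valid] = F[valid] / B[valid]
    return Nhist

img = np.zeros((4, 4), dtype=np.uint8)
img[0, 0] = 5
Nh = block_corrected_histogram(img, hblocks=2, wblocks=2)
# value 0 occurs 15 times across 4 blocks -> Nh[0] == 3.75; value 5 once in 1 block -> Nh[5] == 1.0
```

Dividing by B(pix) penalizes values that are concentrated in a few blocks relative to values spread across the whole image, which is how the spatial distribution enters the histogram.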
The step 3 is as follows:
step 3.1, calculating the number of all the appeared pixel values of the original image:
step 3.1.1, defining nz_pix as the number of nonzero histogram bins, with initial value nz_pix = 0; let pix = 0;
step 3.1.2, judging whether F(pix) is greater than 0: if so, execute nz_pix = nz_pix + 1; skip to step 3.1.3;
step 3.1.3, judging whether pix equals 2^PBW − 1: if so, skip to step 3.1.4; otherwise execute pix = pix + 1 and skip to step 3.1.2;
step 3.1.4, nz_pix is now the number of distinct pixel values appearing in the original image;
step 3.2, calculating the plateau value L from the corrected histogram N(pix) and the nz_pix obtained in step 3.1; L is the plateau value that will be used for the plateau correction of the histogram below;
step 3.3, correcting the N(pix) generated in step 2.3 by plateau-histogram topping with the plateau value L obtained in step 3.2, that is: P(pix) = L when N(pix) > L, and P(pix) = N(pix) when N(pix) <= L; P(pix) is the histogram after plateau correction;
step 3.4, calculating SP, the sum of P(pix) over all pixel values;
step 3.5, calculating the cumulative probability function CDF(pix) from the P(pix) generated in step 3.3 and the SP generated in step 3.4, by the formula CDF(pix) = (Σ_{i=0}^{pix} P(i)) / SP.
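Steps 3.1 to 3.5 can be sketched as follows. The patent's plateau formula is given only as an image in the source, so the mean of N(pix) over occupied gray levels is used here as an assumed plateau value:

```python
import numpy as np

def plateau_cdf(Nhist):
    """Plateau-clip the corrected histogram and return its cumulative distribution."""
    occupied = Nhist > 0
    nz_pix = occupied.sum()            # number of occupied gray levels (step 3.1)
    L = Nhist.sum() / nz_pix           # assumed plateau value (the source formula is an image)
    P = np.minimum(Nhist, L)           # plateau "topping" of N(pix) (step 3.3)
    SP = P.sum()                       # step 3.4
    return np.cumsum(P) / SP           # CDF(pix) in [0, 1] (step 3.5)

Nhist = np.array([4.0, 0.0, 4.0, 8.0])
cdf = plateau_cdf(Nhist)
# L = 16/3, P = [4, 0, 4, 16/3], so cdf = [0.3, 0.3, 0.6, 1.0]
```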
The step 4 is as follows:
step 4.1, defining image_eq as the equalized image and image_eq(x, y) as its pixel value at abscissa x and ordinate y; pix(x, y) denotes the pixel value of the original image at (x, y), and ROUND() rounds its argument to the nearest integer; let x = 1 and y = 1;
step 4.2, calculating image_eq(x, y) = ROUND(CDF(pix(x, y)) × (2^PBW − 1));
step 4.3, judging whether y equals M: if so, execute y = 1 and skip to step 4.4; otherwise execute y = y + 1 and skip to step 4.2;
step 4.4, judging whether x equals N: if so, skip to step 4.5; otherwise execute x = x + 1 and skip to step 4.2;
step 4.5, the calculation is complete; the newly generated image_eq is the equalized image.
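The per-pixel loop of step 4 is a single table lookup in vectorized form (function name is illustrative):

```python
import numpy as np

def equalize(image, cdf, pbw=8):
    """image_eq(x, y) = ROUND(CDF(image(x, y)) * (2^PBW - 1))."""
    return np.round(cdf[image] * (2 ** pbw - 1)).astype(np.uint16)

cdf = np.linspace(0.0, 1.0, 256)       # identity CDF just for the example
img = np.array([[0, 128], [255, 64]], dtype=np.uint8)
out = equalize(img, cdf)
# with this identity CDF every value maps to itself
```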
The step 5 is as follows:
step 5.1, firstly generating a Gaussian space confidence coefficient template:
the window size of the Gaussian spatial confidence template is chosen as W × W, where W is the maximum of the template's horizontal and vertical coordinates, and the template is generated by the formula Gs(x, y) = exp(−((x − x′)² + (y − y′)²)/(2σ_S²)), where (x′, y′) is the pixel coordinate of the template's center point, taken as x′ = y′ = (W + 1)/2; (x, y) are coordinate positions around (x′, y′), each ranging from 1 to W; σ_S is the distance standard deviation; Gs(x, y) is the template coefficient at abscissa x and ordinate y;
step 5.2, generating a low-pass spatial domain template:
first, the window size of the low-pass frequency-domain template is set to W × W, where W is the maximum of its horizontal and vertical coordinates, and FLpass(x, y) is the coefficient of the low-pass frequency-domain template at abscissa x and ordinate y. The template takes the values FLpass(x, y) = 1 when (x, y) ∈ {(1, 1), (1, 2), (2, 1), (W − 1, 1), (1, W − 1)}, and FLpass(x, y) = 0 otherwise, so that only the direct-current and lowest-frequency components are retained. An inverse Fourier transform of this template then yields the low-pass spatial-domain template KLpass(x, y), whose coefficient at abscissa x and ordinate y is KLpass(x, y);
step 5.3, performing point multiplication of the Gaussian spatial confidence template Gs(x, y) generated in step 5.1 with the low-pass spatial-domain template KLpass(x, y) generated in step 5.2 to generate the layered filtering template GL, that is, GL(x, y) = Gs(x, y) × KLpass(x, y), where GL(x, y) is the template coefficient at abscissa x and ordinate y;
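Steps 5.1 to 5.3 can be sketched as follows; centering the inverse-FFT kernel with fftshift, and the example values of W and σ_S, are assumptions not stated in the source:

```python
import numpy as np

def layered_filter_template(W=5, sigma_s=2.0):
    """GL = Gs (Gaussian confidence) * KLpass (inverse FFT of a low-pass mask)."""
    c = (W - 1) / 2.0                          # 0-based center of the window
    ix = np.arange(W)
    gs = np.exp(-((ix[:, None] - c) ** 2 + (ix[None, :] - c) ** 2)
                / (2.0 * sigma_s ** 2))        # Gaussian spatial confidence template
    F = np.zeros((W, W))
    for u, v in [(0, 0), (0, 1), (1, 0), (0, W - 1), (W - 1, 0)]:
        F[u, v] = 1.0                          # keep DC + lowest frequencies
    k = np.real(np.fft.ifft2(F))               # low-pass spatial-domain template
    kc = np.fft.fftshift(k)                    # center it on the window (an assumption)
    return gs * kc                             # point-wise product GL

GL = layered_filter_template(W=5, sigma_s=2.0)
# center coefficient: Gs = 1 there, and the centered kernel peak is 5/25 = 0.2
```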
step 5.4, filtering and layering the equalized image_eq generated in step 4; the filtering method is as follows:
image_eq(x, y) is the pixel value at abscissa x and ordinate y in the equalized image generated in step 4; image_base is defined as the filtered base-layer image, obtained by filtering image_eq with the template GL, with image_base(x, y) its pixel value at abscissa x and ordinate y; image_detail is the detail image, with image_detail(x, y) its pixel value at abscissa x and ordinate y, calculated by the formula image_detail(x, y) = image_eq(x, y) − image_base(x, y);
step 5.5, amplifying the detail image image_detail to generate the final enhanced image image_enhance(x, y):
image_enhance(x, y) = image_base(x, y) + K × image_detail(x, y);
where K is the detail gain coefficient, a user-adjustable variable controlling the detail enhancement amplitude.
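Steps 5.4 and 5.5 — filtering with GL, subtracting to obtain the detail layer, and applying the gain K — can be sketched as below; the edge padding and the correlation-style window filtering are assumptions, since the source only states "filtering" with GL:

```python
import numpy as np

def enhance(image_eq, GL, K=2.0):
    """Base/detail layering with template GL, then detail amplification by gain K."""
    M, N = image_eq.shape
    W = GL.shape[0]
    pad = W // 2
    padded = np.pad(image_eq.astype(np.float64), pad, mode='edge')
    base = np.zeros((M, N))
    for x in range(M):
        for y in range(N):                     # sliding-window filtering with GL
            base[x, y] = np.sum(padded[x:x + W, y:y + W] * GL)
    detail = image_eq - base                   # image_detail = image_eq - image_base
    return base + K * detail                   # image_enhance

img = np.arange(16, dtype=np.float64).reshape(4, 4)
GL = np.zeros((3, 3)); GL[1, 1] = 1.0          # identity kernel: detail layer is zero
out = enhance(img, GL, K=5.0)
# with an identity kernel the output equals the input regardless of K
```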
The method has the advantages that the original image is enhanced by histogram modification and histogram equalization; the enhanced image is filtered and layered by a new layered filtering template generated from a Gaussian spatial confidence template and a Fourier low-pass spatial-domain template, separating the background and details of the image; and the layered details are further amplified to obtain the final enhanced image. The method combines favorable characteristics of both spatial-domain and frequency-domain processing, improving detail and contour enhancement without increasing computational or space complexity. It adapts well to different scenes, delivers good contrast and finer, clearer detail in each, and its low algorithmic complexity makes it particularly suitable for resource-constrained embedded devices.
Drawings
Fig. 1 is a diagram showing a comparison of smoke clouds, in which fig. 1(a) is an original, fig. 1(b) is HE enhancement, fig. 1(c) is APHE enhancement, fig. 1(d) is CLAHE enhancement, fig. 1(e) is APHE + EM, and fig. 1(f) is an algorithm of the present invention;
fig. 2 is an indoor comparison diagram, in which fig. 2(a) is an original, fig. 2(b) is HE enhancement, fig. 2(c) is APHE enhancement, fig. 2(d) is CLAHE enhancement, fig. 2(e) is APHE + EM, and fig. 2(f) is an algorithm of the present invention;
fig. 3 is a diagram of rich background outdoor contrast, where fig. 3(a) is original, fig. 3(b) is HE enhancement, fig. 3(c) is APHE enhancement, fig. 3(d) is CLAHE enhancement, fig. 3(e) is APHE + EM, and fig. 3(f) is an algorithm of the present invention;
fig. 4 is a comparison diagram of an outdoor with a normal background, where fig. 4(a) is an original, fig. 4(b) is HE enhancement, fig. 4(c) is APHE enhancement, fig. 4(d) is CLAHE enhancement, fig. 4(e) is APHE + EM, and fig. 4(f) is an algorithm of the present invention;
fig. 5 is a comparison diagram of the artificial method, where fig. 5(a) shows original, fig. 5(b) shows HE enhancement, fig. 5(c) shows APHE enhancement, fig. 5(d) shows CLAHE enhancement, fig. 5(e) shows APHE + EM, and fig. 5(f) shows the algorithm of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a method for enhancing an image of a refrigeration type infrared sensor, which is implemented according to the following steps:
step 1, counting a histogram of the whole image;
step 2, correcting a block distribution histogram;
step 3, correcting the histogram of the self-adaptive platform;
step 4, image equalization;
and 5, enhancing image details.
Each of steps 1 to 5 is implemented as set forth above (steps 1.1 through 5.5).
The method of the invention combines a pre-calculated spatial Gaussian distribution template with a pre-calculated inverse-Fourier-transform filtering template into a single filtering template, and separates the image background from the details by a filtering-and-layering method.
In order to evaluate the performance of the method, test image samples of 5 scenes were collected from a refrigeration type image sensor and from common picture material; the image enhancement results of the 5 scenes were compared against the above methods both by direct visual inspection and by quantitative analysis, using two evaluation indexes: the enhancement measure (EME) and the average gradient (AVG).
EME is a common evaluation index in image enhancement; it estimates the approximate contrast of an image by a blocking method, and a larger value indicates a visual effect more consistent with Weber's law. It is computed by dividing the image into K1 × K2 blocks, taking the maximum gray value I_max and minimum gray value I_min of each block, then summing the logarithms of their ratios and averaging over the blocks:
EME = (1/(K1 × K2)) Σ_{k=1}^{K1} Σ_{l=1}^{K2} 20 log( I_max(k, l) / I_min(k, l) ).
AVG reflects the contrast of fine details and texture variation in the image, i.e., its sharpness; the larger the value, the clearer the image. It is calculated as
AVG = (1/(M × N)) Σ_x Σ_y sqrt( ((Δx I(x, y))² + (Δy I(x, y))²) / 2 ),
where Δx I and Δy I denote the gradients of a pixel in the horizontal and vertical directions, and M, N are the numbers of rows and columns of the image.
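Under the usual definitions of these two indexes (the formula images are not reproduced in the source), a sketch; eps guards against log of zero:

```python
import numpy as np

def eme(image, k1=4, k2=4, eps=1e-6):
    """Enhancement measure over k1 x k2 blocks: mean of 20*log10(max/min)."""
    M, N = image.shape
    h, w = M // k1, N // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            blk = image[i * h:(i + 1) * h, j * w:(j + 1) * w].astype(np.float64)
            total += 20 * np.log10((blk.max() + eps) / (blk.min() + eps))
    return total / (k1 * k2)

def avg_gradient(image):
    """Average gradient: mean of sqrt((gx^2 + gy^2) / 2) over interior pixels."""
    img = image.astype(np.float64)
    gx = np.diff(img, axis=0)[:, :-1]          # gradient along rows
    gy = np.diff(img, axis=1)[:-1, :]          # gradient along columns
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))

flat = np.full((8, 8), 7.0)
# a constant image has zero contrast and zero gradient: eme == 0, avg_gradient == 0
```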
First, comparison of enhancement effects of a monotonous background smoke cloud image is shown in fig. 1, where fig. 1(a) is an original image, fig. 1(b) is HE enhancement, fig. 1(c) is APHE enhancement, fig. 1(d) is CLAHE enhancement, fig. 1(e) is APHE + EM, and fig. 1(f) is an algorithm of the present invention.
TABLE 1 Smoke cloud chart EME, AVG comparison
|     | ORIGINAL | HE     | APHE   | CLAHE  | APHE+EM | MY     |
| --- | -------- | ------ | ------ | ------ | ------- | ------ |
| EME | 0.515    | 14.67  | 11.27  | 44.91  | 42.93   | 38.22  |
| AVG | 17.28    | 2221.5 | 505.00 | 7502.5 | 1551.5  | 2041.8 |
1. First, comparing the original image (fig. 1(a)) with the algorithm of the invention (fig. 1(f)): by the EME and AVG indexes, contrast and detail are enhanced by tens of times; visually, the original image is very blurry with poor contrast, while the image processed by the algorithm of the invention has rich, clear details and markedly enhanced contrast.
2. Comparing HE enhancement (fig. 1(b)) with the algorithm of the invention (fig. 1(f)): the EME index shows clearly enhanced contrast, but visual comparison reveals that the HE algorithm drowns details and over-enhances the background: image details are almost entirely lost, a large amount of background noise is amplified, and the enhanced image quality is poor. This amplified noise is also why its AVG index is high (the "details" it measures are unwanted noise). The algorithm of the invention amplifies a large amount of smoke-cloud detail without amplifying background noise, yielding a clear and fine image.
3. Comparing APHE enhancement FIG. 1(c) with the algorithm of the present invention FIG. 1 (f):
From the EME and AVG indexes, the contrast and detail rendition of the algorithm of the invention are several times higher than APHE's. Visually, the APHE result has only moderate contrast and blurred detail, whereas the algorithm of the invention delivers strong contrast, fine detail, and a clear image.
4. Compare CLAHE enhancement fig. 1(d) with the inventive algorithm fig. 1 (f):
Analyzing the two images visually, the CLAHE algorithm over-enhances the background, amplifies a large amount of background noise, and loses the overall brightness of the image: the fact that the smoke cloud is obviously hotter than the background sky can no longer be discerned, so its enhancement effect is poor.
5. Comparing APHE + EM enhancement FIG. 1(e) with the algorithm of the present invention FIG. 1 (f):
The AVG index shows that the detail information of the algorithm of the invention is obviously higher than that of APHE+EM. Visually, the image is clear with fine detail, and the overall impression is distinctly better than the APHE+EM result.
Second, comparison of enhancement effects of rich-background indoor images is shown in fig. 2, where fig. 2(a) is original, fig. 2(b) is HE enhancement, fig. 2(c) is APHE enhancement, fig. 2(d) is CLAHE enhancement, fig. 2(e) is APHE + EM, and fig. 2(f) is the algorithm of the present invention.
TABLE 2 indoor graphs EME, AVG comparison
| ORIGINAL | HE | APHE | CLAHE | APHE+EM | MY
---|---|---|---|---|---|---
EME | 0.65 | 11.02 | 8.41 | 39.58 | 14.34 | 47.29
AVG | 49.48 | 1282.8 | 849.51 | 4318.8 | 1530 | 1928.5
1. Comparing the original figure 2(a) with the algorithm of the present invention figure 2 (f):
From the EME and AVG indexes, contrast and detail improve by dozens of times over the original. Visually, the original image has extremely poor contrast and is so blurry that the objects in the indoor scene are almost indistinguishable. After enhancement by the algorithm of the invention, contrast is markedly stronger: every indoor object can be clearly distinguished, down to fine lines and even screws, and the whole image is crisp and detailed.
2. Comparing HE enhancement algorithm fig. 2(b) with the inventive algorithm fig. 2 (f):
By the EME index, the contrast strength of the algorithm of the invention is 4 times that of the HE algorithm, and its AVG index is also clearly higher. Visually, the HE algorithm submerges many image details. For example, the upper-left part of the image shows the hot-air outlet of an air conditioner fitted with an air-guide cloth; the cloth flutters in the airflow, so the hot air heats it unevenly and it should show warm and cool stripes, yet HE submerges all detail of the cloth, which can barely be seen at all. Small items in the scene, such as small parts and screws, likewise cannot be distinguished.
3. Comparing APHE enhancement algorithm FIG. 2(c) with the present invention algorithm FIG. 2 (f):
By the EME and AVG indexes, the contrast and detail rendition of the algorithm of the invention are several times better than those of the APHE algorithm. Visually, APHE discards a large amount of detail: the air-guide cloth loses its brightness variation and objects in the scene look slightly blurred. With the algorithm of the invention, the brightness variation of the cloth is distinguishable and the objects in the scene are clear and fine; its overall appearance is obviously superior.
4. Compare the CLAHE enhancement algorithm of FIG. 2(d) with the algorithm of the present invention of FIG. 2(f)
Visual comparison shows that the CLAHE algorithm completely loses the global brightness information of the image: although many details are displayed, the fact that the hot-air outlet is obviously hotter than the other objects in the scene disappears, which hinders the identification of high-temperature anomalies in an infrared image. The algorithm of the invention amplifies many image details without losing the global brightness information. Moreover, by the EME index, the whole-image contrast of the algorithm of the invention is higher than CLAHE's.
5. Comparing APHE + EM enhancement algorithm FIG. 2(e) with the inventive algorithm FIG. 2 (f):
By the EME and AVG indexes, the algorithm of the invention is obviously superior to APHE+EM. Visually, APHE+EM loses the light-and-dark information of the air-guide cloth; its detail rendition of the other objects in the scene is good, yet clearly inferior to the present method, whose details are finer and clearer.
Third, comparing the outdoor image enhancement effect with rich background as shown in fig. 3, fig. 3(a) is original image, fig. 3(b) is HE enhancement, fig. 3(c) is APHE enhancement, fig. 3(d) is CLAHE enhancement, fig. 3(e) is APHE + EM, and fig. 3(f) is the algorithm of the present invention.
TABLE 3 Rich background outdoor EME, AVG contrast
| ORIGINAL | HE | APHE | CLAHE | APHE+EM | MY
---|---|---|---|---|---|---
EME | 1.83 | 13.10 | 13.10 | 44.24 | 23.87 | 71.03
AVG | 124.37 | 1491 | 1394.4 | 4262.9 | 2243.2 | 4232.3
1. Comparing the original image fig. 3(a) with the algorithm of the present invention fig. 3 (f):
By the EME and AVG indexes, contrast and detail rendition increase by dozens of times after enhancement by the algorithm. Visually, the original image has poor contrast, is blurry, and the details of the building cannot be distinguished; after enhancement by the present method, the overall contrast is strong and the details are clear and fine.
2. Comparing HE enhancement algorithm fig. 3(b) with the inventive algorithm fig. 3 (f):
From the EME and AVG indexes, the contrast and detail rendition of the algorithm of the invention are several times higher than HE's. Visually, the HE result excessively enlarges the difference between the building and the background, submerging a large amount of detail, and the image quality is poor.
3. Comparing the APHE enhancement algorithm FIG. 3(c) with the inventive algorithm FIG. 3 (f):
From the EME and AVG indexes, the algorithm of the invention performs several times better than APHE in contrast and detail rendition. Visually, the APHE-enhanced image has blurred edges and excessively harsh building contrast, whereas the algorithm of the invention achieves strong contrast without losing information, with clear, fine detail.
4. Compare the CLAHE enhancement algorithm fig. 3(d) with the inventive algorithm fig. 3 (f):
By the EME index, the whole-image contrast of the algorithm of the invention is obviously superior to CLAHE's. Visually, CLAHE loses the global information of the image, so the temperature difference between the building and the sky cannot be discerned. The algorithm of the invention amplifies details while retaining the global light-and-dark information, giving strong overall contrast and clear, fine detail.
5. Comparing APHE + EM enhancement algorithm FIG. 3(e) with the inventive algorithm FIG. 3 (f):
By the EME and AVG indexes, the algorithm of the invention is obviously superior to APHE+EM. Visually, APHE+EM renders detail well, but its detail rendition is still clearly inferior to the present algorithm's.
Fourth, comparing outdoor image enhancement effects with a single background as shown in fig. 4, fig. 4(a) is original image, fig. 4(b) is HE enhancement, fig. 4(c) is APHE enhancement, fig. 4(d) is CLAHE enhancement, fig. 4(e) is APHE + EM, and fig. 4(f) is the algorithm of the present invention.
TABLE 4 common background outdoor map EME, AVG comparison
| ORIGINAL | HE | APHE | CLAHE | APHE+EM | MY
---|---|---|---|---|---|---
EME | 0.89 | 6.16 | 6.67 | 27.93 | 11.87 | 35.42
AVG | 185.5 | 477.58 | 513.65 | 5503.1 | 1165.2 | 2519.2
1. Comparing the original image fig. 4(a) with the algorithm of the present invention fig. 4 (f):
By the EME and AVG indexes, contrast and detail rendition increase by dozens of times after enhancement by the present algorithm. Visually, the original image is dim, as if seen through thin fog; its details are blurred and indistinguishable, and the telegraph poles, trees, and buildings in the scene can hardly be told apart.
2. Comparing HE enhancement algorithm fig. 4(b) with the inventive algorithm fig. 4 (f):
By the EME and AVG indexes, the algorithm of the invention performs several times better than the HE enhancement algorithm on both measures. Visually, the HE result is still hazy with residual halos, its hot-to-cold transitions are excessively hard, and the background is over-enhanced.
3. Comparing APHE enhancement algorithm FIG. 4(c) with the present invention algorithm FIG. 4 (f):
By the EME and AVG indexes, the algorithm of the invention performs several times better than the APHE algorithm on both measures. Visually, APHE defogs well but still shows an obvious halo, and although the scenery is distinguishable the image is slightly blurred; the present algorithm produces no halo and gives a clear, bright image with crisp detail.
4. Compare the CLAHE enhancement algorithm fig. 4(d) with the inventive algorithm fig. 4 (f):
By the EME index, the algorithm of the invention is obviously superior to CLAHE in whole-image contrast. Visually, CLAHE discards the global information of the image: for the sun, sky, trees, and buildings, it is impossible to tell which is hot and which is cold, making CLAHE nearly unusable wherever thermal analysis matters.
5. Comparing APHE + EM enhancement algorithm FIG. 4(e) with the inventive algorithm FIG. 4 (f):
By the EME and AVG indexes, the algorithm of the invention is obviously superior to APHE+EM on both measures. Visually, the APHE+EM result is quite good: soft transitions, clear detail, almost no halo, and effective defogging. Nevertheless, the algorithm of the invention shows clearly stronger processing capability, with even clearer and finer detail.
Fig. 5 shows comparison of enhancement effects of artificially created strong background and weak detail images, where fig. 5(a) shows original images, fig. 5(b) shows HE enhancement, fig. 5(c) shows APHE enhancement, fig. 5(d) shows CLAHE enhancement, fig. 5(e) shows APHE + EM, and fig. 5(f) shows the algorithm of the present invention.
TABLE 5 Artificial images EME, AVG comparison
| ORIGINAL | HE | APHE | CLAHE | APHE+EM | MY
---|---|---|---|---|---|---
EME | 4.24 | 4.61 | 4.71 | 6.71 | 4.82 | 7.54
AVG | 207.58 | 173.11 | 362.96 | 521.61 | 429.27 | 1141.3
To further verify the scene adaptability of the algorithm of the invention, an image was artificially created whose background is very large (background pixels occupy almost 94% of the image), whose details are very weak, and whose detail pixels span only 32 gray levels.
1. Comparing the original image fig. 5(a) with the algorithm of the present invention fig. 5 (f):
By the EME and AVG indexes, contrast and detail rendition are obviously enhanced after processing by the algorithm. Visually, the upper half of the original image can hardly be seen at all; after processing by the present algorithm, the picture hidden in the black background is revealed.
2. Comparing HE enhancement algorithm fig. 5(b) with the inventive algorithm fig. 5 (f):
By the EME and AVG indexes, the algorithm of the invention is obviously superior to HE. Visually, the HE-enhanced image is blurred and its details indistinguishable, whereas after processing by the present algorithm the distant mountain, the nearby road, the trees, and the foreground figures can all be seen clearly.
3. Comparing the APHE enhancement algorithm FIG. 5(c) with the inventive algorithm FIG. 5 (f):
By the EME and AVG indexes, the algorithm of the invention is obviously superior to APHE. Visually, APHE recovers the image hidden in the black background, but its overall contrast is poor and its details blurred; the present algorithm's overall contrast is obviously better and its details are clear.
4. Compare CLAHE enhancement algorithm fig. 5(d) with the inventive algorithm fig. 5 (f):
By the EME and AVG indexes, the algorithm of the invention is obviously superior to CLAHE. Visually, the upper and lower halves of the CLAHE-processed image have almost the same brightness, whereas in reality the upper half is hidden in a dark black background and the lower half is bright: CLAHE has lost the global light-and-dark information of the image.
5. Comparing APHE + EM enhancement algorithm FIG. 5(e) with the inventive algorithm FIG. 5 (f):
By the EME and AVG indexes, the algorithm of the invention is obviously superior to APHE+EM. Visually, the APHE+EM result has noticeably weak contrast; its details are legible but still slightly blurred. The image processed by the present algorithm has strong contrast and fine, clear detail: the distant mountain, the nearby road, the trees, and the foreground figures are easy to see and read.
Summarizing the five scenes: the HE algorithm loses a large amount of image detail in almost every scene. The APHE algorithm retains many details but improves contrast only weakly, performing well only in the rich-background outdoor scene. The CLAHE algorithm extracts detail strongly but loses the global light-and-dark information in almost every scene, and its detail-extraction advantage collapses on the artificial image. The APHE+EM algorithm improves detail considerably but cannot remedy APHE's poor contrast. Each of these algorithms has advantages in particular scenes, yet their performance fluctuates greatly and their visual quality degrades when scenes change. The algorithm of the invention performs excellently in every scene, fluctuates little across scene changes, adapts strongly, and renders clear detail; moreover, it is not complex and can easily be implemented on resource-limited embedded devices.
Claims (6)
1. A method for enhancing an image of a refrigeration type infrared sensor is characterized by comprising the following steps:
step 1, counting a histogram of the whole image;
step 2, correcting a block distribution histogram;
step 3, correcting the histogram of the self-adaptive platform;
step 4, image equalization;
and 5, enhancing image details.
2. The method for image enhancement of a refrigeration-type infrared sensor according to claim 1, wherein the step 1 is as follows:
step 1.1, defining x as the abscissa variable of the image, ranging from 1 to N, where N is the maximum abscissa of the image; defining y as the ordinate variable, ranging from 1 to M, where M is the maximum ordinate; defining image as the original image and image(x, y) as the pixel value at abscissa x and ordinate y in the original image, with value range 0 to 2^PBW − 1, where PBW is the pixel bit width; defining pix as a pixel-value variable with value range 0 to 2^PBW − 1; and defining F(pix) as the number of occurrences of the pixel value pix in the original image;
step 1.2, letting x = 1 and y = 1, and initializing F(pix) to 0 for all pix in the range 0 to 2^PBW − 1;
step 1.3, executing F(image(x, y)) = F(image(x, y)) + 1;
step 1.4, judging whether y is equal to M; if y is equal to M, executing y = 1 and skipping to step 1.5; otherwise, executing y = y + 1 and skipping to step 1.3;
step 1.5, judging whether x is equal to N; if x is equal to N, skipping to step 1.6; otherwise, executing x = x + 1 and skipping to step 1.3;
and step 1.6, completing the statistics of F (pix), and carrying out the next operation.
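A minimal sketch of the step-1 statistics (claim 2), assuming the image is a NumPy array of integer pixel values and, as an illustrative assumption, a 14-bit pixel width typical of cooled sensors. The explicit loop mirrors steps 1.2 through 1.5; `np.bincount` would compute the same histogram in one call:

```python
import numpy as np

def global_histogram(image, pbw=14):
    """Step 1: count F(pix), the occurrences of each pixel value.
    pbw is the pixel bit width PBW (14 bits is an assumption here)."""
    F = np.zeros(2 ** pbw, dtype=np.int64)  # step 1.2: initialize all F(pix) to 0
    for v in image.ravel():
        F[v] += 1                            # step 1.3: F(image(x, y)) += 1
    return F
```

Equivalently, `np.bincount(image.ravel(), minlength=2**pbw)` produces the same array much faster.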
3. The method for image enhancement of a refrigeration-type infrared sensor according to claim 2, wherein the step 2 is as follows:
step 2.1, partitioning the image:
dividing the original image into equal blocks, where Wblock is the number of divisions along the ordinate and Hblock the number of divisions along the abscissa, so the original image is divided into Hblock × Wblock blocks; each block has ordinate size m = M/Wblock and abscissa size n = N/Hblock;
step 2.2, counting block distribution:
step 2.2.1, defining B(pix) as a block statistic giving the number of blocks in which the pixel value pix occurs, with pix ranging from 0 to 2^PBW − 1; all B(pix) are initialized to 0, and B(pix) ranges from 0 to Hblock × Wblock; defining hblock as the abscissa block index, ranging from 1 to Hblock, and wblock as the ordinate block index, ranging from 1 to Wblock; defining xblock as the abscissa variable of a pixel within one block, ranging from 1 to n, and yblock as the ordinate variable within one block, ranging from 1 to m; skipping to step 2.2.2;
step 2.2.2, firstly, let pix be 0, hblock be 1, wblock be 1, xblock be 1, yblock be 1; skipping to step 2.2.3;
step 2.2.3, referring to F(pix) counted in step 1: if F(pix) > 0, jumping to step 2.2.5; otherwise, jumping to step 2.2.4;
step 2.2.4, judging whether pix equals 2^PBW − 1; if so, jumping to step 2.2.10; otherwise, executing pix = pix + 1 and jumping to step 2.2.3;
step 2.2.5, taking the block in row hblock and column wblock and extracting the pixel value image((hblock − 1) × n + xblock, (wblock − 1) × m + yblock) at intra-block abscissa xblock and ordinate yblock; if this pixel value equals pix, executing B(pix) = B(pix) + 1, xblock = 1, and yblock = 1 and skipping to step 2.2.8; otherwise, skipping to step 2.2.6;
step 2.2.6, judging whether yblock equals m; if so, executing yblock = 1 and jumping to step 2.2.7; otherwise, executing yblock = yblock + 1 and jumping to step 2.2.5;
step 2.2.7, judging whether xblock equals n; if so, executing xblock = 1 and jumping to step 2.2.8; otherwise, executing xblock = xblock + 1 and jumping to step 2.2.5;
step 2.2.8, judging whether wblock equals Wblock; if so, executing wblock = 1 and jumping to step 2.2.9; otherwise, executing wblock = wblock + 1 and jumping to step 2.2.5;
step 2.2.9, judging whether hblock equals Hblock; if so, executing hblock = 1 and jumping to step 2.2.4; otherwise, executing hblock = hblock + 1 and jumping to step 2.2.5;
step 2.2.10, completing block counting, wherein B (pix) is the counted pixel block number;
step 2.3, correcting the histogram in step 1:
generating a new histogram by the formula N(pix) = F(pix)/B(pix), where N(pix) is the corrected histogram, traversing all effective pixel values, i.e., pix ∈ [0, 2^PBW − 1] with B(pix) > 0.
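The block-corrected histogram of claim 3 can be sketched as below. This is a vectorized restatement of steps 2.1 through 2.3 (count, per value, the number of blocks in which it occurs, then divide F by B); the nested per-pixel loop of the claim is replaced by `np.unique` over each block, and any remainder rows/columns when the image does not divide evenly are ignored, which is an assumption of this sketch:

```python
import numpy as np

def block_corrected_histogram(image, F, hblock=8, wblock=8):
    """Steps 2.1-2.3: B(pix) = number of blocks containing value pix;
    corrected histogram N(pix) = F(pix) / B(pix) where B(pix) > 0."""
    M, N = image.shape
    m, n = M // wblock, N // hblock        # block ordinate / abscissa size
    B = np.zeros_like(F)
    for i in range(wblock):
        for j in range(hblock):
            blk = image[i * m:(i + 1) * m, j * n:(j + 1) * n]
            B[np.unique(blk)] += 1          # each value present in this block counts once
    Nh = np.zeros(F.shape, dtype=np.float64)
    mask = B > 0
    Nh[mask] = F[mask] / B[mask]            # N(pix) = F(pix) / B(pix)
    return Nh, B
```

Dividing by B(pix) down-weights values concentrated in a few blocks (typically flat background), which is what lets the later equalization favor spatially distributed detail.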
4. The method for image enhancement of a refrigeration-type infrared sensor according to claim 3, wherein the step 3 is as follows:
step 3.1, calculating the number of all the appeared pixel values of the original image:
step 3.1.1, defining nz_pix as the number of nonzero pixel values, with initial value nz_pix = 0; letting pix = 0;
step 3.1.2, determining whether F(pix) is greater than 0; if so, executing nz_pix = nz_pix + 1; in either case, proceeding to step 3.1.3;
step 3.1.3, judging whether pix equals 2^PBW − 1; if so, jumping to step 3.1.4; otherwise, executing pix = pix + 1 and jumping to step 3.1.2;
step 3.1.4, when nz _ pix is already calculated, the value is the number of all the appeared pixel values of the original image;
step 3.2, calculating the plateau value L (by the formula shown in the specification); L will be used for the correction of the histogram below;
step 3.3, correcting N(pix) generated in step 2.3 by topping with the plateau value L obtained in step 3.2, i.e., P(pix) = L if N(pix) > L, and P(pix) = N(pix) if N(pix) ≤ L, where P(pix) is the histogram after platform correction;
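A sketch of the plateau correction of claim 4. The patent's exact formula for L is shown only in the specification figures and is not legible here, so this sketch assumes a common choice: the mean of N(pix) over the nz_pix values that actually occur in the image. The clipping step itself follows the claim exactly (P(pix) = min(N(pix), L)):

```python
import numpy as np

def plateau_clip(Nh, F):
    """Step 3: compute an assumed plateau value L and the top-clipped
    histogram P(pix) = min(N(pix), L)."""
    nz_pix = np.count_nonzero(F)       # step 3.1: number of occurring pixel values
    L = Nh[F > 0].sum() / nz_pix       # assumed plateau value (mean of occurring N(pix))
    return np.minimum(Nh, L), L        # step 3.3: plateau topping
```

Clipping at L bounds how much any single gray level can stretch during equalization, which is what prevents the background over-enhancement seen in plain HE.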
5. The method for image enhancement of a refrigeration-type infrared sensor according to claim 4, wherein the step 4 is as follows:
step 4.1, defining image_eq as the equalized image and image_eq(x, y) as the pixel value at abscissa x and ordinate y of the equalized image; defining CDF(pix) as the cumulative distribution function of P(pix) and ROUND() as rounding to the nearest integer; letting x = 1 and y = 1;
step 4.2, calculating image_eq(x, y) = ROUND(CDF(pix(x, y)) × (2^PBW − 1));
Step 4.3, judging whether y is equal to M, if so, executing y to be 1, and skipping to step 4.4; otherwise, executing y +1, and skipping to the step 4.2;
4.4, judging whether x is equal to N, and if x is equal to N, skipping to the step 4.5; otherwise, executing x ═ x +1, and skipping to the step 4.2;
and 4.5, completing calculation, wherein the newly generated image _ eq is the balanced image.
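The equalization of claim 5 amounts to building a look-up table from the CDF of the plateau-corrected histogram and applying it to every pixel. A sketch, assuming P is the clipped histogram from step 3 and pbw is the pixel bit width:

```python
import numpy as np

def equalize(image, P, pbw=14):
    """Step 4: image_eq(x, y) = ROUND(CDF(pix(x, y)) * (2**pbw - 1))."""
    cdf = np.cumsum(P)
    cdf = cdf / cdf[-1]                            # normalize CDF to [0, 1]
    lut = np.round(cdf * (2 ** pbw - 1)).astype(np.int64)
    return lut[image]                              # apply mapping as a look-up table
```

Applying the mapping as a table (rather than per pixel as in steps 4.2 to 4.4) gives the same result for every pixel value in one vectorized indexing operation.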
6. The method for image enhancement of a refrigeration-type infrared sensor according to claim 5, wherein the step 5 is as follows:
step 5.1, firstly generating a Gaussian space confidence coefficient template:
the window size of the Gaussian spatial confidence template is selected as W × W, where W is the maximum of the template's horizontal and vertical coordinates. The template is generated by the formula Gs(x, y) = exp(−((x − x′)² + (y − y′)²) / (2σ_S²)), where (x′, y′) is the pixel coordinate of the template's center point, taken as x′ = y′ = (W + 1)/2; (x, y) is a coordinate position adjacent to (x′, y′), each coordinate ranging from 1 to W; σ_S is the distance standard deviation; and Gs(x, y) is the coefficient at abscissa x and ordinate y of the Gaussian spatial confidence template;
step 5.2, generating a low-pass spatial domain template:
first, the window size of the low-pass frequency-domain template is set to W × W, where W is the maximum of the template's horizontal and vertical coordinates, and FLpass(x, y) is the template coefficient at abscissa x and ordinate y. The template takes the value FLpass(x, y) = 1 when (x, y) ∈ {(1, 1), (1, 2), (2, 1), (W − 1, 1), (1, W − 1)}, and FLpass(x, y) = 0 otherwise, so only the direct-current and low-frequency components are retained. An inverse Fourier transform of this template then yields the low-pass spatial-domain template KLpass(x, y), whose value at abscissa x and ordinate y is the coefficient KLpass(x, y);
step 5.3, point-multiplying the Gaussian spatial confidence template Gs(x, y) generated in step 5.1 with the low-pass spatial-domain template KLpass(x, y) generated in step 5.2 to produce the layered filtering template GL, i.e., GL(x, y) = Gs(x, y) × KLpass(x, y), where GL(x, y) is the coefficient at abscissa x and ordinate y of the layered filtering template;
and 5.4, filtering and layering the equalized image _ eq generated in the step 4, wherein the filtering method comprises the following steps:
image_eq(x, y) is the pixel value at abscissa x and ordinate y of the equalized image generated in step 4; defining image_base as the base-layer image obtained by filtering image_eq with the template GL, with image_base(x, y) its pixel value at abscissa x and ordinate y; defining image_detail as the detail image, with image_detail(x, y) its pixel value at abscissa x and ordinate y, calculated as image_detail(x, y) = image_eq(x, y) − image_base(x, y);
and 5.5, amplifying the detail image _ detail to generate a final enhanced image _ enhance (x, y):
image_enhance(x,y)=image_base(x,y)+K*image_detail(x,y);
where K is the detail gain coefficient, a user-adjustable variable that controls the amplitude of detail enhancement.
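The detail-enhancement step of claim 6 can be sketched as below. The Gaussian spatial template follows the formula in step 5.1; since the FFT construction of the low-pass template in step 5.2 is only partly legible here, this sketch substitutes a simple averaging kernel as the low-pass surrogate, which is an assumption, not the patent's exact template. The base/detail split and the gain K follow steps 5.4 and 5.5:

```python
import numpy as np

def detail_enhance(image_eq, W=5, sigma_s=2.0, K=2.0):
    """Step 5 sketch: layered filter GL = Gs * KLpass, base/detail split,
    then image_enhance = image_base + K * image_detail."""
    c = (W + 1) / 2 - 1                       # 0-based center, per x' = y' = (W+1)/2
    y, x = np.mgrid[0:W, 0:W]
    Gs = np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma_s ** 2))
    KL = np.full((W, W), 1.0 / (W * W))       # assumed low-pass surrogate (averaging)
    GL = Gs * KL                              # step 5.3: point-wise product
    GL /= GL.sum()                            # normalize so a flat image is unchanged
    pad = W // 2                              # 'same'-size convolution via edge padding
    padded = np.pad(image_eq.astype(np.float64), pad, mode='edge')
    base = np.zeros(image_eq.shape, dtype=np.float64)
    for i in range(image_eq.shape[0]):
        for j in range(image_eq.shape[1]):
            base[i, j] = np.sum(padded[i:i + W, j:j + W] * GL)
    detail = image_eq - base                  # image_detail = image_eq - image_base
    return base + K * detail                  # image_enhance = base + K * detail
```

With K = 1 the image is reconstructed unchanged; K > 1 amplifies only the high-frequency residual, leaving the base-layer brightness distribution intact.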
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010506218.XA CN111768355B (en) | 2020-06-05 | 2020-06-05 | Method for enhancing image of refrigeration type infrared sensor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010506218.XA CN111768355B (en) | 2020-06-05 | 2020-06-05 | Method for enhancing image of refrigeration type infrared sensor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111768355A true CN111768355A (en) | 2020-10-13 |
CN111768355B CN111768355B (en) | 2023-02-10 |
Family
ID=72720086
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010506218.XA Active CN111768355B (en) | 2020-06-05 | 2020-06-05 | Method for enhancing image of refrigeration type infrared sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111768355B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112381736A (en) * | 2020-11-17 | 2021-02-19 | 深圳市歌华智能科技有限公司 | Image enhancement method based on scene block |
CN114283156A (en) * | 2021-12-02 | 2022-04-05 | 珠海移科智能科技有限公司 | Method and device for removing document image color and handwriting |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101980282A (en) * | 2010-10-21 | 2011-02-23 | 电子科技大学 | A method for enhancing dynamic details of infrared images |
CN103177429A (en) * | 2013-04-16 | 2013-06-26 | 南京理工大学 | FPGA (field programmable gate array)-based infrared image detail enhancing system and method |
CN105654438A (en) * | 2015-12-27 | 2016-06-08 | 西南技术物理研究所 | Gray scale image fitting enhancement method based on local histogram equalization |
-
2020
- 2020-06-05 CN CN202010506218.XA patent/CN111768355B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101980282A (en) * | 2010-10-21 | 2011-02-23 | 电子科技大学 | A method for enhancing dynamic details of infrared images |
CN103177429A (en) * | 2013-04-16 | 2013-06-26 | 南京理工大学 | FPGA (field programmable gate array)-based infrared image detail enhancing system and method |
CN105654438A (en) * | 2015-12-27 | 2016-06-08 | 西南技术物理研究所 | Gray scale image fitting enhancement method based on local histogram equalization |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112381736A (en) * | 2020-11-17 | 2021-02-19 | 深圳市歌华智能科技有限公司 | Image enhancement method based on scene block |
CN114283156A (en) * | 2021-12-02 | 2022-04-05 | 珠海移科智能科技有限公司 | Method and device for removing document image color and handwriting |
CN114283156B (en) * | 2021-12-02 | 2024-03-05 | 珠海移科智能科技有限公司 | Method and device for removing document image color and handwriting |
Also Published As
Publication number | Publication date |
---|---|
CN111768355B (en) | 2023-02-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Tan et al. | Exposure based multi-histogram equalization contrast enhancement for non-uniform illumination images | |
CN107527332B (en) | Low-illumination image color retention enhancement method based on improved Retinex | |
CN103679173B (en) | Method for detecting image salient region | |
Lee et al. | Adaptive multiscale retinex for image contrast enhancement | |
Celik | Two-dimensional histogram equalization and contrast enhancement | |
Kim et al. | Optimized contrast enhancement for real-time image and video dehazing | |
CN108765336B (en) | Image dehazing method based on dark and bright primary color prior and adaptive parameter optimization | |
CN105046677B (en) | A kind of enhancing treating method and apparatus for traffic video image | |
CN111476725A (en) | Image defogging enhancement algorithm based on gradient domain oriented filtering and multi-scale Retinex theory | |
CN114118144A (en) | Anti-interference accurate aerial remote sensing image shadow detection method | |
CN110084760A (en) | A kind of adaptive grayscale image enhancement method of the overall situation based on double gamma corrections | |
CN110827218B (en) | Airborne image defogging method based on weighted correction of HSV (hue, saturation, value) transmissivity of image | |
CN103295191A (en) | Multi-scale vision self-adaptation image enhancing method and evaluating method | |
Dharejo et al. | A color enhancement scene estimation approach for single image haze removal | |
CN114331873A (en) | Non-uniform illumination color image correction method based on region division | |
CN111768355B (en) | Method for enhancing image of refrigeration type infrared sensor | |
Kim et al. | Single image haze removal using hazy particle maps | |
CN113496531A (en) | Infrared image dynamic range compression method and system | |
CN114219732A (en) | Image defogging method and system based on sky region segmentation and transmissivity refinement | |
CN114677289A (en) | An image dehazing method, system, computer equipment, storage medium and terminal | |
Yuan et al. | Image dehazing based on a transmission fusion strategy by automatic image matting | |
CN111598812A (en) | Image defogging method based on RGB and HSV double-color space | |
CN110349113B (en) | Adaptive image defogging method based on dark primary color priori improvement | |
CN108550124B (en) | Illumination compensation and image enhancement method based on bionic spiral | |
CN104715456A (en) | Image defogging method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |