A multi-focus image fusion method and system based on FGF
Technical field
The invention belongs to the technical field of optical image security, and more particularly relates to a multi-focus image fusion method and system based on FGF (fast guided filtering).
Background art
Because an optical sensor imaging system can form a sharp image only of targets that lie within its depth of focus, targets outside the focal plane are imaged with blur. The limited focusing range therefore prevents an optical imaging system from capturing every object in a scene sharply in a single exposure. Understanding the entire scene would otherwise require analyzing a large number of similar images, which wastes time and effort and also wastes storage space. Obtaining, through image fusion, a single image in which all objects of a scene appear sharp allows the scene information to be reflected more completely and faithfully, which is of great significance for accurate image analysis and understanding; multi-focus image fusion is one of the effective technical means of achieving this goal.
Multi-focus image fusion takes several registered images of a scene acquired under identical imaging conditions but with different focus settings, detects and extracts the sharp (in-focus) region of each image by means of an activity measure, and then merges these regions according to a fusion rule into a single image in which all objects of the scene are sharp. Multi-focus image fusion can characterize the scene targets completely, laying a good foundation for feature extraction, target recognition, tracking, and similar tasks; it effectively improves the utilization of image information and the reliability of target detection and recognition, extends spatial and temporal coverage, and reduces uncertainty.
The key to multi-focus image fusion is to characterize the focused regions accurately, so that the regions or pixels lying within the depth of focus can be precisely located and extracted; this remains one of the problems that multi-focus image fusion technology has not yet solved satisfactorily. Research on image fusion has been carried out for more than 30 years. With the continuous development of computing and imaging technology, researchers at home and abroad have proposed hundreds of well-performing fusion algorithms for the problem of locating and extracting the focused regions. These multi-focus fusion algorithms fall broadly into two classes: spatial-domain algorithms and transform-domain algorithms. Spatial-domain fusion algorithms work on the gray values of the source-image pixels, extract the in-focus pixels or regions with various focus measures, and assemble the fused image according to a fusion rule. Their advantages are simplicity, ease of implementation, low computational complexity, and preservation of the original information of the source images; their disadvantages are sensitivity to noise and a tendency to produce "blocking artifacts". Transform-domain fusion algorithms transform the source images, process the transform coefficients according to a fusion rule, and obtain the fused image by inverse transformation of the processed coefficients. Their main shortcomings are a complex, time-consuming decomposition, the large storage occupied by the high-frequency coefficients, and the ease with which information is lost during fusion. Moreover, changing a single transform coefficient of the fused image alters the spatial-domain gray values of the whole image, so that enhancing the attributes of one image region can introduce unwanted artificial interference. The more common pixel-level multi-focus fusion methods are the following:
(1) The multi-focus image fusion method based on the Laplacian pyramid (Laplacian Pyramid, LAP). Its main process is to perform a Laplacian pyramid decomposition of the source images, merge the high- and low-frequency coefficients with a suitable fusion rule, and obtain the fused image by inverse transformation of the fused pyramid coefficients. The method has good time-frequency localization and achieves good results, but the data of the decomposition layers are redundant and the correlation between layers cannot be determined; its ability to extract detail is poor, high-frequency information is lost heavily during decomposition, and the quality of the fused image suffers directly.
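As an illustration of this prior-art scheme (not of the invention), a minimal Laplacian-pyramid fusion sketch in Python with OpenCV and NumPy follows; the four pyramid levels, the max-absolute-value rule for the band-pass levels, and the averaging of the coarsest level are assumptions made for the example.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: band-pass levels plus the coarsest Gaussian level."""
    gauss = [img.astype(np.float64)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        lap.append(gauss[i] - up)
    lap.append(gauss[-1])                            # coarsest (low-frequency) level
    return lap

def lap_fuse(img1, img2, levels=4):
    p1, p2 = laplacian_pyramid(img1, levels), laplacian_pyramid(img2, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)  # keep the stronger band-pass response
             for a, b in zip(p1[:-1], p2[:-1])]
    fused.append((p1[-1] + p2[-1]) / 2.0)            # average the coarsest level
    out = fused[-1]                                  # collapse the pyramid
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lvl.shape[1], lvl.shape[0])) + lvl
    return np.clip(out, 0, 255).astype(np.uint8)
```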
(2) The multi-focus image fusion method based on the wavelet transform (Discrete Wavelet Transform, DWT). Its main process is to perform a wavelet decomposition of the source images, merge the high- and low-frequency coefficients with a suitable fusion rule, and obtain the fused image by inverse wavelet transformation of the fused coefficients. The method has good time-frequency localization and achieves good results, but the two-dimensional wavelet basis is built from one-dimensional bases by tensor products: it is optimal for representing point singularities of an image but cannot sparsely represent its line and surface singularities. In addition, the DWT is a down-sampling transform and lacks shift invariance, so information is easily lost during fusion and the fused image is distorted.
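An illustrative sketch of DWT fusion with the PyWavelets package follows; the 'db2' wavelet, the three-level decomposition, and the max-absolute-value rule for the detail sub-bands are assumptions made for the example, not part of this prior-art method as described above.

```python
import numpy as np
import pywt

def dwt_fuse(img1, img2, wavelet="db2", level=3):
    c1 = pywt.wavedec2(img1.astype(np.float64), wavelet, level=level)
    c2 = pywt.wavedec2(img2.astype(np.float64), wavelet, level=level)
    fused = [(c1[0] + c2[0]) / 2.0]                     # average the approximation band
    for bands1, bands2 in zip(c1[1:], c2[1:]):          # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(bands1, bands2)))
    out = pywt.waverec2(fused, wavelet)
    return np.clip(out[:img1.shape[0], :img1.shape[1]], 0, 255)  # crop any padding
```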
(3) The multi-focus image fusion method based on the non-subsampled contourlet transform (Non-Subsampled Contourlet Transform, NSCT). Its main process is to perform an NSCT decomposition of the source images, merge the high- and low-frequency coefficients with a suitable fusion rule, and obtain the fused image by inverse NSCT of the fused coefficients. The method achieves good fusion results, but it runs slowly and the decomposition coefficients occupy a large amount of storage.
(4) The multi-focus image fusion method based on principal component analysis (Principal Component Analysis, PCA). Its main process is to rearrange each source image into a vector in row-major or column-major order, compute the covariance matrix, obtain its eigenvectors, take the eigenvector corresponding to the first principal component, derive from it the fusion weight of each source image, and perform a weighted fusion with these weights. When the source images share common characteristics, the method achieves good fusion results; when the features of the source images differ greatly, false information is easily introduced into the fused image and the result is distorted. The method is computationally simple and fast, but the gray value of a single pixel cannot express the focus characteristics of the surrounding image region, so the fused image suffers from soft edges and low contrast.
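A minimal sketch of the PCA weight derivation for two registered gray-scale source images follows; it illustrates only the weighting step described above.

```python
import numpy as np

def pca_fuse(img1, img2):
    x = np.stack([img1.ravel(), img2.ravel()]).astype(np.float64)
    cov = np.cov(x)                              # 2x2 covariance of the two images
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])         # eigenvector of the first principal component
    w1, w2 = v[0] / v.sum(), v[1] / v.sum()      # per-image fusion weights
    return w1 * img1.astype(np.float64) + w2 * img2.astype(np.float64)
```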
(5) The multi-focus image fusion method based on spatial frequency (Spatial Frequency, SF). Its main process is to partition the source images into blocks, compute the SF of each block, compare the SF of corresponding blocks of the source images, and assemble the fused image from the blocks with the larger SF values. The method is simple and easy to implement, but the block size is difficult to determine adaptively. If the blocks are too large, out-of-focus pixels are included, fusion quality and contrast drop, and blocking artifacts appear; if the blocks are too small, their ability to characterize region sharpness is limited, blocks are easily chosen wrongly, consistency between adjacent sub-blocks becomes poor, obvious detail differences appear at the boundaries, and "blocking artifacts" are produced. Moreover, the focus characteristics of an image sub-block are hard to describe accurately, and how the local features of a sub-block are used to describe them directly affects the accuracy of focused-block selection and the quality of the fused image.
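The spatial frequency of a block, the focus measure used by this class of methods, can be computed as below; the 8 × 8 block size mentioned in the closing comment is an assumption.

```python
import numpy as np

def spatial_frequency(block):
    b = block.astype(np.float64)
    rf = np.sqrt(np.mean((b[:, 1:] - b[:, :-1]) ** 2))   # row frequency (horizontal gradients)
    cf = np.sqrt(np.mean((b[1:, :] - b[:-1, :]) ** 2))   # column frequency (vertical gradients)
    return np.sqrt(rf ** 2 + cf ** 2)

# For each pair of corresponding blocks (e.g. 8x8), the block with the larger SF
# is copied into the fused image.
```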
(6) The multi-focus image fusion method based on convolutional sparse representation (Convolutional Sparse Representation, CSR). Its main process is to perform a CSR decomposition of the source images to obtain their base and detail layers, fuse the base layers and the detail layers, and finally combine the fused base layer and detail layer into the fused image. The method does not depend directly on the focus characteristics of the source images; instead, the focused regions are determined from the saliency of the base and detail layers, which makes the method robust to noise.
(7) The multi-focus image fusion method based on cartoon-texture decomposition (Cartoon-Texture Decomposition, CTD). Its main process is to decompose each multi-focus source image into a cartoon component and a texture component, fuse the cartoon components and the texture components separately, and combine the fused components into the fused image. Its fusion rules are designed around the focus characteristics of the cartoon and texture components rather than those of the source images themselves, which makes the method robust to noise and to scratch damage.
(8) The multi-focus image fusion method based on guided filtering (Guided Filter Fusion, GFF). Its main process is to use a guided image filter to decompose each image into a base layer containing the large-scale intensity variations and a detail layer containing the small-scale details, to construct fusion weight maps from the saliency and spatial consistency of the base and detail layers, to fuse the base layers and detail layers of the source images with these weights, and finally to combine the fused base layer and detail layer into the final fused image. The method achieves good fusion results but is not robust to noise.
The eight methods above are the more common multi-focus image fusion methods, but each has drawbacks. The wavelet transform (DWT) cannot fully exploit the geometric characteristics of the image data themselves and cannot represent the image optimally or most "sparsely", which easily causes offsets and information loss in the fused image. The non-subsampled contourlet transform (NSCT) method has a complex decomposition, runs slowly, and its decomposition coefficients occupy a large amount of storage. The principal component analysis (PCA) method tends to reduce the contrast of the fused image and degrades fusion quality. Convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), and guided filtering (GFF) are new methods proposed in recent years that all achieve good fusion results; among them, guided filtering (GFF) performs edge-preserving, shift-invariant operations based on a local linear model with high computational efficiency, and within an iterative framework it can restore large-scale edges while removing the small details near the edges. The first four common fusion methods all have their own disadvantages and struggle to balance speed against fusion quality, which limits their application and popularization. The eighth method is the fusion algorithm with the best current performance, but the guided filter is not applied to the source images directly, so part of the source-image information is easily lost, and its average-weight fusion also degrades fusion performance to some extent.
In conclusion problem of the existing technology is:
In the prior art, (1) traditional Space domain mainly uses region partitioning method to carry out, region division size
It is excessive to will lead to exterior domain in focus and be located at the same area, cause fused image quality to decline;Region division is undersized, son
Provincial characteristics cannot sufficiently reflect the provincial characteristics, be easy to cause the judgement inaccuracy of focal zone pixel and generate and falsely drop, make
Consistency is poor between obtaining adjacent area, obvious detail differences occurs in intersection, generates " blocking artifact ".(2) traditional based on more rulers
It spends in the multi-focus image fusion method decomposed, is always handled whole picture multi-focus source images as single entirety, detailed information
It extracts imperfect, the detailed information such as source images Edge texture cannot be preferably indicated in blending image, affect blending image pair
The integrality of source images potential information description, and then influence fused image quality.
Summary of the invention
In view of the problems existing in the prior art, the present invention provides a multi-focus image fusion method and system based on FGF that can effectively eliminate "blocking artifacts", extend the depth of field of an optical imaging system, and significantly improve the subjective and objective quality of the fused image. It overcomes problems present in multi-focus image fusion such as inaccurate determination of the focused regions, failure to extract the edge and texture information of the source images effectively, incomplete characterization of fine details in the fused image, loss of detail, "blocking artifacts", and loss of contrast.
The method comprises the following steps: (1) smoothing the source images with a mean filter and decomposing each source image into a base layer and a detail layer; (2) filtering the source images successively with a Laplacian filter and a Gaussian low-pass filter to obtain the saliency map of each source image; (3) obtaining the weight map of each source image by comparing the pixel values of the saliency maps; (4) taking each source image as the guidance image and applying FGF to decompose and optimize its weight map, obtaining an optimized base-layer weight map and detail-layer weight map; (5) fusing the corresponding pixels of the base layers and of the detail layers according to the fusion rule, using the optimized base-layer and detail-layer weight maps; (6) combining the fused base layer and detail layer to obtain the fused image.
The invention is realized as follows: the source images are first decomposed into base layers and detail layers with a mean filter; saliency detection is then performed on the source images with Laplacian filtering and Gaussian low-pass filtering, yielding the saliency map of each source image; the weight map of each source image is then obtained by comparing the pixel values of the saliency maps; with each source image as the guidance image, FGF is applied to decompose and optimize its weight map, yielding optimized base-layer and detail-layer weight maps; based on these decision matrices, the corresponding pixels of the base layers and of the detail layers are fused according to the fusion rule; finally, the fused base layer and detail layer are combined to obtain the fused image.
Further, the multi-focus image fusion method based on FGF fuses the registered multi-focus images I_1 and I_2, where I_1 and I_2 are gray-scale images with I_1, I_2 ∈ R^{M×N}, R^{M×N} denoting the space of images of size M × N, and M and N positive integers. It specifically includes:
(1) Applying the mean (averaging) filter AF to the multi-focus images I_1 and I_2 to smooth them and remove the small-scale structures of the source images, obtaining the source-image base layers (B_1, B_2) and detail layers (D_1, D_2), where (B_1, B_2) = AF(I_1, I_2) and (D_1, D_2) = (I_1, I_2) − (B_1, B_2).
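A minimal Python/OpenCV sketch of step (1) follows; the 31 × 31 averaging window is an assumed parameter, since the filter size is not fixed above.

```python
import cv2
import numpy as np

def two_scale_decompose(img, ksize=31):
    """Split a gray-scale source image into a base layer (mean-filtered) and a detail layer."""
    src = img.astype(np.float64)
    base = cv2.blur(src, (ksize, ksize))   # AF: averaging (mean) filter
    detail = src - base                    # D = I - B
    return base, detail

# With the registered source images I1 and I2 loaded as gray-scale arrays:
# B1, D1 = two_scale_decompose(I1)
# B2, D2 = two_scale_decompose(I2)
```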
(2) Filtering the source images with the Laplacian filter LF to obtain the high-pass images H_1 and H_2, and low-pass filtering H_1 and H_2 with the Gaussian low-pass filter GLF to obtain the source-image saliency maps S_1 and S_2, where (H_1, H_2) = LF(I_1, I_2) and (S_1, S_2) = GLF(H_1, H_2).
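Step (2) can be sketched as follows; taking the absolute value of the Laplacian response before the Gaussian smoothing, the 3 × 3 Laplacian aperture, and the Gaussian parameters (11 × 11 window, σ = 5) are assumptions made for the example.

```python
import cv2
import numpy as np

def saliency_map(img, lap_ksize=3, g_ksize=11, sigma=5):
    """LF then GLF: Laplacian high-pass response smoothed by a Gaussian low-pass filter."""
    high = cv2.Laplacian(img.astype(np.float64), cv2.CV_64F, ksize=lap_ksize)  # H = LF(I)
    return cv2.GaussianBlur(np.abs(high), (g_ksize, g_ksize), sigma)           # S = GLF(|H|)

# S1 = saliency_map(I1); S2 = saliency_map(I2)
```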
(3) Constructing the weight matrices P_1 and P_2 of the source images by comparing, pixel by pixel, the values S_1(i, j) and S_2(i, j) of the corresponding saliency maps, where:
S_1(i, j) is the pixel (i, j) of the saliency map of source image I_1;
S_2(i, j) is the pixel (i, j) of the saliency map of source image I_2;
P_1(i, j) is the element (i, j) of the weight matrix of source image I_1;
P_2(i, j) is the element (i, j) of the weight matrix of source image I_2;
i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N;
S(i, j) is the element in row i, column j of a saliency map S.
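Step (3) compares the saliency maps pixel by pixel. In the sketch below, the concrete rule that the source with the larger saliency receives weight 1 (ties going to I_1) is an assumption about the exact form of the comparison.

```python
import numpy as np

def weight_maps(S1, S2):
    """Binary weight matrices: the source whose saliency is larger at (i, j) gets weight 1."""
    P1 = (S1 >= S2).astype(np.float64)   # assumed tie-break in favour of I1
    P2 = 1.0 - P1
    return P1, P2
```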
(4) Taking the source images I_1 and I_2 as guidance images and applying FGF to the weight matrices P_1 and P_2 for decomposition and optimization, obtaining the optimized weight matrices W_1^B, W_2^B, W_1^D, and W_2^D, where (W_1^B, W_1^D) = FGF(P_1, I_1) and (W_2^B, W_2^D) = FGF(P_2, I_2).
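Step (4) refines each weight map with the fast guided filter, using the corresponding source image as the guidance image. The sketch below follows the fast guided filter of He and Sun (box-filter statistics on a subsampled grid, with the linear coefficients upsampled back to full resolution); the subsampling factor s = 4 and the parameters r = 45, ε = 0.3 for the base-layer weights and r = 7, ε = 1e-6 for the detail-layer weights are assumptions borrowed from the GFF literature, not values fixed by the invention.

```python
import cv2
import numpy as np

def fast_guided_filter(guide, src, r, eps, s=4):
    """Fast guided filter: edge-preserving smoothing of `src` steered by `guide`."""
    guide = guide.astype(np.float32)
    src = src.astype(np.float32)
    h, w = guide.shape
    small = (w // s, h // s)
    g = cv2.resize(guide, small, interpolation=cv2.INTER_NEAREST)
    p = cv2.resize(src, small, interpolation=cv2.INTER_NEAREST)
    rs = max(r // s, 1)
    k = (2 * rs + 1, 2 * rs + 1)               # box-filter window on the subsampled grid
    mean_g = cv2.blur(g, k)
    mean_p = cv2.blur(p, k)
    cov_gp = cv2.blur(g * p, k) - mean_g * mean_p
    var_g = cv2.blur(g * g, k) - mean_g * mean_g
    a = cov_gp / (var_g + eps)                 # local linear model q = a * guide + b
    b = mean_p - a * mean_g
    a = cv2.resize(cv2.blur(a, k), (w, h), interpolation=cv2.INTER_LINEAR)
    b = cv2.resize(cv2.blur(b, k), (w, h), interpolation=cv2.INTER_LINEAR)
    return a * guide + b

def optimize_weights(P, I):
    """FGF(P, I): a smooth weight map for the base layer and a sharper one for the detail layer."""
    guide = I.astype(np.float32) / 255.0
    W_B = fast_guided_filter(guide, P, r=45, eps=0.3)    # assumed GFF-style parameters
    W_D = fast_guided_filter(guide, P, r=7, eps=1e-6)
    return np.clip(W_B, 0, 1), np.clip(W_D, 0, 1)
```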
(5) Based on the source-image base layers (B_1, B_2) and detail layers (D_1, D_2), constructing the base layer F_B ∈ R^{M×N} and the detail layer F_D ∈ R^{M×N} of the fused image from the optimized weight matrices W_1^B, W_2^B, W_1^D, and W_2^D, where F_B = W_1^B B_1 + W_2^B B_2 and F_D = W_1^D D_1 + W_2^D D_2.
(6) Constructing the fused image F ∈ R^{M×N} to obtain the fused gray-scale image, where F = F_B + F_D.
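Steps (5) and (6) reduce to pixel-wise weighted sums. A minimal sketch follows; the optional per-pixel normalization of the two weight maps is an added safeguard assumed for the example, not a requirement of the formulas above.

```python
import numpy as np

def fuse_layers(B1, B2, D1, D2, W1_B, W2_B, W1_D, W2_D, normalize=True):
    if normalize:                                  # optional: force W1 + W2 = 1 at every pixel
        sb = np.maximum(W1_B + W2_B, 1e-12)
        sd = np.maximum(W1_D + W2_D, 1e-12)
        W1_B, W2_B = W1_B / sb, W2_B / sb
        W1_D, W2_D = W1_D / sd, W2_D / sd
    F_B = W1_B * B1 + W2_B * B2                    # fused base layer
    F_D = W1_D * D1 + W2_D * D2                    # fused detail layer
    return np.clip(F_B + F_D, 0, 255)              # F = F_B + F_D
```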
Further, erosion and dilation operations are applied to the feature matrices constructed in step (4), and the processed feature matrices are used to construct the fused image.
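This refinement can be sketched as a morphological opening (erosion followed by dilation) of a weight map; the 5 × 5 rectangular structuring element is an assumption made for the illustration.

```python
import cv2
import numpy as np

def refine_weight_map(W, ksize=5):
    """Morphological opening to remove small misclassified regions from a weight map."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    eroded = cv2.erode(W.astype(np.float32), kernel)
    return cv2.dilate(eroded, kernel)
```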
Another object of the present invention is to provide a multi-focus image fusion system based on FGF.
Another object of the present invention is to provide a smart-city multi-focus image fusion system using the above multi-focus image fusion method based on FGF.
Another object of the present invention is to provide a medical-imaging multi-focus image fusion system using the above multi-focus image fusion method based on FGF.
Another object of the present invention is to provide a security-monitoring multi-focus image fusion system using the above multi-focus image fusion method based on FGF.
The advantages and positive effects of the present invention are as follows:
(1) The invention first decomposes the source images into base layers and detail layers with a mean filter; it then performs saliency detection on the source images with Laplacian high-pass filtering and Gaussian low-pass filtering, obtaining the saliency map of each source image; the weight map of each source image is then obtained by comparing the pixel values of the saliency maps; with each source image as the guidance image, FGF is applied to decompose and optimize its weight map, yielding optimized base-layer and detail-layer weight maps; the base layers and detail layers of the source images are fused with these weight maps, and the fused base layer and detail layer are then combined into the fused image of the source images. This two-stage fusion of the source images improves the accuracy with which the focus characteristics of the source images are judged, favours the extraction of targets in the sharp regions, transfers detail information such as edges and textures from the source images more effectively, and improves the subjective and objective quality of the fused image.
(2) The image fusion framework of the invention is flexible and easy to implement, and can be used for other kinds of image fusion tasks.
(3) Because the fusion algorithm smooths the source images with a mean filter, the influence of noise in the source images on fusion quality is effectively suppressed.
The image fusion framework of the invention is flexible, judges the focus characteristics of the source images with high accuracy, extracts the target details of the focused regions precisely, and represents the image detail features clearly, while effectively eliminating "blocking artifacts" and improving the subjective and objective quality of the fused image.
Brief description of the drawings
Fig. 1 is a flowchart of the multi-focus image fusion method based on FGF provided by an embodiment of the present invention.
Fig. 2 shows the source images 'Disk' to be fused, provided by embodiment 1 of the present invention.
Fig. 3 shows the fusion results of nine image fusion methods, namely the Laplacian pyramid (LAP), the wavelet transform (DWT), the non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), guided filtering (GFF), and the present invention (Proposed), applied to the multi-focus images 'Disk' of Fig. 2 (a) and (b), as provided by an embodiment of the present invention.
Fig. 4 shows the source images 'Book' to be fused, provided by embodiment 2 of the present invention.
Fig. 5 shows the fusion results of the nine image fusion methods, namely the Laplacian pyramid (LAP), the wavelet transform (DWT), the non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), guided filtering (GFF), and the present invention (Proposed), applied to the multi-focus images 'Book' of Fig. 4 (a) and (b).
Specific embodiment
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the invention is further described below in conjunction with embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
In the prior art of multi-focus image fusion, fusion algorithms determine the focused regions of the source images inaccurately, detail extraction is incomplete, and detail information such as the edges and textures of the source images is not well represented in the fused image, so fusion results are poor.
The application principle of the invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the multi-focus image fusion method based on FGF provided by an embodiment of the present invention comprises:
S101: decomposing the source images into base layers and detail layers with a mean filter.
S102: performing saliency detection on the source images with Laplacian high-pass filtering and Gaussian low-pass filtering to obtain the saliency map of each source image, and obtaining the weight map of each source image by comparing the pixel values of the corresponding saliency maps.
S103: taking each source image as the guidance image and applying FGF to decompose and optimize its weight map, obtaining optimized base-layer and detail-layer weight maps, and fusing the base layers and detail layers of the source images with these weight maps.
S104: finally combining the fused base layer and detail layer to obtain the fused image.
The invention is further described below with reference to the detailed process.
The detailed process of the multi-focus image fusion method based on FGF provided by an embodiment of the present invention includes the following:
Applying the mean (averaging) filter AF to the multi-focus images I_1 and I_2 to smooth them and remove the small-scale structures of the source images, obtaining the source-image base layers (B_1, B_2) and detail layers (D_1, D_2), where (B_1, B_2) = AF(I_1, I_2) and (D_1, D_2) = (I_1, I_2) − (B_1, B_2).
Filtering the source images with the Laplacian filter LF to obtain the high-pass images H_1 and H_2, and low-pass filtering H_1 and H_2 with the Gaussian low-pass filter GLF to obtain the source-image saliency maps S_1 and S_2, where (H_1, H_2) = LF(I_1, I_2) and (S_1, S_2) = GLF(H_1, H_2). Then constructing the weight matrices P_1 and P_2 of the source images by comparing, pixel by pixel, the values S_1(i, j) and S_2(i, j) of the corresponding saliency maps, where:
S_1(i, j) is the pixel (i, j) of the saliency map of source image I_1;
S_2(i, j) is the pixel (i, j) of the saliency map of source image I_2;
P_1(i, j) is the element (i, j) of the weight matrix of source image I_1;
P_2(i, j) is the element (i, j) of the weight matrix of source image I_2;
i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N;
S(i, j) is the element in row i, column j of a saliency map S.
Taking the source images I_1 and I_2 as guidance images and applying FGF to the weight matrices P_1 and P_2 for decomposition and optimization, obtaining the optimized weight matrices W_1^B, W_2^B, W_1^D, and W_2^D, where (W_1^B, W_1^D) = FGF(P_1, I_1) and (W_2^B, W_2^D) = FGF(P_2, I_2).
Based on the source-image base layers (B_1, B_2) and detail layers (D_1, D_2), constructing the base layer F_B ∈ R^{M×N} and the detail layer F_D ∈ R^{M×N} of the fused image from the optimized weight matrices W_1^B, W_2^B, W_1^D, and W_2^D, where F_B = W_1^B B_1 + W_2^B B_2 and F_D = W_1^D D_1 + W_2^D D_2.
Constructing the fused image F ∈ R^{M×N} to obtain the fused gray-scale image, where F = F_B + F_D.
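For reference, the whole process can be composed compactly as below. This sketch is illustrative only: it substitutes the plain guidedFilter of the opencv-contrib package (cv2.ximgproc) for a dedicated fast-guided-filter implementation, and the filter window, radii, regularization values, and file names in the usage line are assumed parameters rather than values fixed by the invention.

```python
import cv2
import numpy as np

def fgf_fuse(I1, I2, blur_ksize=31, r_b=45, eps_b=0.3, r_d=7, eps_d=1e-6):
    """Illustrative end-to-end multi-focus fusion of two registered gray-scale images."""
    I1 = I1.astype(np.float32)
    I2 = I2.astype(np.float32)
    # (1) two-scale decomposition with a mean filter
    B1, B2 = cv2.blur(I1, (blur_ksize, blur_ksize)), cv2.blur(I2, (blur_ksize, blur_ksize))
    D1, D2 = I1 - B1, I2 - B2
    # (2) saliency maps: Laplacian high-pass followed by Gaussian low-pass
    S1 = cv2.GaussianBlur(np.abs(cv2.Laplacian(I1, cv2.CV_32F)), (11, 11), 5)
    S2 = cv2.GaussianBlur(np.abs(cv2.Laplacian(I2, cv2.CV_32F)), (11, 11), 5)
    # (3) binary weight maps from the pixel-wise saliency comparison
    P1 = (S1 >= S2).astype(np.float32)
    P2 = 1.0 - P1
    # (4) weight-map optimization by guided filtering with the source image as guide
    g1, g2 = I1 / 255.0, I2 / 255.0
    gf = cv2.ximgproc.guidedFilter             # requires opencv-contrib-python
    W1_B, W2_B = gf(g1, P1, r_b, eps_b), gf(g2, P2, r_b, eps_b)
    W1_D, W2_D = gf(g1, P1, r_d, eps_d), gf(g2, P2, r_d, eps_d)
    # (5)-(6) weighted fusion of the layers and reconstruction F = F_B + F_D
    F_B = W1_B * B1 + W2_B * B2
    F_D = W1_D * D1 + W2_D * D2
    return np.clip(F_B + F_D, 0, 255).astype(np.uint8)

# Usage (hypothetical file names):
# F = fgf_fuse(cv2.imread("disk_a.png", 0), cv2.imread("disk_b.png", 0))
```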
The invention is further described below with reference to specific embodiments.
Fig. 2 shows the source images 'Disk' to be fused, provided by embodiment 1 of the present invention.
Embodiment 1:
Following the solution of the present invention, the method fuses the two source images shown in Fig. 2 (a) and (b); the result is shown in the panel labeled Proposed in Fig. 3. For comparison, the eight image fusion methods Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), and guided filtering (GFF) are applied to the same two source images, and the quality of the fused images produced by the different fusion methods is evaluated; the computed results are given in Table 1.
Table 1: Objective quality evaluation of the fused multi-focus images 'Disk'.
Embodiment 2:
Following the solution of the present invention, the method fuses the two source images shown in Fig. 4 (a) and (b); the result is shown in the panel labeled Proposed in Fig. 5.
For comparison, the eight image fusion methods Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), and guided filtering (GFF) are applied to the same two source images, and the quality of the fused images of the different fusion methods in Fig. 5 is evaluated; the computed results are given in Table 2.
Table 2: Objective quality evaluation of the fused multi-focus images 'Book'.
In Tables 1 and 2, Method denotes the fusion method; the comparison methods are the eight listed above: Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), and guided filtering (GFF). Running Time denotes the running time in seconds. MI denotes mutual information, an objective index of fused-image quality based on mutual information. Q^{AB/F} denotes the total amount of edge information transferred from the source images into the fused image.
As can be seen from Fig. 3 and Fig. 5, the transform-domain methods, including the Laplacian pyramid (LAP), the wavelet transform (DWT), and the non-subsampled contourlet transform (NSCT), all produce fused images suffering from artifacts, blur, and poor contrast. Among the spatial-domain methods, the fused image of the principal component analysis (PCA) method has the worst contrast, the fused image of the spatial frequency (SF) method shows the "blocking artifact" phenomenon, and the fusion quality of convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), and guided filtering (GFF) is relatively good but still contains a few blurred areas. The subjective visual quality of the fused images produced by the method of the invention for the multi-focus images 'Disk' (Fig. 3) and 'Book' (Fig. 5) is clearly better than that of the other fusion methods.
As can be seen from the fused images, the ability of the method of the invention to extract the edges and textures of the objects in the focused regions of the source images is clearly better than that of the other methods: the target information of the focused regions of the source images is transferred well into the fused image, and detail information such as the edges and textures of the source images is preserved. The target details of the focused regions are captured effectively and fusion quality is improved. The method of the invention therefore has good subjective quality.
As can be seen from Tables 1 and 2, the objective image-quality index MI of the fused images of the method of the invention is on average 1.5 higher than the corresponding index of the fused images of the other methods, and the objective index Q^{AB/F} is 0.04 higher than that of the other methods, which shows that the fused images obtained by this method also have good objective quality.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.