Disclosure of Invention
In view of the above problems in the background art, a method for fusing multiple exposure images without ghosting in a dynamic scene is provided. The method selects a reference image, calculates the relative exposure weights of the exposure images from the gradient direction information of the reference image, performs dynamic pixel correction on the weight distribution map to eliminate abnormal dynamic pixels and ghost pixels, and smooths the image with a Laplacian image pyramid. The method thus effectively meets the dynamic fusion requirements of a variety of scenes, avoids the ghosting that moving objects can introduce during exposure fusion, and achieves efficient multi-exposure fusion in dynamic scenes.
The invention discloses a method for fusing ghost-removed multi-exposure images in a dynamic scene, which comprises the following steps:
s1, acquiring a plurality of exposure images under a dynamic scene;
s2 inputting the exposure images into an exposure sequence in the order of increasing exposure;
s3 selecting one exposure image in the input exposure sequence as a reference image;
s4, calculating relative exposure weights from the gradient direction information of pixels at the same positions in the reference image and the other images of the input exposure sequence to obtain a weight distribution map, and performing threshold processing to eliminate improperly exposed static areas;
s5, performing dynamic pixel correction on the weight distribution graph after threshold processing, selecting dynamic details in the reference image and reserving static details in the whole input exposure sequence;
s6, eliminating abnormal dynamic pixels which do not meet the direct proportion relation between the pixel brightness and the exposure degree in the dynamic details and ghost pixels with suddenly changed pixel brightness in two continuous exposure images to obtain a screened weight distribution map;
s7, smoothing the filtered weight distribution graph by using a multi-resolution fusion tool.
The invention provides a high-efficiency multiple exposure fusion algorithm aiming at the problem of ghost elimination in dynamic scene multiple exposure image sequence fusion.
The exposure degree refers to how strongly an image is exposed, and may be classified as overexposure, normal exposure, or underexposure. The exposure degree is also called the exposure value, which represents all camera aperture-shutter combinations that give the same exposure.
The exposure image refers to an automatic exposure image of a photographing device under normal illumination and scene conditions in the prior art. In order to select one image from a plurality of images as a reference object for combining the plurality of images, a normal exposure image may be used as a reference image, and other images such as an overexposed image and an underexposed image may be used as non-reference images.
The input exposure sequence refers to a series of images obtained by arranging a plurality of exposure images in an order of increasing exposure.
A pixel is the minimum unit of an image represented as a sequence of numbers. An image appears to have continuous gray-scale tones, but if the image is magnified several times, those continuous tones are seen to be composed of many small, similarly colored square points; these points are the minimum units that form the image: pixels. On screen, each such smallest graphic element is usually displayed as a single colored dot. The more pixels an image has, the richer its palette and the more faithfully real colors can be expressed.
The relative exposure weight refers to the weight of the current image relative to the reference image.
The threshold processing is a method for realizing image segmentation in image processing, and common threshold segmentation methods include a simple threshold, an adaptive threshold and the like. The image threshold segmentation is a widely applied segmentation technology, which uses the difference of gray characteristics between a target area to be extracted from an image and a background thereof, regards the image as a combination of two types of areas (the target area and the background area) with different gray levels, and selects a reasonable threshold to determine whether each pixel point in the image belongs to the target area or the background area, thereby generating a corresponding binary image.
The static region refers to a region where the difference in pixel information is not large in a plurality of exposure images in succession throughout the input exposure sequence.
The weight distribution graph can embody the texture complexity weight of each pixel point in the image, and the image edge is generally realized by performing gradient operation on the image.
The multi-resolution fusion is a common image processing method, and the Laplacian pyramid structure provided by the invention is widely applied to multi-scale image fusion.
The smoothing process is needed because threshold processing turns the originally continuous weight distribution into discontinuous weight blocks, so the image must be smoothed with a multi-resolution fusion tool.
Specifically, the selection of the reference image comprises the following steps:
manually selecting, from the input exposure sequence, the image with the most moderate exposure and the image whose exposure is closest to it with higher brightness as candidate images;
calculating the relative exposure weight of the two candidate images to obtain a weight distribution map;
calculating the moderate exposure rate and the average gray difference of the two candidate images;
where I_r is the grayscale image obtained by direct fusion, Ī_r is the average value of I_r, R_r is a binary image of the direct fusion result, n is the number of pixels in the input image, Ī is the mean gray value of the entire exposure sequence, R_wep is the moderate exposure rate, and I_ad is the average gray difference;
if the average of the moderate exposure rates of the two candidate images is less than 0.9, the candidate image with the smaller average gray difference is selected as the reference image; otherwise, the candidate image with the higher moderate exposure rate is selected as the reference image.
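The selection rule above can be sketched as follows. The moderate-exposure bounds (0.1, 0.9) and the helper names are illustrative assumptions, not the patent's literal definitions; the patent's own formulas for R_wep and I_ad are only approximated here.

```python
import numpy as np

def moderate_exposure_rate(gray, low=0.1, high=0.9):
    """R_wep: fraction of pixels whose normalized gray value is moderately exposed."""
    binary = (gray > low) & (gray < high)   # R_r: binary map of the fusion result
    return binary.mean()                    # (# moderately exposed pixels) / n

def average_gray_difference(gray, sequence_mean):
    """I_ad: |mean gray of the direct-fusion result - mean gray of the sequence|."""
    return abs(gray.mean() - sequence_mean)

def pick_reference(fused_a, fused_b, sequence_mean):
    """Return 0 or 1, the index of the better candidate, using the 0.9 rule."""
    r_a = moderate_exposure_rate(fused_a)
    r_b = moderate_exposure_rate(fused_b)
    if (r_a + r_b) / 2 < 0.9:
        # Poorly exposed on average: smaller average gray difference wins.
        i_a = average_gray_difference(fused_a, sequence_mean)
        i_b = average_gray_difference(fused_b, sequence_mean)
        return 0 if i_a <= i_b else 1
    # Otherwise the higher moderate exposure rate wins.
    return 0 if r_a >= r_b else 1
```

Here `fused_a` and `fused_b` stand for the direct-fusion grayscale results obtained from the two candidates' weight distribution maps.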
Further, the step of calculating relative exposure weights from the gradient direction information of pixels at the same positions in the reference image and the other images of the input exposure sequence to obtain a weight distribution map, and performing threshold processing to eliminate improperly exposed static areas, includes:
gradient information of the image is acquired with a gradient template based on a 5 × 5 Gaussian kernel:
where I_i(x, y) is the normalized gray value of the pixel at (x, y) in the i-th exposure image of the input exposure sequence, θ_i(x, y) is the gradient direction of the pixel at (x, y) in the i-th exposure image, Gx_i(x, y) and Gy_i(x, y) are the gradients of the i-th exposure image in the horizontal and vertical directions, and g_x and g_y are the Gaussian-kernel gradient templates in the horizontal and vertical directions, respectively;
after calculating the gradient direction θ_r of the reference image, the gradient direction angle d_i(x, y) between the pixel at (x, y) in the i-th image and the pixel at the same position in the reference image is calculated by the following formula:
where (2l + 1) × (2l + 1) is a mean filtering template of the same size, used to enhance the robustness of d_i(x, y) and improve resistance to interference factors such as noise; l is taken as 4;
from the gradient direction angle, the relative exposure weight can be derived by
Where σ is the standard deviation, and σ is taken to be 0.2.
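A minimal sketch of this weight computation. It substitutes np.gradient for the patent's 5 × 5 Gaussian-kernel gradient templates and omits the (2l + 1) × (2l + 1) mean filtering of d_i(x, y); both simplifications are assumptions made to keep the example dependency-free.

```python
import numpy as np

def gradient_direction(img):
    """Per-pixel gradient direction; a tiny constant avoids a zero denominator."""
    gy, gx = np.gradient(img.astype(float))      # vertical and horizontal gradients
    return np.arctan2(gx, gy + 1e-25)

def relative_exposure_weight(img, ref, sigma=0.2):
    """w = exp(-d^2 / (2*sigma^2)), d = gradient-direction angle vs. the reference."""
    d = gradient_direction(img) - gradient_direction(ref)
    d = np.abs(np.arctan2(np.sin(d), np.cos(d)))  # wrap the angle into [0, pi]
    return np.exp(-d ** 2 / (2 * sigma ** 2))
```

Pixels whose gradient direction agrees with the reference get weights near 1; large direction differences are suppressed toward 0.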
Further, the step of dynamic pixel modification comprises:
based on the RGB color channels, the average brightness value of the image replaces the exposure degree in the calculation:
where Ī is the average gray value of the exposure image and I(x, y) is the gray value of any pixel in the exposure image;
if a pixel fails to satisfy the direct proportionality between brightness and exposure in any RGB channel, it is judged to be an abnormal dynamic pixel and removed;
if the brightness of a certain pixel in two continuous exposure images in the input exposure sequence suddenly changes, the pixel is judged to be a ghost pixel and eliminated.
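The proportionality test for abnormal dynamic pixels can be illustrated as follows. The tolerance value, the array layout, and the use of a non-decreasing check as the concrete reading of "direct proportion" are assumptions of this sketch.

```python
import numpy as np

def abnormal_dynamic_mask(stack, tol=0.05):
    """Flag pixels that violate the brightness-vs-exposure relation.

    stack: (N, H, W, 3) float array of RGB exposure images; the sequence is
    first ordered by average brightness, the stand-in for exposure degree.
    """
    order = np.argsort([img.mean() for img in stack])  # sort by average gray value
    stack = stack[order]
    diffs = np.diff(stack, axis=0)                     # per-pixel change per exposure step
    # A pixel is abnormal if its brightness decreases (beyond tol) at any
    # step in any RGB channel, breaking the proportionality relation.
    return (diffs < -tol).any(axis=(0, 3))             # True => abnormal dynamic pixel
```

The ghost-pixel test (sudden brightness change between two consecutive exposures) is handled separately by the CVDT threshold described next.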
Further, the ghost pixels are determined by power law curve control:
where CVDT is the minimum gray threshold used to reject ghost pixels, Ī_k is the average color value of one RGB channel of the input exposure image, and Ī_r is the average color value of the reference image in the same RGB channel; c is taken as 1 and γ as 0.5;
if the color difference between a pixel in the reference image and the pixel at the same position in another image exceeds the corresponding CVDT value, the pixel in the reference image is treated as the reference object pixel and receives the full weight, while the weights of the pixels at that position in the other images are set to zero.
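A hedged sketch of the power-law threshold. The functional form CVDT = c · |Ī_k − Ī_r|^γ is inferred from the stated constants (c = 1, γ = 0.5) and the surrounding description, and may differ from the patent's exact equation.

```python
import numpy as np

def cvdt(mean_k, mean_ref, c=1.0, gamma=0.5):
    """Assumed power-law form of the minimum gray threshold for one channel."""
    return c * abs(mean_k - mean_ref) ** gamma

def ghost_mask(img, ref):
    """True where a pixel's color difference from the reference exceeds CVDT."""
    mask = np.zeros(img.shape[:2], dtype=bool)
    for ch in range(3):                                  # test each RGB channel
        t = cvdt(img[..., ch].mean(), ref[..., ch].mean())
        mask |= np.abs(img[..., ch] - ref[..., ch]) > t
    return mask
```

Pixels flagged by the mask would keep the reference image's weight, with the corresponding weights in the other images cleared.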
Further, the step of smoothing the filtered weight distribution map by the multi-resolution fusion tool includes:
carrying out fuzzy filtering on each exposure image in the input exposure sequence by using a Gaussian low-pass filtering template with the size of 5 multiplied by 5;
the 5 × 5 gaussian low-pass filtering template is:
1/2 downsampling the exposure image;
obtaining a Gaussian image pyramid with the size reduced by half in sequence;
respectively up-sampling the images in the Gaussian image pyramid, and subtracting the up-sampled layer-l image from the layer l−1 image of the pyramid to obtain a detail image, which forms layer l−1 of the Laplacian pyramid;
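The pyramid construction above can be sketched as follows. A 2 × 2 box blur stands in for the patent's 5 × 5 Gaussian low-pass template, and nearest-neighbor repetition stands in for its up-sampling; both are assumptions made to keep the example short.

```python
import numpy as np

def downsample(img):
    """Blur with a 2x2 box average and take every second sample (1/2 downsampling)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img, shape):
    """Nearest-neighbor 2x upsampling, cropped to the target shape."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=6):
    """Gaussian pyramid of successively halved images; each Laplacian layer is
    the difference between a Gaussian layer and the upsampled next layer."""
    gauss = [img.astype(float)]
    for _ in range(levels - 1):
        gauss.append(downsample(gauss[-1]))
    lap = [g - upsample(gauss[l + 1], g.shape) for l, g in enumerate(gauss[:-1])]
    lap.append(gauss[-1])          # top layers of both pyramids coincide
    return lap
```

Summing each layer with the upsampled reconstruction of the layer above it recovers the original image exactly, which is the inverse process used after fusion.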
the fusion process algorithm is as follows:
L{R}_l = Σ_{k=1..N} G{W_k}_l × L{I_k}_l
where N is the number of exposure images in the input exposure sequence, L{R}_l is the l-th layer of the Laplacian pyramid of the fusion result, G{W_k}_l is the l-th layer of the Gaussian pyramid of the normalized weight distribution map of the k-th exposure image, and L{I_k}_l is the l-th layer of the Laplacian pyramid of the k-th image of the input exposure sequence. For each exposure image, G{W_k} and L{I_k} are computed separately; the same layers of the two pyramids of each image are multiplied element-wise, and the products are accumulated over all N images to obtain each layer of the fusion result's Laplacian pyramid L{R}. The fusion result is then recovered from L{R} by the inverse process of the Laplacian pyramid construction.
Further, in the fusion process, the detail image represented by the laplacian pyramid is added to the original image by a coefficient of 0.25 times; to achieve a balance between information preservation and artifact removal, a 6-level laplacian pyramid is employed.
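The per-layer blend L{R}_l = Σ_k G{W_k}_l × L{I_k}_l can be sketched directly on precomputed pyramids, each a list of arrays with one entry per level; how the pyramids are built is assumed to come from the previous step, and the list-of-lists layout is an assumption of this example.

```python
import numpy as np

def fuse_pyramids(weight_gauss_pyrs, image_lap_pyrs):
    """weight_gauss_pyrs[k][l]: level l of the k-th weight Gaussian pyramid;
    image_lap_pyrs[k][l]: level l of the k-th image Laplacian pyramid."""
    levels = len(image_lap_pyrs[0])
    return [sum(w[l] * i[l] for w, i in zip(weight_gauss_pyrs, image_lap_pyrs))
            for l in range(levels)]
```

Collapsing the fused pyramid (adding each layer to the upsampled layer above it) yields the final image; the 0.25-weighted detail boost and the 6-level depth mentioned above are parameters applied at that stage.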
Further, when determining ghost pixels by power-law curve control, CVDT values are calculated only between the reference image and, at most, the four exposure images closest to it in exposure level.
Further, the present invention provides a readable storage medium having a control program stored thereon, characterized in that: when being executed by a processor, the control program realizes the method for fusing the de-ghosting multi-exposure images in the dynamic scene.
Further, the present invention provides a computer control system, including a storage, a processor, and a control program stored in the storage and executable by the processor, wherein: when the processor executes the control program, the method for fusing the de-ghosting multi-exposure images in the dynamic scene is realized.
In order that the invention may be more clearly understood, specific embodiments thereof will be described hereinafter with reference to the accompanying drawings.
Detailed Description
Please refer to fig. 1, which is a flowchart illustrating a method for fusing de-ghosting multi-exposure images in a dynamic scene according to an embodiment of the present invention.
Ghost elimination in the multi-exposure fusion of the existing dynamic scene mainly has two problems: first, when a camera shakes during shooting, a plurality of input images may be misaligned, and thus a generated HDR image appears blurred due to ghosting. This problem can be solved by preprocessing with various image registration or alignment methods. Second, object displacement occurs between exposed images of a dynamic scene when there are objects in the scene being photographed that are significantly moving. Whether the HDR image is synthesized and then further tone mapped for display or direct multi-exposure fusion, noticeable ghost traces will be produced in the resulting image. Therefore, the ghost-removing multi-exposure image fusion method in the dynamic scene is researched.
The method for fusing the de-ghosting multi-exposure image in the dynamic scene comprises the following steps:
s1, acquiring a plurality of exposure images under a dynamic scene;
s2 inputting the exposure images into an exposure sequence in the order of increasing exposure;
s3 selecting one exposure image in the input exposure sequence as a reference image;
s4, calculating relative exposure weights from the gradient direction information of pixels at the same positions in the reference image and the other images of the input exposure sequence to obtain a weight distribution map, and performing threshold processing to eliminate improperly exposed static areas;
s5, performing dynamic pixel correction on the weight distribution graph after threshold processing, selecting dynamic details in the reference image and reserving static details in the whole input exposure sequence;
s6, eliminating abnormal dynamic pixels which do not meet the direct proportion relation between the pixel brightness and the exposure degree in the dynamic details and ghost pixels with suddenly changed pixel brightness in two continuous exposure images to obtain a screened weight distribution map;
s7, smoothing the filtered weight distribution graph by using a multi-resolution fusion tool.
The invention mainly solves two technical problems: how to effectively respond to dynamic fusion requirements according to different scenes and give good visual experience; and how to make exposure fusion of dynamic scenes free from ghost image possibly caused by moving objects in various situations.
The invention provides an efficient multiple exposure fusion algorithm. Relative exposure weights are calculated from the gradient direction information of pixels; two indexes, the moderate exposure rate and the average gray difference, are then defined to automatically select a suitable reference image; objects that are not desired in the exposure images are eliminated on the basis of the reference image; and finally a six-level Laplacian image pyramid is used for fusion to obtain a ghost-free fused image.
Image edge processing is generally implemented by performing gradient operations on an image. Since different objects have different texture detail features and therefore have different gradient information, if different object contents are described near two pixels, their gradient directions will be obviously different. According to the relation, the degree of difference and similarity of the pixel contents can be judged according to the size of the included angle in the gradient direction.
In multi-exposure image fusion, the user may manually select the most appropriate reference image; the embodiment of the invention instead provides an automatic selection algorithm. Firstly, the image with the most moderate exposure in the input exposure sequence and the image whose exposure is closest to it with higher brightness are selected as candidate images; since the exposure images are input into the exposure sequence in order of increasing exposure in this embodiment, the image in the middle of the sequence and the image immediately after it may be chosen. The weight distribution maps of the two candidate reference images are then obtained, and the two criteria proposed by the invention, the moderate exposure rate and the average gray difference, are used to select the more appropriate reference image from the two candidates.
Calculating the relative exposure weights of the two candidate images yields their weight distribution maps; the moderate exposure rate and the average gray difference of the two candidate images are then calculated;
where I_r is the grayscale image obtained by direct fusion, Ī_r is the average value of I_r, R_r is a binary image of the direct fusion result, n is the number of pixels in the input image, Ī is the mean gray value of the entire exposure sequence, R_wep is the moderate exposure rate, and I_ad is the average gray difference;
the direct fusion refers to directly performing fusion according to weight distribution without passing through a Laplacian pyramid;
the binary image, i.e. each pixel in the image has only two possible values or gray scale states, and people often represent the binary image by black and white, B & W, monochrome images.
A grayscale image is an image with only one sample color per pixel; such an image is usually displayed as shades of gray from the darkest black to the brightest white, although in theory the samples could represent different shades of any color, or even different colors at different brightnesses. A grayscale image differs from a black-and-white image: in the field of computer imaging, a black-and-white image has only the two colors black and white, whereas a grayscale image has many levels of gray between them. Grayscale images are often obtained by measuring the brightness of each pixel within a single band of the electromagnetic spectrum, such as visible light, and grayscale images for display are typically stored with 8 bits per sampled pixel on a nonlinear scale, giving 256 gray levels (65536 if 16 bits are used).
The moderate exposure rate R_wep reflects, to a certain degree, how well detail is retained, by counting the ratio of the number of pixels in moderately exposed areas to the total number of pixels in the direct fusion result. It is a ratio between 0 and 1: if it is very high, close to 1, essentially no objects in the result image fall outside the moderately exposed area and the information is concentrated there; conversely, if the fusion result has a relatively low moderate exposure rate, say below 0.5 or near 0, the image contains non-negligible objects that are not moderately exposed.
The average gray difference I_ad represents the difference in average gray level between the direct fusion result and the exposure sequence. The smaller the value, the closer the two are, meaning that the various brightness levels of the entire exposure sequence (underexposed, moderate, and overexposed) are retained to some extent.
If the average of the moderate exposure rates of the two candidate reference images is less than 0.9, part of the reference object lies outside the moderately exposed area. In that case the weight distribution map of the candidate reference image with the smaller average gray difference is selected, since a smaller value means the exposure of the result image is closer to the average of the whole sequence, so the information of the reference object outside the moderately exposed area can be better shown. Otherwise, the weight distribution map of the candidate reference image with the larger moderate exposure rate is selected, to retain more of the reference object's moderately exposed detail and texture information; in this case the average gray difference is relatively unimportant and need not be considered.
After the reference image is selected, its gradient direction is calculated according to the following formula. In this embodiment, gradient information of the image is acquired with a gradient template based on a 5 × 5 Gaussian kernel:
where I_i(x, y) is the normalized gray value of the pixel at (x, y) in the i-th exposure image of the input exposure sequence, data obtained directly from the image by normalizing the pixel at that location into the range [0, 1]; θ_i(x, y) is the gradient direction of the pixel at (x, y) in the i-th exposure image; Gx_i(x, y) and Gy_i(x, y) are the gradients of the i-th exposure image in the horizontal and vertical directions; and g_x and g_y are the Gaussian-kernel gradient templates in the horizontal and vertical directions, respectively. To avoid a zero denominator when computing θ_i(x, y), a small constant 10^-25 is added to Gy_i(x, y) in this embodiment.
After the gradient direction θ_r of the reference image is calculated, the gradient direction angle d_i(x, y) between the pixel at (x, y) in the i-th image and the pixel at the same position in the reference image is calculated by the following formula:
where (2l + 1) × (2l + 1) is a mean filtering template of the same size; in this embodiment, to enhance the robustness of d_i(x, y) and improve resistance to interference factors such as noise, l is taken as 4;
from the gradient direction angle, the relative exposure weight can be derived by
Where σ is the standard deviation, and σ is 0.2 in this example.
Calculating relative exposure weights from the gradient direction information of pixels at the same positions in the reference image and the other images of the input exposure sequence yields a weight distribution map, and threshold processing eliminates improperly exposed static areas. However, although some improperly exposed static areas are correctly eliminated, parts of moving objects can be confused with them, so the thresholded weight distribution map must be corrected. In the embodiment of the invention, the dynamic pixel correction algorithm corrects the areas affected by threshold errors in two steps; based on the corrected dynamic exposure sequence fusion result, static and dynamic areas can be successfully distinguished, so that only the dynamic details in the reference image are selected while the static detail information of the entire exposure sequence is retained. The correction algorithm operates on the RGB channels, not on grayscale or luminance images.
There is a relationship in the exposure sequence: the brightness of a pixel at a given position is directly proportional to the exposure of the input image; that is, the higher the exposure of the image, the higher the brightness of the pixel at that position. Otherwise, the content of the pixel can be considered to have changed. For convenience of analysis and calculation, the embodiment of the invention uses the more easily computed image average brightness value in place of the exposure degree:
where Ī is the average gray value and I(x, y) is the gray value of any pixel in the exposure image; both are computed per RGB channel.
If a pixel fails to satisfy the direct proportionality between brightness and exposure in any RGB channel, it is judged to be an abnormal dynamic pixel and removed. The weight distribution map screened by this proportionality test is then smoothed with a 19 × 19 mean filtering template to enhance noise resistance.
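The 19 × 19 mean filtering of the screened weight map can be sketched with a sliding window; the edge-replication padding is an assumption, since the patent does not specify border handling.

```python
import numpy as np

def mean_filter(weight_map, size=19):
    """Smooth a 2-D weight map with a size x size mean filtering template."""
    pad = size // 2
    padded = np.pad(weight_map.astype(float), pad, mode='edge')  # replicate borders
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return windows.mean(axis=(2, 3))                             # average each window
```

The output has the same shape as the input, so the smoothed map can replace the screened weight map directly.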
In a typical multiple exposure sequence, the change in brightness of pixels describing the same object in the moderately exposed regions relative to the change in exposure is not very dramatic, but merely fluctuates up and down over a small range centered on the image exposure. If the color brightness of a certain pixel in the next exposure image is changed greatly, the pixel can be judged as a ghost pixel to be eliminated. The algorithm of the invention controls the minimum gray threshold judged as a ghost pixel by a power law curve:
where CVDT is the minimum gray threshold used to reject ghost pixels, Ī_k is the average color value of one RGB channel of the input exposure image, and Ī_r is the average color value of the reference image in the same RGB channel; in this embodiment, c is taken as 1 and γ as 0.5;
if the color difference between a pixel in the reference image and the pixel at the same position in another image exceeds the corresponding CVDT value, the pixel in the reference image is treated as the reference object pixel and receives the full weight, while the weights of the pixels at that position in the other images are set to zero.
Considering the shape of the power-law curve, if the exposure difference between two images is too large, the computed CVDT minimum gray threshold will be very close to the exposure difference itself, making ideal pixels and ghost pixels difficult to distinguish. Therefore, the embodiment of the invention computes CVDT values only between the reference image and, at most, the four input images closest to it in exposure level; other input images are not considered.
Thresholding can change an otherwise continuous weight distribution into discrete weight blocks, which therefore need to be smoothed by a multi-resolution fusion tool. The embodiment of the invention adopts the Laplacian pyramid of the standard image to participate in the fusion process. The laplacian pyramid can be used to seamlessly fuse images, and the step of smoothing comprises:
carrying out fuzzy filtering on each exposure image in the input exposure sequence by using a Gaussian low-pass filtering template with the size of 5 multiplied by 5;
the 5 × 5 gaussian low-pass filtering template is:
1/2 down-sampling the exposure image;
repeating the fuzzy filtering and down-sampling processes until a Gaussian image pyramid with the size reduced by half in sequence is obtained;
respectively up-sampling the images in the Gaussian image pyramid, and subtracting the up-sampled layer-l image from the layer l−1 image of the pyramid to obtain a detail image, which forms layer l−1 of the Laplacian pyramid. For the same image, the top-level images of the Gaussian and Laplacian pyramids are identical. A Gaussian pyramid is constructed for each normalized weight distribution map controlled by the relative exposure weights, and a Laplacian pyramid is computed from each input image.
The fusion process algorithm is as follows:
L{R}_l = Σ_{k=1..N} G{W_k}_l × L{I_k}_l
where N is the number of exposure images in the input exposure sequence, L{R}_l is the l-th layer of the Laplacian pyramid of the fusion result, G{W_k}_l is the l-th layer of the Gaussian pyramid of the normalized weight distribution map of the k-th exposure image, and L{I_k}_l is the l-th layer of the Laplacian pyramid of the k-th image of the input exposure sequence. For each exposure image, G{W_k} and L{I_k} are computed separately; the same layers of the two pyramids of each image are multiplied element-wise, and the products are accumulated over all N images to obtain each layer of the fusion result's Laplacian pyramid L{R}. The final fusion result is then recovered from L{R} by the inverse process of the Laplacian pyramid construction.
In order to improve the detail enhancement effect, in the fusion process, the detail image represented by the Laplacian pyramid is added with the original image by a coefficient of 0.25 times; if the coefficient is too low, the enhancement effect is weaker; if the coefficient is too high, the image will be distorted and lose the natural feeling.
By changing the number of pyramid layers, it can be found that some information may be lost if the number of pyramid layers is too large, and obvious halo artifacts may be caused by incomplete smoothing if the number of pyramid layers is too small. Through contrast experiments, the embodiment of the invention adopts 6 layers of Laplacian pyramids, and achieves relative balance between information retention and artifact elimination.
Please refer to fig. 2, which is a comparison graph of the dynamic scene image fusion method in the outdoor scene according to the embodiment of the present invention and the result of the prior art;
fig. 2a is a multi-scale image enhancement technique in the prior art, which improves the information retention of bright and dark areas by using a smoothed weight distribution gaussian pyramid. Fig. 2b is a fusion algorithm in the prior art, which provides two indexes of visibility and consistency based on the gradient magnitude and direction, respectively, for determining weight distribution and eliminating moving objects that are not expected to appear in the result. Fig. 2c shows a new object detection algorithm in the prior art, which corrects the weight distribution map to eliminate the ghost. FIG. 2d shows the fusion result of the embodiment of the present invention.
An obvious ghosting phenomenon occurs in fig. 2b. The captured dynamic scene is a public place with heavy traffic, photographed at different times, so moving objects (passers-by) with different positions and shapes appear in different input images; some block properly exposed background information, and some lie in improperly exposed areas. As a result, in the method of fig. 2b, the accumulated gradient-direction differences of some areas are almost the same across different input images and cannot be accurately separated by Gaussian curves, so the method fails: the weight distribution becomes relatively even and ghosting is introduced. The other three results benefit from anchoring on the reference image, whose reference-object detail information is selected in advance for the result image, eliminating most possible ghost interference. It follows that, when ideal pixels appear only occasionally in the dynamic exposure sequence, the results of the invention can fully cope with randomly occurring moving-object disturbances while minimizing the impact on detail retention.
Please refer to fig. 3, which is a comparison diagram of the result of the dynamic scene image fusion method in the indoor scene according to the embodiment of the present invention and the prior art.
Fig. 3a is a fusion algorithm in the prior art, which respectively provides two indexes of visibility and consistency based on gradient magnitude and direction, and is used for determining weight distribution and eliminating moving objects which are not expected to appear in a result. FIG. 3b illustrates a new object detection algorithm in the prior art, which corrects the weight distribution map to eliminate the ghost. FIG. 3c shows the fusion result of the embodiment of the present invention.
It can be seen that the ghosting caused by moving objects in fig. 3a is very severe: for example, the shadows of the doll's face, the dog's ears, and the doll's hands on the sofa, which moved slightly across the sequence, appear blurred in the result, and ghosts of the football appear in the underexposed areas. The detail retention of the static parts is nevertheless good: the texture details of the teddy bear's hat and legs are clearly shown, as is the sofa-side information that exists only in the brightest input image. Fig. 3b eliminates some ghosts, such as the doll-hand shadow and the bright part of the football, but obvious ghosts remain on the dog's ears and the dark part of the football, and the detail retention of the teddy bear's hat and legs, and of the sofa side, is worse than in fig. 3a. The result of the invention retains static detail as well as fig. 3a while eliminating most ghosts, so it meets the preconditions and requirements for fusing dynamic sequences with overlapping moving objects.
The embodiment of the invention designs a ghost-removing multi-exposure image fusion method in a dynamic scene, which solves two key technical problems: how to effectively respond to dynamic fusion requirements according to different scenes and give good visual experience; and how to make the exposure fusion of the dynamic scene avoid ghost image disturbance possibly caused by moving objects in various conditions, no matter whether the original information of the moving objects is in the moderate exposure area or not.
The core of the invention lies in the relative exposure weight algorithm, in how abnormal dynamic pixels and ghost pixels are identified and eliminated, and in how the reference image is selected automatically via the moderate exposure rate and the average gray difference. Compared with the prior art, the invention provides an efficient multi-exposure fusion algorithm: a suitably exposed image in the exposure sequence is first selected as the reference image, relative exposure weights are then calculated from the gradient direction information of pixels, abnormal dynamic pixels and ghost pixels are removed by dynamic pixel correction, and finally a six-level Laplacian image pyramid is used for fusion to obtain a ghost-free fused image. A relative balance between information retention and artifact elimination is achieved; the threshold separation of dynamic and static areas of the exposure images is more accurate and reliable than in the prior art; the concepts of moderate exposure rate and average gray difference make reference-image selection more scientific and efficient than the manual selection of the prior art; and for the multi-resolution fusion tool, the mature Laplacian pyramid fusion technique is adopted, with the six-level pyramid chosen through comparative experiments, improving the detail enhancement effect while avoiding image distortion.
The present invention is not limited to the above-described embodiments, and various modifications and variations of the present invention are included in the scope of the claims and the equivalent technology of the present invention if they do not depart from the spirit and scope of the present invention.