
CN112785534A - Ghost-removing multi-exposure image fusion method in dynamic scene - Google Patents


Info

Publication number
CN112785534A
Authority
CN
China
Prior art keywords
exposure
image
pixel
images
dynamic
Prior art date
Legal status
Granted
Application number
CN202011064721.0A
Other languages
Chinese (zh)
Other versions
CN112785534B (en)
Inventor
罗林欢
梁国开
熊国锟
邓国豪
刘剑
白徐欢
陈小倩
Current Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202011064721.0A
Publication of CN112785534A
Application granted
Publication of CN112785534B
Legal status: Active


Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract



The invention discloses a ghost-removing multi-exposure image fusion method for dynamic scenes. First, a suitable exposure image in the exposure sequence is selected as the reference image; a relative exposure weight is then computed from the gradient direction information of each pixel, and abnormal dynamic pixels and ghost pixels are eliminated through dynamic pixel correction; finally, the images are fused with a six-layer Laplacian image pyramid to obtain a ghost-free fused image. The method can effectively meet the dynamic fusion requirements of various situations and keeps the exposure fusion process free of the ghosting that moving objects may cause, thereby achieving efficient multi-exposure fusion in dynamic scenes.


Description

Ghost-removing multi-exposure image fusion method in dynamic scene
Technical Field
The invention relates to the technical fields of image fusion, multi-exposure fusion, and dynamic-scene image processing, and in particular to a ghost-removing multi-exposure image fusion method for dynamic scenes.
Background
In recent years, outdoor line inspection and defect detection by unmanned aerial vehicles (UAVs) have relied on images captured by the UAV, so recovering as much detail as possible from those images is crucial to the results of intelligent inspection and detection. Because the UAV operates in an outdoor natural environment, and the dynamic range of a real outdoor scene can span roughly nine orders of magnitude, a single low-dynamic-range image captured by an ordinary digital camera under any one exposure setting cannot represent the full dynamic range of the scene; detail in low-brightness or high-brightness regions is lost wherever the brightness varies strongly. To improve the accuracy of intelligent UAV inspection and defect detection and to capture as much outdoor scene detail as possible, the same scene can be photographed multiple times with different exposure parameters, separately capturing detail in each brightness range, and the resulting images, each preserving detail in a different brightness region, can then be fused. This is the multi-exposure fusion technique.
When a UAV photographs the same scene with multiple exposures, it hovers in the air while shooting, so it sometimes captures a dynamic scene, and directly fusing the multi-exposure images of a dynamic scene causes ghosting. The ghosting problem must therefore be considered in research on dynamic-scene multi-exposure image fusion.
Current direct multi-exposure image fusion algorithms fall into two main categories: fusion methods based on the transform domain (transform-domain methods for short) and fusion methods based on the spatial domain (spatial-domain methods for short). Transform-domain methods transform the image sequence into a transform domain, fuse it there, and transform the result back to the image domain. Spatial-domain methods extract the useful information of each region directly in the image domain for fusion.
In a dynamic scene, the objects depicted by the images of the input sequence often differ because of moving objects in the photographed scene; if the images are fused directly with absolute exposure weights, ghosting appears in the fused image. Ghosting arises because the moving objects have unpredictable color and brightness: under ordinary weighted fusion, their pixels still receive some weight and thus affect the final fusion result. The ghosting problem has become a serious limitation of HDR display technology.
At present, algorithms for eliminating ghosting fall roughly into two types. The first is reference-free methods, whose goal is to remove all moving objects completely. Because such methods require a large number of input images to remove ghosting, reference-image-based methods are currently the most popular: one input image is selected as the reference, and the moving objects it contains are retained. However, existing ghost-removal fusion techniques use complex algorithms, and their final fusion results are not ideal and have defects.
Disclosure of Invention
In view of the above problems in the background art, a ghost-free multi-exposure image fusion method for dynamic scenes is provided. The method selects a reference image, calculates the relative exposure weights of the exposure images from the gradient direction information of the reference image, performs dynamic pixel correction on the weight distribution map to eliminate abnormal dynamic pixels and ghost pixels, and smooths the result with a Laplacian image pyramid. It can thus effectively meet the dynamic fusion requirements of various conditions, avoids the ghosting that moving objects may cause during exposure fusion, and achieves efficient multi-exposure fusion in dynamic scenes.
The invention discloses a method for fusing ghost-removed multi-exposure images in a dynamic scene, which comprises the following steps:
s1, acquiring a plurality of exposure images under a dynamic scene;
s2 inputting the exposure images into an exposure sequence in the order of increasing exposure;
s3 selecting one exposure image in the input exposure sequence as a reference image;
s4, calculating relative exposure weights from the gradient direction information of pixels at the same position in the reference image and the other images of the input exposure sequence to obtain a weight distribution map, and applying thresholding to eliminate improperly exposed static areas;
s5, performing dynamic pixel correction on the weight distribution graph after threshold processing, selecting dynamic details in the reference image and reserving static details in the whole input exposure sequence;
s6, eliminating abnormal dynamic pixels which do not meet the direct proportion relation between the pixel brightness and the exposure degree in the dynamic details and ghost pixels with suddenly changed pixel brightness in two continuous exposure images to obtain a screened weight distribution map;
s7, smoothing the filtered weight distribution graph by using a multi-resolution fusion tool.
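As an orientation aid, the S1-S7 pipeline above can be wired together as a small Python sketch. This is an illustrative skeleton, not the patent's implementation: `fuse_multi_exposure` and its uniform placeholder weights stand in for the gradient-based weighting, dynamic pixel correction, and pyramid steps that are detailed later.

```python
import numpy as np

def fuse_multi_exposure(images):
    """Sketch of the S1-S7 pipeline; each step is a stand-in
    for the corresponding operation of the method."""
    # S2: arrange the stack by increasing exposure (mean brightness as proxy)
    seq = sorted(images, key=lambda im: im.mean())
    # S3: pick a reference frame (here simply the most moderate exposure)
    ref = seq[len(seq) // 2]  # ref would steer steps S5/S6
    # S4: per-image relative exposure weights from gradient direction
    weights = [np.ones_like(im, dtype=float) for im in seq]  # placeholder
    # S5/S6: dynamic-pixel correction and ghost rejection would zero out
    # the weights of pixels that disagree with the reference image
    # S7: normalise per pixel and blend (a Laplacian pyramid would go here)
    total = np.sum(weights, axis=0) + 1e-12
    fused = sum(w * im for w, im in zip(weights, seq)) / total
    return fused
```

With the placeholder uniform weights, the sketch reduces to a per-pixel average of the sequence; the later steps replace those weights with the gradient-direction weights of the method.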
The invention provides a high-efficiency multiple exposure fusion algorithm aiming at the problem of ghost elimination in dynamic scene multiple exposure image sequence fusion.
The exposure level refers to the exposure degree of an image, which may be overexposure, normal exposure, or underexposure. The exposure level is also called the exposure value, which represents all combinations of camera aperture and shutter speed that give the same exposure.
The exposure image refers to an image captured with automatic exposure by a photographing device under normal illumination and scene conditions, as in the prior art. To select one image from the plurality of images as the reference object for combining them, a normally exposed image may be used as the reference image, and the other images, such as overexposed and underexposed images, as non-reference images.
The input exposure sequence refers to a series of images obtained by arranging a plurality of exposure images in an order of increasing exposure.
A pixel is the minimum unit of an image represented as a sequence of numbers. An image appears to have continuous tones, but magnified several times, those continuous tones turn out to be composed of many small square points of similar color; these small squares are the minimum units that make up the image: pixels. Such a smallest graphic element usually appears on screen as a single colored dot. The higher the pixel count, the richer the palette, and the more faithfully real color can be expressed.
The relative exposure weight refers to the weight of the current image relative to the reference image.
The threshold processing is a method for realizing image segmentation in image processing, and common threshold segmentation methods include a simple threshold, an adaptive threshold and the like. The image threshold segmentation is a widely applied segmentation technology, which uses the difference of gray characteristics between a target area to be extracted from an image and a background thereof, regards the image as a combination of two types of areas (the target area and the background area) with different gray levels, and selects a reasonable threshold to determine whether each pixel point in the image belongs to the target area or the background area, thereby generating a corresponding binary image.
The static region refers to a region whose pixel information differs little across successive exposure images throughout the input exposure sequence.
The weight distribution map reflects the texture-complexity weight of each pixel in the image; image edges are generally obtained by performing gradient operations on the image.
Multi-resolution fusion is a common image processing method; the Laplacian pyramid structure adopted by the invention is widely applied in multi-scale image fusion.
Smoothing is needed because thresholding turns the originally continuous weight distribution into discontinuous weight blocks, so the weight map must be smoothed with a multi-resolution fusion tool.
Specifically, the selection of the reference image comprises the following steps:
manually selecting, from the input exposure sequence, the image with the most moderate exposure and the image of slightly higher brightness whose exposure is closest to it, as candidate images;
calculating the relative exposure weight of the two candidate images to obtain a weight distribution map;
calculating the moderate exposure rate and the average gray difference of the two candidate images;
$$R_r(x,y)=\begin{cases}1, & T_l \le I_r(x,y) \le T_h\\ 0, & \text{otherwise}\end{cases}$$

$$R_{wep}=\frac{1}{n}\sum_{x,y}R_r(x,y)$$

$$I_{ad}=\left|\bar{I}_r-\bar{I}\right|$$

where $I_r$ is the grayscale image obtained by direct fusion, $\bar{I}_r$ is its average value, $R_r$ is the binary image of the direct fusion result ($T_l$ and $T_h$ denote the lower and upper bounds of the moderately exposed gray range), $n$ is the number of pixels of the input image, $\bar{I}$ is the mean gray value of the entire exposure sequence, $R_{wep}$ is the moderate exposure rate, and $I_{ad}$ is the average gray difference;
if the average value of the moderate exposure rates of the two candidate images is less than 0.9, selecting a candidate image with smaller average gray difference as a reference image; otherwise, selecting one candidate image with higher moderate exposure rate in the two candidate images as a reference image.
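The selection rule above can be sketched in Python as follows. The moderate-exposure band limits `t_low`/`t_high` and the function names are illustrative assumptions, since the text does not give numeric bounds for the moderately exposed gray range.

```python
import numpy as np

def moderate_exposure_rate(fused_gray, t_low=0.05, t_high=0.95):
    """R_wep: fraction of pixels of the direct-fusion result whose gray
    value lies in the moderately exposed band [t_low, t_high].
    The band limits are assumptions, not the patent's values."""
    mask = (fused_gray >= t_low) & (fused_gray <= t_high)
    return mask.mean()

def average_gray_difference(fused_gray, sequence):
    """I_ad: |mean gray of fusion result - mean gray of the sequence|."""
    seq_mean = np.mean([im.mean() for im in sequence])
    return abs(fused_gray.mean() - seq_mean)

def pick_reference(cand_a, cand_b, sequence):
    """Rule from the text: if the mean R_wep of the two candidates is
    below 0.9, prefer the smaller I_ad; otherwise the larger R_wep.
    Returns 0 for the first candidate, 1 for the second."""
    rw = [moderate_exposure_rate(c) for c in (cand_a, cand_b)]
    if (rw[0] + rw[1]) / 2 < 0.9:
        ad = [average_gray_difference(c, sequence) for c in (cand_a, cand_b)]
        return 0 if ad[0] <= ad[1] else 1
    return 0 if rw[0] >= rw[1] else 1
```

The candidates here are the directly fused grayscale results for each candidate reference, matching the definitions of $R_{wep}$ and $I_{ad}$ above.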
Further, the step of calculating relative exposure weights from the gradient direction information of pixels at the same positions in the reference image and the other images of the input exposure sequence to obtain a weight distribution map, and applying thresholding to eliminate improperly exposed static areas, comprises:
gradient information of the image is acquired with 5 × 5 Gaussian-kernel gradient templates:

$$G_{x,i}(x,y)=g_x * I_i(x,y),\qquad G_{y,i}(x,y)=g_y * I_i(x,y)$$

$$\theta_i(x,y)=\arctan\!\left(\frac{G_{x,i}(x,y)}{G_{y,i}(x,y)}\right)$$

where $*$ denotes convolution, $I_i(x,y)$ is the normalized gray value of the pixel at $(x,y)$ in the $i$-th exposure image of the input exposure sequence, $\theta_i(x,y)$ is the gradient direction of that pixel, $G_{x,i}(x,y)$ and $G_{y,i}(x,y)$ are the gradients of the $i$-th exposure image in the horizontal and vertical directions, and $g_x$ and $g_y$ are the Gaussian-kernel gradient templates in the horizontal and vertical directions, respectively;
calculating the gradient direction theta of the reference imagerThen, the gradient angle of the pixel at the same position in the (x, y) th image and the reference image is calculated by the following formula:
Figure RE-GDA0002999090430000046
wherein (2l +1) × (2l +1) is average filtering template with same size for enhancing diThe robustness of (x, y) improves the resistance to interference factors such as noise, and l is 4;
from the gradient-direction angle, the relative exposure weight can be derived as

$$w_i(x,y)=\exp\!\left(-\frac{d_i(x,y)^2}{2\sigma^2}\right)$$

where $\sigma$ is the standard deviation, taken as $\sigma = 0.2$.
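A minimal sketch of this gradient-direction weighting follows, assuming a 3 × 3 Sobel operator in place of the unspecified 5 × 5 Gaussian-kernel templates, and a Gaussian fall-off of the window-averaged angle difference consistent with l = 4 and σ = 0.2. `np.arctan2` sidesteps the zero-denominator issue that the text handles with a small constant.

```python
import numpy as np

def conv2_same(img, k):
    """Minimal 'same' 2-D filtering with zero padding (no SciPy)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# 3x3 Sobel stands in for the patent's 5x5 Gaussian-kernel templates.
GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
GY = GX.T

def gradient_direction(img):
    """theta_i(x, y); arctan2 already handles a zero denominator."""
    gx = conv2_same(img, GX)
    gy = conv2_same(img, GY)
    return np.arctan2(gx, gy)

def relative_exposure_weight(img, ref, l=4, sigma=0.2):
    """Window-averaged gradient-angle difference d_i(x, y) mapped through
    a Gaussian fall-off; the fall-off form is an assumption consistent
    with the sigma = 0.2 given in the text."""
    d = np.abs(gradient_direction(img) - gradient_direction(ref))
    win = np.ones((2 * l + 1, 2 * l + 1)) / (2 * l + 1) ** 2
    d = conv2_same(d, win)
    return np.exp(-d ** 2 / (2 * sigma ** 2))
```

For an image compared against itself the angle difference is zero everywhere, so the weight map is identically 1, which is the expected upper bound.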
Further, the step of dynamic pixel modification comprises:
based on the RGB color channels, the average brightness value of the image replaces the exposure in the computation:

$$\bar{I}=\frac{1}{n}\sum_{x,y}I(x,y)$$

where $\bar{I}$ is the average gray value and $I(x,y)$ is the gray value of any pixel in the exposure image;
as long as the pixel does not meet the direct proportion relation between the brightness and the exposure in any RGB channel, the pixel is judged to be an abnormal dynamic pixel and removed;
if the brightness of a certain pixel in two continuous exposure images in the input exposure sequence suddenly changes, the pixel is judged to be a ghost pixel and eliminated.
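The two rejection rules above can be sketched as a monotonicity test over the exposure stack: in a sequence sorted by increasing exposure, a static pixel's value should not decrease from frame to frame. The strictness of the comparison and the single-channel layout here are assumptions for illustration.

```python
import numpy as np

def exposure_order_violations(sequence):
    """Flag pixels whose brightness does not increase with exposure.
    `sequence` is a list of 2-D arrays sorted by increasing exposure;
    a True entry marks a candidate abnormal dynamic / ghost pixel."""
    stack = np.stack(sequence)          # shape (N, H, W)
    diffs = np.diff(stack, axis=0)      # frame-to-frame brightness change
    return np.any(diffs < 0, axis=0)    # True => breaks the monotonicity
```

In the method proper, this check would be run per RGB channel, and a pixel failing it in any channel would have its weight cleared.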
Further, the ghost pixels are determined by power-law curve control:

$$\mathrm{CVDT}=c\,\bigl|\bar{I}^{\,ch}-\bar{I}_r^{\,ch}\bigr|^{\gamma}$$

where CVDT is the minimum grayscale threshold used to reject ghost pixels, $\bar{I}^{\,ch}$ is the average color value of a given RGB channel of the input exposure image, $\bar{I}_r^{\,ch}$ is the average color value of the reference image in the same RGB channel, $c = 1$, and $\gamma = 0.5$;
if the color difference between a pixel in the reference image and the pixel at the same position in another image exceeds the corresponding CVDT value, the reference-image pixel is taken as the reference-object pixel and receives the full weight, while the weights of the other images' pixels at that position are cleared.
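A small sketch of the power-law threshold, with c = 1 and γ = 0.5 as stated; the exact argument of the power law (the absolute difference of per-channel means) is an assumption consistent with the surrounding definitions.

```python
def cvdt(mean_channel_img, mean_channel_ref, c=1.0, gamma=0.5):
    """Power-law threshold CVDT = c * |mean_img - mean_ref| ** gamma,
    computed per RGB channel from the channel means of the input
    exposure image and the reference image."""
    return c * abs(mean_channel_img - mean_channel_ref) ** gamma

def is_ghost(pixel_ref, pixel_other, mean_channel_img, mean_channel_ref):
    """A pixel pair is treated as ghosting when its colour difference
    exceeds the CVDT for that channel pair."""
    threshold = cvdt(mean_channel_img, mean_channel_ref)
    return abs(pixel_ref - pixel_other) > threshold
```

Note that the square-root power (γ = 0.5) makes the threshold grow quickly for small exposure gaps and flatten for large ones, so nearby exposures are compared more strictly than distant ones.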
Further, the step of smoothing the filtered weight distribution map by the multi-resolution fusion tool includes:
blur-filtering each exposure image in the input exposure sequence with a Gaussian low-pass template of size 5 × 5:

$$g=\frac{1}{256}\begin{bmatrix}1&4&6&4&1\\4&16&24&16&4\\6&24&36&24&6\\4&16&24&16&4\\1&4&6&4&1\end{bmatrix}$$
1/2 downsampling the exposure image;
obtaining a Gaussian image pyramid with the size reduced by half in sequence;
up-sampling each layer of the Gaussian image pyramid, and subtracting the up-sampled layer $l$ from layer $l-1$ of the pyramid to obtain a detail image, which becomes layer $l-1$ of the Laplacian pyramid;
the fusion process is computed as

$$L\{R\}^{l}(x,y)=\sum_{k=1}^{N}G\{\hat{W}\}_{k}^{l}(x,y)\,L\{I\}_{k}^{l}(x,y)$$

where $N$ is the number of exposure images in the input exposure sequence, $L\{R\}^{l}$ is layer $l$ of the Laplacian pyramid of the fusion result, $G\{\hat{W}\}_{k}^{l}$ is layer $l$ of the Gaussian pyramid of the normalized weight distribution map of the $k$-th exposure image, and $L\{I\}_{k}^{l}$ is layer $l$ of the Laplacian pyramid of the $k$-th image in the input exposure sequence;
for each exposure image in the input sequence, $G\{\hat{W}\}_{k}^{l}$ and $L\{I\}_{k}^{l}$ are computed separately; corresponding layers of the two pyramids of the same exposure image are multiplied, the products over all $N$ images of the sequence are accumulated to obtain layer $l$ of the fusion result's Laplacian pyramid, and the fusion result $R$ is recovered by the inverse of the Laplacian pyramid construction.
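The pyramid fusion can be sketched end to end as follows. For brevity this toy version omits the Gaussian blur before downsampling, so its pyramids are not the ones built above, but it shows the per-level weighted sum and the inverse (coarse-to-fine) reconstruction.

```python
import numpy as np

def _up(img, shape):
    """Nearest-neighbour 2x upsample, cropped to a target shape."""
    u = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return u[:shape[0], :shape[1]]

def _gauss_pyr(img, levels):
    """Toy Gaussian pyramid: plain 1/2 downsampling, no pre-blur."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(pyr[-1][::2, ::2])
    return pyr

def _lap_pyr(img, levels):
    """L_l = G_l - upsample(G_{l+1}); coarsest level stores G itself."""
    g = _gauss_pyr(img, levels)
    lap = [g[l] - _up(g[l + 1], g[l].shape) for l in range(levels - 1)]
    return lap + [g[-1]]

def pyramid_fuse(images, weights, levels=6):
    """L{R}^l = sum_k G{W_k}^l * L{I_k}^l, then collapse coarse to fine.
    Weights are normalised per pixel before building their pyramids."""
    total = np.sum(weights, axis=0) + 1e-12
    wn = [w / total for w in weights]
    fused = None
    for im, w in zip(images, wn):
        terms = [a * b for a, b in zip(_gauss_pyr(w, levels),
                                       _lap_pyr(im, levels))]
        fused = terms if fused is None else [f + t
                                             for f, t in zip(fused, terms)]
    # inverse of the Laplacian construction: upsample and add
    out = fused[-1]
    for l in range(levels - 2, -1, -1):
        out = fused[l] + _up(out, fused[l].shape)
    return out
```

With identical inputs and uniform weights the collapse reproduces the input exactly, which is a useful sanity check on any pyramid-fusion implementation.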
Further, in the fusion process, the detail image represented by the Laplacian pyramid is added to the original image with a coefficient of 0.25; to balance information preservation against artifact removal, a 6-level Laplacian pyramid is employed.
Further, when determining ghost pixels by power-law curve control, CVDT values are calculated between the reference image and at most the four exposure images closest to it in exposure level.
Further, the present invention provides a readable storage medium having a control program stored thereon, characterized in that: when being executed by a processor, the control program realizes the method for fusing the de-ghosting multi-exposure images in the dynamic scene.
Further, the present invention provides a computer control system, including a storage, a processor, and a control program stored in the storage and executable by the processor, wherein: when the processor executes the control program, the method for fusing the de-ghosting multi-exposure images in the dynamic scene is realized.
In order that the invention may be more clearly understood, specific embodiments thereof will be described hereinafter with reference to the accompanying drawings.
Drawings
FIG. 1 is a flowchart of a method for fusing de-ghosted multi-exposure images in a dynamic scene according to an embodiment of the present invention;
FIG. 2a, FIG. 2b, FIG. 2c, and FIG. 2d are graphs comparing the result of the dynamic scene image fusion method in the outdoor scene according to the embodiment of the present invention with the prior art;
fig. 3a, 3b, and 3c are graphs comparing the results of the dynamic scene image fusion method in the indoor scene according to the embodiment of the present invention with those of the prior art.
Detailed Description
Please refer to fig. 1, which is a flowchart illustrating a method for fusing de-ghosting multi-exposure images in a dynamic scene according to an embodiment of the present invention.
Ghost elimination in existing dynamic-scene multi-exposure fusion faces two main problems. First, when the camera shakes during shooting, the input images may be misaligned, so the generated HDR image appears blurred by ghosting; this can be solved by preprocessing with various image registration or alignment methods. Second, when there are noticeably moving objects in the photographed scene, object displacement occurs between the exposure images of the dynamic scene. Whether an HDR image is synthesized and then tone-mapped for display, or direct multi-exposure fusion is used, noticeable ghost traces appear in the resulting image. The ghost-removing multi-exposure image fusion method for dynamic scenes is therefore studied.
The method for fusing the de-ghosting multi-exposure image in the dynamic scene comprises the following steps:
s1, acquiring a plurality of exposure images under a dynamic scene;
s2 inputting the exposure images into an exposure sequence in the order of increasing exposure;
s3 selecting one exposure image in the input exposure sequence as a reference image;
s4, calculating relative exposure weights from the gradient direction information of pixels at the same position in the reference image and the other images of the input exposure sequence to obtain a weight distribution map, and applying thresholding to eliminate improperly exposed static areas;
s5, performing dynamic pixel correction on the weight distribution graph after threshold processing, selecting dynamic details in the reference image and reserving static details in the whole input exposure sequence;
s6, eliminating abnormal dynamic pixels which do not meet the direct proportion relation between the pixel brightness and the exposure degree in the dynamic details and ghost pixels with suddenly changed pixel brightness in two continuous exposure images to obtain a screened weight distribution map;
s7, smoothing the filtered weight distribution graph by using a multi-resolution fusion tool.
The invention mainly solves two technical problems: how to respond effectively to dynamic fusion requirements in different scenes while giving a good visual experience; and how to keep the exposure fusion of dynamic scenes free, in all situations, from the ghosting that moving objects may cause.
The invention provides an efficient multi-exposure fusion algorithm in which relative exposure weights are calculated from the gradient direction information of pixels, and two indexes, the moderate exposure rate and the average gray difference, are defined to select a suitable reference image automatically; objects not expected to appear are then eliminated from the exposure images on the basis of the reference image, and a six-layer Laplacian image pyramid is finally used for fusion to obtain a ghost-free fused image.
Image edge processing is generally implemented by performing gradient operations on an image. Different objects have different texture detail features and therefore different gradient information, so if two pixels describe different object content, their gradient directions will differ markedly. By this relation, the degree of difference or similarity of pixel content can be judged from the angle between the gradient directions.
In multi-exposure image fusion, the user may manually select the most appropriate reference image. The embodiment of the invention provides an automatic reference-image selection algorithm. First, the image with the most moderate exposure and the image of slightly higher brightness whose exposure is closest to it are selected from the input exposure sequence as candidate images; because the embodiment arranges the exposure images in increasing order of exposure, the middle image of the sequence and the one after it are preferred. The weight distribution maps of the two candidate reference images are then obtained, and the two criteria proposed by the invention, the moderate exposure rate and the average gray difference, are used to select the more appropriate of the two candidates.
Calculating the relative exposure weights of the two candidate images yields their weight distribution maps; the moderate exposure rate and the average gray difference of the two candidates are then calculated:

$$R_r(x,y)=\begin{cases}1, & T_l \le I_r(x,y) \le T_h\\ 0, & \text{otherwise}\end{cases}$$

$$R_{wep}=\frac{1}{n}\sum_{x,y}R_r(x,y)$$

$$I_{ad}=\left|\bar{I}_r-\bar{I}\right|$$

where $I_r$ is the grayscale image obtained by direct fusion, $\bar{I}_r$ is its average value, $R_r$ is the binary image of the direct fusion result ($T_l$ and $T_h$ denote the lower and upper bounds of the moderately exposed gray range), $n$ is the number of pixels of the input image, $\bar{I}$ is the mean gray value of the entire exposure sequence, $R_{wep}$ is the moderate exposure rate, and $I_{ad}$ is the average gray difference;
the direct fusion refers to directly performing fusion according to weight distribution without passing through a Laplacian pyramid;
the binary image, i.e. each pixel in the image has only two possible values or gray scale states, and people often represent the binary image by black and white, B & W, monochrome images.
A grayscale image has only one sampled color per pixel and is usually displayed as shades of gray from darkest black to brightest white; in theory, the samples could display different shades of any color, or even different colors at different brightnesses. A grayscale image differs from a black-and-white image: in computer imaging a black-and-white image has only the two colors black and white, whereas a grayscale image also has many levels of color depth between black and white. Grayscale images are often measured by the brightness of each pixel within a single electromagnetic band, such as visible light, and grayscale images for display are typically stored with a non-linear scale of 8 bits per sampled pixel, allowing 256 gray levels (65536 if 16 bits are used).
The moderate exposure rate $R_{wep}$ reflects, to a certain degree, how well the image retains detail, by counting the ratio of the number of moderately exposed pixels to the total number of pixels in the direct fusion result. It is a value between 0 and 1: if it is very high, close to 1, essentially no objects in the result image are improperly exposed, and the information is concentrated in the moderately exposed area; conversely, if the fusion result has a relatively low moderate exposure rate, such as below 0.5 or near 0, there are non-negligible improperly exposed objects in the image.
The average gray difference $I_{ad}$ represents the difference in average gray level between the direct fusion result and the exposure sequence. The smaller this value, the closer the two are, meaning the various brightness ranges of the entire exposure sequence (underexposed, moderately exposed, and overexposed) are each retained to some extent.
If the average of the two candidate reference images' moderate exposure rates is less than 0.9, part of the reference object lies outside the moderately exposed area; in that case, the weight distribution map of the candidate with the smaller average gray difference is selected, since a smaller value means the result image's exposure is closer to the average of the whole sequence and better shows the information of the reference object outside the moderately exposed area. Otherwise, the weight distribution map of the candidate with the larger moderate exposure rate is selected, to retain more of the reference object's moderately exposed detail and texture information; in this case the average gray difference is relatively unimportant and need not be considered.
After the reference image is selected, its gradient direction is calculated. In this embodiment, gradient information of the image is acquired with 5 × 5 Gaussian-kernel gradient templates:

$$G_{x,i}(x,y)=g_x * I_i(x,y),\qquad G_{y,i}(x,y)=g_y * I_i(x,y)$$

$$\theta_i(x,y)=\arctan\!\left(\frac{G_{x,i}(x,y)}{G_{y,i}(x,y)}\right)$$

where $*$ denotes convolution and $I_i(x,y)$ is the normalized gray value of the pixel at $(x,y)$ in the $i$-th exposure image of the input exposure sequence, data directly obtainable from the image, i.e., the pixel value at that location normalized to the range $[0,1]$; $\theta_i(x,y)$ is the gradient direction of that pixel; $G_{x,i}(x,y)$ and $G_{y,i}(x,y)$ are the gradients of the $i$-th exposure image in the horizontal and vertical directions; and $g_x$ and $g_y$ are the Gaussian-kernel gradient templates in the horizontal and vertical directions, respectively. To avoid a zero denominator when calculating $\theta_i(x,y)$, in this embodiment a small constant $10^{-25}$ is added after $G_{y,i}(x,y)$.
After calculating the gradient direction $\theta_r$ of the reference image, the gradient-direction angle between the pixel at $(x,y)$ in the $i$-th image and the pixel at the same position in the reference image is computed as

$$d_i(x,y)=\frac{1}{(2l+1)^2}\sum_{u=-l}^{l}\sum_{v=-l}^{l}\bigl|\theta_i(x+u,y+v)-\theta_r(x+u,y+v)\bigr|$$

where $(2l+1)\times(2l+1)$ is an average filtering template of the same size; in this embodiment, to enhance the robustness of $d_i(x,y)$ and improve resistance to interference such as noise, $l = 4$;
From the gradient angle, the relative exposure weight is obtained as

w_i(x, y) = exp( − d_i(x, y)² / (2σ²) )

and the weights are normalized across the sequence,

W̄_i(x, y) = w_i(x, y) / Σ_k w_k(x, y)

where σ is the standard deviation; σ = 0.2 in this embodiment.
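A sketch of the angle-difference and weighting step, assuming the mean-filtered absolute angle difference described above; `mean_filter` and `relative_exposure_weight` are hypothetical helper names.

```python
import numpy as np

def mean_filter(img, l=4):
    """(2l+1) x (2l+1) mean filtering with reflect padding."""
    k = 2 * l + 1
    p = np.pad(img, l, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for i in range(k):
        for j in range(k):
            out += p[i:i + img.shape[0], j:j + img.shape[1]]
    return out / (k * k)

def relative_exposure_weight(theta_r, theta_i, l=4, sigma=0.2):
    """Relative exposure weight from the gradient-angle difference.
    The mean filter over a (2l+1)^2 window strengthens noise robustness."""
    d = mean_filter(np.abs(theta_r - theta_i), l=l)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))
```

Identical gradient directions give weight 1; directions that disagree strongly are driven toward 0 by the Gaussian falloff.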
Relative exposure weights are calculated from the gradient-direction information of pixels at the same position in the reference image and the other images of the input exposure sequence to obtain a weight distribution map, which is then thresholded to eliminate improperly exposed static areas. Although thresholding correctly removes some of these static areas, it can also mistake parts of moving objects, so the thresholded weight distribution map must be corrected. In this embodiment, the dynamic pixel correction algorithm repairs the areas affected by threshold errors in two steps. A dynamic-exposure-sequence fusion result based on this correction algorithm successfully distinguishes static from dynamic areas, so that only the dynamic details of the reference image are selected while the static detail information of the whole exposure sequence is retained. The correction algorithm operates on the RGB channels, not on a grayscale or luminance image.
The exposure sequence obeys the following relationship: the brightness of a pixel at a given position is proportional to the exposure of the input image, i.e., the higher the exposure of an image, the brighter the pixel at that position; if this does not hold, the content of the pixel can be considered to have changed. For ease of analysis and computation, this embodiment substitutes the simpler image average brightness value for the exposure in the computation:

Ī = (1/n) Σ_{(x,y)} I(x, y)

where Ī is the average gray value, n is the number of pixels, and I(x, y) is the gray value of any pixel in the exposure image; both are computed per RGB channel.
Whenever a pixel fails the proportional relationship between brightness and exposure in any RGB channel, it is judged an abnormal dynamic pixel and rejected. The weight distribution map screened by this proportionality test is then smoothed with a 19 × 19 mean filtering template to strengthen its noise resistance.
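A minimal sketch of the proportionality test, assuming the average brightness of each image as the exposure proxy and a per-channel comparison of consecutive images; the strict inequality used to flag a brightness reversal is an illustrative choice.

```python
import numpy as np

def abnormal_dynamic_mask(sequence):
    """Flag pixels whose per-channel brightness ordering disagrees with the
    ordering of the images' average brightness (a proxy for exposure).
    `sequence` is a list of HxWx3 float images sorted by increasing exposure."""
    mask = np.zeros(sequence[0].shape[:2], dtype=bool)
    for prev, nxt in zip(sequence, sequence[1:]):
        for c in range(3):
            # average brightness stands in for exposure, per channel
            if prev[..., c].mean() <= nxt[..., c].mean():
                mask |= prev[..., c] > nxt[..., c]   # brightness dropped: abnormal
            else:
                mask |= prev[..., c] < nxt[..., c]   # brightness rose: abnormal
    return mask
```

A pixel that breaks the ordering in any one channel of any consecutive pair is flagged.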
In a typical multi-exposure sequence, the brightness of pixels describing the same object in moderately exposed regions does not change drastically with exposure; it merely fluctuates within a small range centered on the image exposure. If the color brightness of a pixel changes sharply in the next exposure image, that pixel is judged a ghost pixel and eliminated. The algorithm of the invention controls the minimum gray threshold for judging a ghost pixel with a power-law curve:

CVDT = c · | Ī_i − Ī_r |^γ

where CVDT is the minimum grayscale threshold used to reject ghost pixels, Ī_i is the average color value of an RGB channel of the input exposure image, and Ī_r is the average color value of the reference image in the same RGB channel; in this embodiment, c = 1 and γ = 0.5.
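The power-law threshold can be sketched as follows, assuming the reconstructed form CVDT = c·|Ī_i − Ī_r|^γ; `ghost_mask` is a hypothetical helper illustrating the per-channel test.

```python
import numpy as np

def cvdt(channel_mean_i, channel_mean_r, c=1.0, gamma=0.5):
    """Power-law minimum gray threshold for rejecting ghost pixels
    (assumed form: c * |mean_i - mean_r| ** gamma)."""
    return c * abs(channel_mean_i - channel_mean_r) ** gamma

def ghost_mask(ref, other, c=1.0, gamma=0.5):
    """Per-channel CVDT test on HxWx3 images: True where the color
    difference exceeds the threshold, i.e. where the reference pixel
    keeps the full weight."""
    mask = np.zeros(ref.shape[:2], dtype=bool)
    for ch in range(3):
        t = cvdt(other[..., ch].mean(), ref[..., ch].mean(), c, gamma)
        mask |= np.abs(ref[..., ch] - other[..., ch]) > t
    return mask
```

With c = 1 and γ = 0.5 the threshold is simply the square root of the channel-mean difference, so it grows slowly as the exposure gap widens.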
If the color difference between a pixel in the reference image and the pixel at the same position in another image exceeds the corresponding CVDT value, the reference-image pixel is judged a reference-object pixel and receives the full weight, while the weights of all other pixels at that position are cleared to zero.
Given the shape of the power-law curve, if the exposure difference between two images is too large, the computed CVDT minimum gray threshold comes very close to the exposure difference itself, making ideal pixels and ghost pixels hard to distinguish. This embodiment therefore computes CVDT values only between the reference image and at most the four input images whose exposures are closest to it; the other input images are not considered.
Thresholding turns an otherwise continuous weight distribution into discrete weight blocks, which therefore need to be smoothed by a multi-resolution fusion tool. This embodiment uses the standard image Laplacian pyramid, which can fuse images seamlessly, in the fusion process. The smoothing steps are:
blur-filtering each exposure image in the input exposure sequence with a 5 × 5 Gaussian low-pass filtering template;

the 5 × 5 Gaussian low-pass filtering template is:

          1   4   6   4   1
          4  16  24  16   4
(1/256) · 6  24  36  24   6
          4  16  24  16   4
          1   4   6   4   1
1/2 down-sampling the exposure image;
repeating the fuzzy filtering and down-sampling processes until a Gaussian image pyramid with the size reduced by half in sequence is obtained;
up-sampling each image in the Gaussian image pyramid; subtracting the up-sampled layer-l image from the layer-(l−1) image yields a detail image that becomes layer l−1 of the Laplacian pyramid. For the same image, the top layers of the Gaussian and Laplacian pyramids are identical. A Gaussian pyramid is constructed for each normalized weight distribution map controlled by the relative exposure weights, and a Laplacian pyramid is computed from each input image.
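The pyramid construction steps can be sketched with numpy as below. The separable 1-4-6-4-1 binomial kernel and the nearest-neighbour expand step are stand-ins for the patent's exact templates.

```python
import numpy as np

KERNEL_1D = np.array([1., 4., 6., 4., 1.]) / 16.0  # binomial 5-tap row

def blur(img):
    """Separable 5x5 Gaussian low-pass filtering with reflect padding."""
    for axis in (0, 1):
        pad_spec = [(2, 2) if a == axis else (0, 0) for a in (0, 1)]
        p = np.pad(img, pad_spec, mode="reflect")
        n = img.shape[axis]
        img = sum(KERNEL_1D[i] * np.take(p, range(i, i + n), axis=axis)
                  for i in range(5))
    return img

def gaussian_pyramid(img, levels):
    """Blur then 1/2 downsample repeatedly: sizes halve at each level."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1])[::2, ::2])
    return pyr

def upsample(img, shape):
    """Nearest-neighbour upsampling to `shape`, then blur (a simple
    stand-in for the pyramid expand step)."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return blur(up)

def laplacian_pyramid(img, levels):
    """Detail image at layer l-1 = Gaussian layer l-1 minus the upsampled
    Gaussian layer l; the top layers of the two pyramids coincide."""
    g = gaussian_pyramid(img, levels)
    lap = [g[l] - upsample(g[l + 1], g[l].shape) for l in range(levels - 1)]
    lap.append(g[-1])
    return lap
```

On a constant image every detail layer is exactly zero, which is a quick sanity check of the expand/subtract step.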
The fusion algorithm is:

L{R}^l = Σ_{k=1}^{N} G{W̄_k}^l · L{I_k}^l

where N is the number of exposure images in the input exposure sequence, L{R}^l is the l-th layer of the Laplacian pyramid of the fusion result, G{W̄_k}^l is the l-th layer of the Gaussian pyramid of the normalized weight distribution map of the k-th exposure image in the input exposure sequence, and L{I_k}^l is the l-th layer of the Laplacian pyramid of the k-th image in the input exposure sequence.

G{W̄_k}^l and L{I_k}^l are computed separately for each exposure image in the input sequence; the same layers of the two pyramids of the same exposure image are multiplied element-wise, and the N products of the whole N-exposure sequence are accumulated to obtain the l-th layer of the fusion-result Laplacian pyramid. The final fusion result is obtained by recovering L{R} through the inverse of the Laplacian pyramid construction.
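The weighted accumulation and recovery steps can be sketched as follows, with the pyramids passed in as nested lists; `fuse_pyramids` and `collapse` are hypothetical names, and the `upsample` callable must match the expand step used when the pyramids were built.

```python
import numpy as np

def fuse_pyramids(weight_gaussians, image_laplacians):
    """L{R}^l = sum_k G{W_k}^l * L{I_k}^l, layer by layer.
    `weight_gaussians[k][l]` is layer l of the k-th image's normalized-weight
    Gaussian pyramid; `image_laplacians[k][l]` is layer l of its Laplacian
    pyramid."""
    levels = len(image_laplacians[0])
    return [sum(w[l] * lap[l]
                for w, lap in zip(weight_gaussians, image_laplacians))
            for l in range(levels)]

def collapse(lap_pyramid, upsample):
    """Inverse of the Laplacian construction: start at the top layer and
    successively add back the detail images."""
    img = lap_pyramid[-1]
    for detail in reversed(lap_pyramid[:-1]):
        img = upsample(img, detail.shape) + detail
    return img
```

With normalized weights the per-layer products sum to a convex combination of the inputs, so flat regions of the result stay within the input range.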
To improve the detail-enhancement effect, during fusion the detail image represented by the Laplacian pyramid is added to the original image with a coefficient of 0.25. If the coefficient is too low, the enhancement is weak; if it is too high, the image is distorted and loses its natural appearance.
Varying the number of pyramid layers shows that too many layers may lose some information, while too few leave obvious halo artifacts from incomplete smoothing. Based on comparative experiments, this embodiment adopts a 6-layer Laplacian pyramid, which strikes a relative balance between information retention and artifact elimination.
Please refer to fig. 2, which is a comparison graph of the dynamic scene image fusion method in the outdoor scene according to the embodiment of the present invention and the result of the prior art;
fig. 2a is a multi-scale image enhancement technique in the prior art, which improves the information retention of bright and dark areas by using a smoothed weight distribution gaussian pyramid. Fig. 2b is a fusion algorithm in the prior art, which provides two indexes of visibility and consistency based on the gradient magnitude and direction, respectively, for determining weight distribution and eliminating moving objects that are not expected to appear in the result. Fig. 2c shows a new object detection algorithm in the prior art, which corrects the weight distribution map to eliminate the ghost. FIG. 2d shows the fusion result of the embodiment of the present invention.
An obvious ghost appears in fig. 2b. The captured dynamic scene is a public place with heavy foot traffic, shot at different times, so moving objects (passers-by) with different positions and shapes appear in the different input images; some block properly exposed background information, while others stand in improperly exposed areas. As a result, the accumulated gradient-direction differences of some areas are almost identical across the input images and cannot be separated accurately by the Gaussian curve, so the method fails: the weights are spread almost evenly and ghosting is introduced. The other three results have the advantage of anchoring on the reference image: the reference-object detail shown in the result is selected in advance, which removes most possibilities of ghost interference. It follows that, when ideal pixels appear only occasionally in the dynamic exposure sequence, the result of the invention fully copes with randomly occurring moving-object disturbances while minimizing the impact on detail retention.
Please refer to fig. 3, which is a comparison diagram of the result of the dynamic scene image fusion method in the indoor scene according to the embodiment of the present invention and the prior art.
Fig. 3a is a fusion algorithm in the prior art, which respectively provides two indexes of visibility and consistency based on gradient magnitude and direction, and is used for determining weight distribution and eliminating moving objects which are not expected to appear in a result. FIG. 3b illustrates a new object detection algorithm in the prior art, which corrects the weight distribution map to eliminate the ghost. FIG. 3c shows the fusion result of the embodiment of the present invention.
The ghosting caused by moving objects in fig. 3a is severe: the shadows of the doll's face, the dog's ears, and the doll's hands on the sofa moved slightly across the sequence and are blurred in the result, and the football mostly lies in underexposed areas. Static detail retention is nevertheless good: the texture of the teddy bear's hat and legs is shown clearly, and the sofa-side information, which exists only in the brightest input image, is preserved. Fig. 3b eliminates some ghosts, such as the doll-hand shadow and the bright front of the football, but obvious ghosts remain at the dog's ears and the dark part of the football, and detail retention of the bear's hat and legs and of the sofa side is worse than in fig. 3a. The result of the invention retains static detail as well as fig. 3a while eliminating most ghosts, and thus meets the preconditions and requirements for fusing a dynamic sequence with overlapping moving objects.
The embodiment of the invention provides a ghost-removing multi-exposure image fusion method in dynamic scenes that solves two key technical problems: how to respond effectively to the dynamic fusion requirements of different scenes and deliver a good visual experience; and how to make exposure fusion in dynamic scenes avoid the ghost disturbance that moving objects may cause under various conditions, whether or not the original information of the moving objects lies in the moderately exposed area.
The core of the invention lies in the relative exposure weight algorithm, in how abnormal dynamic pixels and ghost pixels are judged and eliminated, and in how a reference image is selected automatically through the moderate exposure rate and the average gray difference. Compared with the prior art, the invention provides an efficient multi-exposure fusion algorithm: first a suitably exposed image in the sequence is selected as the reference image; then relative exposure weights are calculated from pixel gradient-direction information; abnormal dynamic pixels and ghost pixels are removed through dynamic pixel correction; and finally a six-layer image Laplacian pyramid is used for fusion to obtain a ghost-free fused image. A relative balance between information retention and artifact elimination is achieved; the thresholding of dynamic and static areas of the exposure images is more accurate and reliable than in the prior art; the concepts of moderate exposure rate and average gray difference are introduced for reference-image selection, which is more scientific and efficient than the manual selection of the prior art; and for the multi-resolution fusion tool, the mature Laplacian pyramid technique is adopted, with six layers preferred through comparative experiments, improving the detail-enhancement effect while avoiding image distortion.
The present invention is not limited to the above embodiments; any modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the claims and their technical equivalents.

Claims (10)

1. A ghost-removing multi-exposure image fusion method in a dynamic scene, comprising:
obtaining multiple exposure images of a dynamic scene;
inputting the exposure images into an exposure sequence in order of increasing exposure;
selecting one exposure image in the input exposure sequence as a reference image;
calculating relative exposure weights from the gradient-direction information of pixels at the same position in the reference image and the other images of the input exposure sequence to obtain a weight distribution map, and performing thresholding to eliminate improperly exposed static areas;
performing dynamic pixel correction on the thresholded weight distribution map, selecting the dynamic details in the reference image and retaining the static details of the whole input exposure sequence;
rejecting, among the dynamic details, abnormal dynamic pixels that do not satisfy the proportional relationship between pixel brightness and exposure, and ghost pixels whose brightness changes abruptly between two consecutive exposure images, to obtain a filtered weight distribution map;
smoothing the filtered weight distribution map with a multi-resolution fusion tool.
2. The ghost-removing multi-exposure image fusion method in a dynamic scene according to claim 1, wherein the selection of the reference image comprises the following steps:
manually selecting, as candidate images, the most moderately exposed image in the input exposure sequence and the image whose exposure is close to it but whose brightness is higher;
calculating the relative exposure weights of the two candidate images to obtain weight distribution maps;
calculating the moderate exposure rate and average gray difference of the two candidate images:
R_wep = (1/n) Σ_{(x,y)} R_r(x, y)

I_ad = | Ī_r − Ī |

where I_r is the grayscale image of the direct fusion result, Ī_r is the average value of I_r, R_r is the binary image of the direct fusion result, n is the number of pixels of the input image, Ī is the average gray value of the whole exposure sequence, R_wep is the moderate exposure rate, and I_ad is the average gray difference;
if the average of the moderate exposure rates of the two candidate images is less than 0.9, selecting the candidate image with the smaller average gray difference as the reference image; otherwise, selecting the candidate image with the larger moderate exposure rate as the reference image.
3. The ghost-removing multi-exposure image fusion method in a dynamic scene according to claim 1, wherein the step of calculating relative exposure weights from the gradient-direction information of pixels at the same position in the reference image and the other images of the input exposure sequence to obtain a weight distribution map, and thresholding it to eliminate improperly exposed static areas, comprises:
obtaining the gradient information of the image with 5 × 5 Gaussian-kernel gradient templates:

Gx_i(x, y) = g_x * I_i(x, y)

Gy_i(x, y) = g_y * I_i(x, y)

θ_i(x, y) = arctan( Gx_i(x, y) / ( Gy_i(x, y) + 10^(−25) ) )

where I_i(x, y) is the normalized gray value of the pixel at (x, y) in the i-th exposure image of the input exposure sequence, θ_i(x, y) is the gradient direction of that pixel, Gx_i(x, y) and Gy_i(x, y) are the horizontal and vertical gradients of the i-th exposure image, and g_x and g_y are the Gaussian-kernel gradient templates in the horizontal and vertical directions, respectively;
after the gradient direction θ_r of the reference image is calculated, the gradient angle between the pixel at (x, y) in the i-th image and the pixel at the same position in the reference image is computed as:

d_i(x, y) = (1 / (2l+1)²) Σ_{m=−l}^{l} Σ_{n=−l}^{l} | θ_r(x+m, y+n) − θ_i(x+m, y+n) |

where (2l+1) × (2l+1) is a mean filtering template of matching size; to strengthen the robustness of d_i(x, y) and its resistance to interference factors such as noise, l = 4;
from the gradient angle, the relative exposure weight is obtained as

w_i(x, y) = exp( − d_i(x, y)² / (2σ²) )

where σ is the standard deviation; σ = 0.2.
4. The ghost-removing multi-exposure image fusion method in a dynamic scene according to claim 1, wherein the dynamic pixel correction comprises:
substituting, per RGB color channel, the image average brightness value for the exposure in the computation:

Ī = (1/n) Σ_{(x,y)} I(x, y)

where Ī is the average gray value, n is the number of pixels, and I(x, y) is the gray value of any pixel in the exposure image;
judging a pixel an abnormal dynamic pixel and rejecting it whenever it fails the proportional relationship between brightness and exposure in any RGB channel;
judging a pixel a ghost pixel and rejecting it whenever its brightness changes abruptly between two consecutive exposure images of the input exposure sequence.
5. The ghost-removing multi-exposure image fusion method in a dynamic scene according to claim 1, wherein the ghost pixels are judged under the control of a power-law curve:

CVDT = c · | Ī_i − Ī_r |^γ

where CVDT is the minimum grayscale threshold used to reject ghost pixels, Ī_i is the average color value of an RGB channel of the input exposure image, and Ī_r is the average color value of the reference image in the same RGB channel; c = 1 and γ = 0.5;
if the color difference between a pixel in the reference image and the pixel at the same position in another image exceeds the corresponding CVDT value, judging the reference-image pixel a reference-object pixel that receives the full weight, and clearing to zero the weights of all other pixels at that position.
6. The ghost-removing multi-exposure image fusion method in a dynamic scene according to claim 1, wherein smoothing the filtered weight distribution map with the multi-resolution fusion tool comprises the following steps:
blur-filtering each exposure image in the input exposure sequence with a 5 × 5 Gaussian low-pass filtering template;
the 5 × 5 Gaussian low-pass filtering template is:

          1   4   6   4   1
          4  16  24  16   4
(1/256) · 6  24  36  24   6
          4  16  24  16   4
          1   4   6   4   1

down-sampling the exposure image by 1/2;
obtaining a Gaussian image pyramid whose sizes are successively halved;
up-sampling each image in the Gaussian image pyramid; subtracting the up-sampled layer-l image from the layer-(l−1) image yields a detail image as layer l−1 of the image Laplacian pyramid;
the fusion algorithm is:

L{R}^l = Σ_{k=1}^{N} G{W̄_k}^l · L{I_k}^l

where N is the number of exposure images in the input exposure sequence, L{R}^l is the l-th layer of the Laplacian pyramid of the fusion result, G{W̄_k}^l is the l-th layer of the Gaussian pyramid of the normalized weight distribution map of the k-th exposure image, and L{I_k}^l is the l-th layer of the Laplacian pyramid of the k-th image;
computing G{W̄_k}^l and L{I_k}^l separately for each exposure image, multiplying the same layers of the two pyramids of the same exposure image element-wise, and accumulating the N products of the whole N-exposure sequence to obtain the l-th layer of the fusion-result Laplacian pyramid; the fusion result is obtained by recovering L{R} through the inverse of the Laplacian pyramid construction.
7. The ghost-removing multi-exposure image fusion method in a dynamic scene according to claim 6, wherein, during fusion, the detail image represented by the Laplacian pyramid is added to the original image with a coefficient of 0.25; to balance information retention against artifact elimination, a 6-layer Laplacian pyramid is used.
8. The ghost-removing multi-exposure image fusion method in a dynamic scene according to claim 5, wherein, when judging ghost pixels under power-law-curve control, CVDT values are computed at most between the reference image and the four exposure images whose exposures are closest to it.
9. A readable storage medium storing a control program, wherein the control program, when executed by a processor, implements the ghost-removing multi-exposure image fusion method in a dynamic scene according to any one of claims 1 to 8.
10. A computer control system comprising a memory, a processor, and a control program stored in the memory and executable by the processor, wherein the processor, when executing the control program, implements the ghost-removing multi-exposure image fusion method in a dynamic scene according to any one of claims 1 to 8.
CN202011064721.0A 2020-09-30 2020-09-30 A method for removing ghosting and multi-exposure image fusion in dynamic scenes Active CN112785534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011064721.0A CN112785534B (en) 2020-09-30 2020-09-30 A method for removing ghosting and multi-exposure image fusion in dynamic scenes

Publications (2)

Publication Number Publication Date
CN112785534A true CN112785534A (en) 2021-05-11
CN112785534B CN112785534B (en) 2025-03-14

Family

ID=75750475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011064721.0A Active CN112785534B (en) 2020-09-30 2020-09-30 A method for removing ghosting and multi-exposure image fusion in dynamic scenes

Country Status (1)

Country Link
CN (1) CN112785534B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130028509A1 (en) * 2011-07-28 2013-01-31 Samsung Electronics Co., Ltd. Apparatus and method for generating high dynamic range image from which ghost blur is removed using multi-exposure fusion
WO2015014286A1 (en) * 2013-07-31 2015-02-05 华为终端有限公司 Method and apparatus for generating high dynamic range image
CN105551061A (en) * 2015-12-09 2016-05-04 天津大学 Processing method for retaining ghosting-free moving object in high-dynamic range image fusion
CN107845128A (en) * 2017-11-03 2018-03-27 安康学院 A kind of more exposure high-dynamics image method for reconstructing of multiple dimensioned details fusion
CN109754377A (en) * 2018-12-29 2019-05-14 重庆邮电大学 A Multi-Exposure Image Fusion Method

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222954A (en) * 2021-05-21 2021-08-06 大连海事大学 Multi-exposure image ghost-free fusion method based on patch alignment under global gradient
CN113222954B (en) * 2021-05-21 2024-03-29 大连海事大学 Ghost-free fusion method of multi-exposure images under global gradient based on patch alignment
CN113345356A (en) * 2021-06-15 2021-09-03 合肥维信诺科技有限公司 Ghost testing method and device and storage medium
CN113344820A (en) * 2021-06-28 2021-09-03 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic equipment
CN113344820B (en) * 2021-06-28 2024-05-10 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic equipment
WO2023273868A1 (en) * 2021-06-29 2023-01-05 展讯通信(上海)有限公司 Image denoising method and apparatus, terminal, and storage medium
CN115835011A (en) * 2021-09-15 2023-03-21 Oppo广东移动通信有限公司 Image processing chip, application processing chip, electronic device, and image processing method
CN115439384A (en) * 2022-09-05 2022-12-06 中国科学院长春光学精密机械与物理研究所 A ghost-free multi-exposure image fusion method and device
WO2024094222A1 (en) * 2022-11-02 2024-05-10 深圳深知未来智能有限公司 Multi-exposure image fusion method and system based on image region validity
CN116563190A (en) * 2023-07-06 2023-08-08 深圳市超像素智能科技有限公司 Image processing method, device, computer equipment and computer readable storage medium
CN116563190B (en) * 2023-07-06 2023-09-26 深圳市超像素智能科技有限公司 Image processing method, device, computer equipment and computer readable storage medium
CN117372302A (en) * 2023-10-08 2024-01-09 北京辉羲智能科技有限公司 Automatic driving high dynamic image alignment and ghost removal method and device

Also Published As

Publication number Publication date
CN112785534B (en) 2025-03-14

Similar Documents

Publication Publication Date Title
CN112785534A (en) Ghost-removing multi-exposure image fusion method in dynamic scene
EP4050558B1 (en) Image fusion method and apparatus, storage medium, and electronic device
US10666873B2 (en) Exposure-related intensity transformation
CN104717432B (en) Method for processing a set of input images, image processing device, and digital camera
EP3631754B1 (en) Image processing apparatus and method
CN110599433B (en) A double-exposure image fusion method based on dynamic scenes
EP1800259B1 (en) Image segmentation method and system
WO2022021999A1 (en) Image processing method and image processing apparatus
CN111741211A (en) Image display method and apparatus
JP2001126075A (en) Image processing method and apparatus, and recording medium
CN115223004A (en) Image enhancement method based on an improved multi-scale fusion generative adversarial network
CN109493283A (en) A method for eliminating ghosting in high dynamic range images
Lee et al. Image contrast enhancement using classified virtual exposure image fusion
CN111028165A (en) A camera-shake-resistant high-dynamic-range image restoration method based on RAW data
CN113256533B (en) Self-adaptive low-illumination image enhancement method and system based on MSRCR
CN115063331B (en) Ghost-free multi-exposure image fusion method based on multi-scale block LBP operator
Xu et al. Color-compensated multi-scale exposure fusion based on physical features
US11625886B2 (en) Storage medium storing program, training method of machine learning model, and image generating apparatus
CN115883755A (en) Multi-exposure image fusion method under multi-type scene
CN112258434A (en) Detail-preserving multi-exposure image fusion algorithm for static scenes
CN112907467A (en) Rainbow pattern removing method and device and electronic equipment
CN112381724A (en) Wide-dynamic-range image enhancement method based on a multi-exposure fusion framework
CN112561813A (en) Face image enhancement method and device, electronic equipment and storage medium
CN119091379A (en) A machine vision unit for smart community security management system
CN113379611B (en) Image processing model generation method, processing method, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant