
CN106251365A - Multi-exposure video fusion method and device - Google Patents


Info

Publication number
CN106251365A
Authority
CN
China
Prior art keywords
processed
frame
weight map
reference frame
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610587415.2A
Other languages
Chinese (zh)
Inventor
杜军平 (Du Junping)
徐亮 (Xu Liang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201610587415.2A
Publication of CN106251365A


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a multi-exposure video fusion method and device. The method includes: determining a reference frame and at least one frame to be processed in an acquired multi-exposure image sequence, and aligning each frame to be processed to the reference frame according to a coherency-sensitive hashing algorithm; calculating a weight map for each aligned frame to be processed from its image features, and calculating a weight map for the reference frame from the reference frame's image features; and optimizing the weight map of each aligned frame to be processed using the reference frame, then fusing the optimized weight maps with the weight map of the reference frame to obtain a fused image. With the multi-exposure video fusion method and device of the present invention, a multi-exposure image sequence can be fused even when the captured scene contains moving objects, yielding a clear fused image free of blur and ghosting.

Description

Multi-exposure video fusion method and device

Technical Field

The present invention relates to the field of video processing, and in particular to a multi-exposure video fusion method and device.

Background Art

Multi-exposure video fusion refers to fusing the multi-exposure image sequence of a captured multi-exposure video into a single well-exposed fused image. The multi-exposure video is obtained as follows: several cameras with different exposure settings record the same scene, producing a multi-exposure video composed of a multi-exposure image sequence, where each multi-exposure image is also called a video frame.

To obtain a well-exposed fused image, the prior art provides a variety of multi-exposure video fusion methods, for example: a Laplacian-pyramid-based fusion method for color images; a probabilistic fusion method, formulated in a random-walk framework, that uses local contrast and neighborhood color-consistency measures; and a method that builds a probabilistic model under a maximum a posteriori framework to infer the fused pixels.

However, the fusion methods in the prior art are all designed for static scenes and do not account for the effect of moving objects on the fusion result. When the captured scene contains moving objects, the fused images obtained by prior-art methods suffer from blur and ghosting, and a complete, clear fused image cannot be obtained.

Summary of the Invention

In view of this, the purpose of the present invention is to provide a multi-exposure video fusion method and device that can fuse a multi-exposure image sequence even when the captured scene contains moving objects, yielding a clear fused image free of blur and ghosting.

In a first aspect, an embodiment of the present invention provides a multi-exposure video fusion method. The method includes: determining a reference frame and at least one frame to be processed in an acquired multi-exposure image sequence, and aligning each frame to be processed to the reference frame according to a coherency-sensitive hashing algorithm; calculating a weight map for each aligned frame to be processed from its image features, and calculating a weight map for the reference frame from the reference frame's image features; and optimizing the weight map of each aligned frame to be processed using the reference frame, then fusing the optimized weight maps with the weight map of the reference frame to obtain a fused image.
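The three-stage method of the first aspect can be sketched as below. This is a minimal illustration, not the patent's implementation: the callables `align`, `weight_map`, and `refine` are hypothetical stand-ins for the CSH alignment, feature-based weighting, and reference-guided optimization steps described above.

```python
import numpy as np

def fuse_exposure_sequence(frames, ref_idx, align, weight_map, refine):
    # Stage 1: align every to-be-processed frame to the reference frame.
    ref = frames[ref_idx]
    aligned = [f if i == ref_idx else align(f, ref)
               for i, f in enumerate(frames)]
    # Stage 2: compute a weight map per frame from its image features.
    weights = [weight_map(f) for f in aligned]
    # Stage 3: refine the to-be-processed weight maps using the reference
    # frame, then fuse with per-pixel normalized weights.
    weights = [w if i == ref_idx else refine(w, ref)
               for i, w in enumerate(weights)]
    total = np.sum(weights, axis=0) + 1e-12
    return np.sum([(w / total) * f for w, f in zip(weights, aligned)], axis=0)
```

With identity stand-ins and uniform weights, the result reduces to a per-pixel average of the frames, which is a convenient sanity check for the plumbing.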

With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, in which aligning each frame to be processed to the reference frame according to a coherency-sensitive hashing algorithm includes: performing scale decomposition on each frame to be processed and on the reference frame, obtaining a set of subframes to be processed for each frame to be processed and a set of reference subframes for the reference frame; partitioning each reference subframe in the reference subframe set and each subframe to be processed in each to-be-processed subframe set into blocks; aligning each partitioned subframe to be processed in each to-be-processed subframe set to the corresponding partitioned reference subframe in the reference subframe set according to the coherency-sensitive hashing algorithm; correcting mismatches in each aligned subframe to be processed using a grayscale mapping function, and optimizing the mismatch-correction result using a Poisson video fusion method; and reconstructing the optimized subframes in each to-be-processed subframe set to obtain each aligned frame to be processed.

With reference to the first possible implementation of the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, in which aligning each partitioned subframe to be processed to the corresponding partitioned reference subframe according to the coherency-sensitive hashing algorithm includes: determining an image block to be matched in the current partitioned subframe to be processed of the current to-be-processed subframe set, and determining, in the current partitioned reference subframe of the reference subframe set, a plurality of neighborhood image blocks corresponding to the image block to be matched; calculating, according to the coherency-sensitive hashing algorithm, the matching distance between the image block to be matched and each neighborhood image block, and aligning the image block to be matched to the neighborhood image block with the smallest matching distance; and repeating the actions of determining an image block to be matched, determining neighborhood image blocks, calculating matching distances, and aligning image blocks until every partitioned subframe to be processed in every to-be-processed subframe set is aligned to its corresponding partitioned reference subframe in the reference subframe set.

With reference to the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, in which calculating the weight maps includes: calculating the phase-congruency, local-contrast, and color-saturation features of each aligned frame to be processed, and of the reference frame; calculating an initial weight map for each aligned frame to be processed from its phase-congruency, local-contrast, and color-saturation features, and an initial weight map for the reference frame from the reference frame's features; and normalizing the initial weight maps of the aligned frames to be processed to obtain their weight maps, and normalizing the initial weight map of the reference frame to obtain its weight map.
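A minimal sketch of this weighting stage. It is illustrative only: the phase-congruency feature is omitted (it requires a log-Gabor filter bank), a discrete Laplacian stands in for local contrast, and the product combination rule and helper names are assumptions, not taken from the patent.

```python
import numpy as np

def local_contrast(gray):
    # Absolute response of a discrete Laplacian as a local-contrast measure.
    lap = (-4.0 * gray
           + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1))
    return np.abs(lap)

def color_saturation(rgb):
    # Standard deviation across the R, G, B channels at each pixel.
    return rgb.std(axis=2)

def initial_weight(rgb, eps=1e-12):
    gray = rgb.mean(axis=2)
    # Product combination of the features (an assumed rule); eps keeps
    # every weight strictly positive so normalization is well defined.
    return local_contrast(gray) * color_saturation(rgb) + eps

def normalize_weights(initial_maps):
    # Per-pixel normalization across frames so the weights sum to one.
    total = np.sum(initial_maps, axis=0)
    return [w / total for w in initial_maps]
```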

With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, in which optimizing the weight map of each aligned frame to be processed using the reference frame includes: applying a guided filter, with the reference frame as the guide, to the weight map of each aligned frame to be processed via the following formulas:

q_i = Σ_j W̄_ij(G) p_j;

W̄_ij(G) = (1 / |ω|²) Σ_{z : (i,j) ∈ ω_z} ( 1 + (G_i − μ_z)(G_j − μ_z) / (σ_z² + ε) );

where i and j are pixel indices; G is the reference frame; W̄_ij(G) is the filter kernel; p_j is the weight map of an aligned frame to be processed; q_i is the optimized weight map; ω_z is the local window centered at z; |ω| is the number of pixels in ω_z; μ_z and σ_z² are the mean and variance of the reference frame over ω_z; ε is a regularization parameter; and G_i and G_j are the i-th and j-th pixels of the reference frame G.
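The kernel W̄_ij(G) above is the guided-filter kernel, which in practice is evaluated in its closed form rather than pixel-by-pixel. A minimal numpy sketch of that closed form, with the reference frame as the guide (the window radius r and ε defaults are illustrative assumptions):

```python
import numpy as np

def box_mean(img, r):
    # Mean over a (2r+1) x (2r+1) window, edge-padded, via integral images.
    n = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    S = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    S[1:, 1:] = pad.cumsum(axis=0).cumsum(axis=1)
    H, W = img.shape
    return (S[n:n + H, n:n + W] - S[:H, n:n + W]
            - S[n:n + H, :W] + S[:H, :W]) / (n * n)

def guided_filter(G, p, r=2, eps=1e-3):
    # Closed form of q_i = sum_j W_ij(G) p_j, with guide G and input p.
    mu = box_mean(G, r)
    var = box_mean(G * G, r) - mu * mu
    mean_p = box_mean(p, r)
    cov_Gp = box_mean(G * p, r) - mu * mean_p
    a = cov_Gp / (var + eps)          # per-window linear coefficients
    b = mean_p - a * mu
    return box_mean(a, r) * G + box_mean(b, r)
```

A quick property check: a constant weight map passes through unchanged, since the filter is a local linear model of p in terms of G.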

With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation of the first aspect, in which fusing each optimized weight map with the weight map of the reference frame to obtain a fused image includes: fusing them via the following formula:

F = Σ_{k=1}^{N} Ŵ_k f_k;

where F is the resulting fused image, k the index of a frame, N the total number of frames, Ŵ_k the optimized weight map of the k-th frame, and f_k the matrix of the k-th frame.

In a second aspect, an embodiment of the present invention provides a multi-exposure video fusion device. The device includes: an alignment module, configured to determine a reference frame and at least one frame to be processed in an acquired multi-exposure image sequence and to align each frame to be processed to the reference frame according to a coherency-sensitive hashing algorithm; a weight-map calculation module, configured to calculate a weight map for each aligned frame to be processed from its image features and a weight map for the reference frame from the reference frame's image features; and a fusion module, configured to optimize the weight map of each aligned frame to be processed using the reference frame and to fuse the optimized weight maps with the weight map of the reference frame to obtain a fused image.

With reference to the second aspect, an embodiment of the present invention provides a first possible implementation of the second aspect, in which the alignment module includes: a scale-decomposition unit, configured to perform scale decomposition on each frame to be processed and on the reference frame to obtain a set of subframes to be processed for each frame to be processed and a set of reference subframes for the reference frame; a partitioning unit, configured to partition each reference subframe in the reference subframe set and each subframe to be processed in each to-be-processed subframe set into blocks; an alignment unit, configured to align each partitioned subframe to be processed in each to-be-processed subframe set to the corresponding partitioned reference subframe in the reference subframe set according to the coherency-sensitive hashing algorithm; an optimization unit, configured to correct mismatches in each aligned subframe to be processed using a grayscale mapping function and to optimize the mismatch-correction result using a Poisson video fusion method; and a reconstruction unit, configured to reconstruct the optimized subframes in each to-be-processed subframe set to obtain each aligned frame to be processed.

With reference to the first possible implementation of the second aspect, an embodiment of the present invention provides a second possible implementation of the second aspect, in which the alignment unit includes: a neighborhood-image-block determination subunit, configured to determine an image block to be matched in the current partitioned subframe to be processed of the current to-be-processed subframe set and to determine, in the current partitioned reference subframe of the reference subframe set, a plurality of neighborhood image blocks corresponding to the image block to be matched; an image-block alignment subunit, configured to calculate, according to the coherency-sensitive hashing algorithm, the matching distance between the image block to be matched and each neighborhood image block and to align the image block to be matched to the neighborhood image block with the smallest matching distance; and a repetition subunit, configured to repeat the actions of determining an image block to be matched, determining neighborhood image blocks, calculating matching distances, and aligning image blocks until every partitioned subframe to be processed in every to-be-processed subframe set is aligned to its corresponding partitioned reference subframe in the reference subframe set.

With reference to the second aspect, an embodiment of the present invention provides a third possible implementation of the second aspect, in which the weight-map calculation module includes: a feature calculation unit, configured to calculate the phase-congruency, local-contrast, and color-saturation features of each aligned frame to be processed and of the reference frame; an initial-weight-map calculation unit, configured to calculate an initial weight map for each aligned frame to be processed from its phase-congruency, local-contrast, and color-saturation features, and an initial weight map for the reference frame from the reference frame's features; and a weight-map normalization unit, configured to normalize the initial weight maps of the aligned frames to be processed to obtain their weight maps and to normalize the initial weight map of the reference frame to obtain its weight map.

The method and device in the embodiments of the present invention first determine a reference frame and at least one frame to be processed in the acquired multi-exposure image sequence and align each frame to be processed to the reference frame according to a coherency-sensitive hashing algorithm; next, they calculate a weight map for each aligned frame to be processed from its image features and a weight map for the reference frame from the reference frame's image features; finally, they optimize the weight map of each aligned frame to be processed using the reference frame and fuse the optimized weight maps with the weight map of the reference frame to obtain a fused image. Because the method and device align each frame to be processed to the reference frame according to a coherency-sensitive hashing algorithm, they can fuse a multi-exposure image sequence even when the captured scene contains moving objects, yielding a clear fused image free of blur and ghosting.

To make the above objects, features, and advantages of the present invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.

Brief Description of the Drawings

To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention and therefore should not be regarded as limiting the scope; those of ordinary skill in the art can derive other related drawings from them without creative effort.

Fig. 1 is a schematic flowchart of the multi-exposure video fusion method provided by an embodiment of the present invention;

Fig. 2 is a schematic structural diagram of the multi-exposure video fusion device provided by an embodiment of the present invention.

Detailed Description

To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments, as generally described and illustrated in the figures herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the accompanying drawings is not intended to limit the scope of the claimed invention but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Considering that the fusion methods in the prior art are all designed for static scenes and do not account for the effect of moving objects on the fusion result, so that when the captured scene contains moving objects the fused images suffer from blur and ghosting and no complete, clear fused image can be obtained, the present invention provides a multi-exposure video fusion method and device, described in detail below with reference to the embodiments. Note that in this embodiment, to simplify the language, a video frame is referred to simply as a frame: the reference frame is the reference video frame, and a frame to be processed is a video frame to be processed.

Fig. 1 is a schematic flowchart of the multi-exposure video fusion method provided by an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:

Step S102: determine a reference frame and at least one frame to be processed in the acquired multi-exposure image sequence, and align each frame to be processed to the reference frame according to the coherency-sensitive hashing algorithm.

In this embodiment, considering that when the acquired multi-exposure image sequence contains a moving object the final fused image may exhibit blur and ghosting, a reference frame and frames to be processed are selected and each frame to be processed is aligned to the reference frame, so as to avoid blur and ghosting in the fused image.

In this embodiment, the video frame with the fewest over-saturated or under-saturated pixels in the multi-exposure image sequence is selected as the reference frame. In one case, all video frames in the sequence other than the reference frame are taken as frames to be processed. In another case, when the sequence contains many video frames and the computation cost would be high, or when it contains video frames that do not need processing, only some of the video frames in the sequence are selected as frames to be processed.
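A sketch of this reference-frame selection criterion. The patent states the rule (fewest over- or under-saturated pixels) but not the saturation thresholds; the `lo` and `hi` cutoffs on a [0, 1] intensity scale are assumptions.

```python
import numpy as np

def select_reference(frames, lo=0.05, hi=0.95):
    # Count under-saturated (< lo) and over-saturated (> hi) pixels per
    # frame and pick the index of the frame with the fewest such pixels.
    def saturated_count(f):
        return int(np.count_nonzero((f < lo) | (f > hi)))
    return min(range(len(frames)), key=lambda i: saturated_count(frames[i]))
```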

Aligning each frame to be processed to the reference frame according to the coherency-sensitive hashing algorithm specifically includes the following steps (1) to (4):

(1) Perform scale decomposition on each frame to be processed and on the reference frame to obtain a set of subframes to be processed for each frame to be processed and a set of reference subframes for the reference frame.

Define the reference frame as R and a frame to be processed as S. The reference frame is scale-decomposed by formula (1) below, yielding the reference subframe set, which consists of the multiple reference subframes obtained from the decomposition; each frame to be processed is scale-decomposed by formula (2) below, yielding the corresponding set of subframes to be processed, which consists of the multiple subframes obtained from that decomposition.

P(x, y, R, σ−1) = P_{2↓}(x, y, R, σ);   (1)

P(x, y, S, σ−1) = P_{2↓}(x, y, S, σ);   (2)

In formulas (1) and (2), {P(x, y, Δ, σ−1) : Δ = R, S} denotes the scale function, (x, y) the pixel position, and σ the pyramid level, with σ taking values in [3, 5]; 2↓ denotes the downsampling factor.
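Formulas (1) and (2) make each pyramid level a 2x downsampling of the previous one. A minimal sketch under the assumption of a 2x2 box average (the patent does not fix the downsampling filter):

```python
import numpy as np

def scale_decompose(frame, levels=3):
    # Build the subframe set: each level is the previous level
    # downsampled by a factor of 2, per formulas (1) and (2).
    subframes = [frame]
    for _ in range(levels - 1):
        f = subframes[-1]
        h, w = (f.shape[0] // 2) * 2, (f.shape[1] // 2) * 2
        f = f[:h, :w]  # crop odd edges so the 2x2 average is well defined
        subframes.append(0.25 * (f[0::2, 0::2] + f[1::2, 0::2]
                                 + f[0::2, 1::2] + f[1::2, 1::2]))
    return subframes  # ordered fine to coarse
```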

(2) Partition each reference subframe in the reference subframe set and each subframe to be processed in each to-be-processed subframe set into blocks; the partitioning may be done with a sliding window.

(3) According to the coherency-sensitive hashing algorithm, align each partitioned subframe to be processed in each to-be-processed subframe set to the corresponding partitioned reference subframe in the reference subframe set.

For example, suppose a to-be-processed subframe set A contains five subframes A1, A2, A3, A4, and A5, ordered from smallest to largest image size, and the reference subframe set B contains five reference subframes B1, B2, B3, B4, and B5, likewise ordered from smallest to largest. After partitioning A1 through A5 and B1 through B5 into blocks, A1 is aligned to B1, A2 to B2, and so on up to A5 to B5; that is, each partitioned subframe to be processed is aligned to the partitioned reference subframe of the same image size.

In action (3), aligning each partitioned subframe to be processed in each to-be-processed subframe set to the corresponding partitioned reference subframe in the reference subframe set can be decomposed into the following actions:

(31) Determine an image block to be matched in the current partitioned subframe to be processed of the current to-be-processed subframe set, and determine, in the current partitioned reference subframe of the reference subframe set, the plurality of neighborhood image blocks corresponding to the image block to be matched.

The neighborhood image blocks can be determined as follows: after the first image block to be matched is determined in the current partitioned subframe to be processed, the image block matching it is found in the reference subframe and called the first matched image block. The second image block to be matched is then determined in the neighborhood of the first image block to be matched, and the image blocks in the neighborhood of the first matched image block are taken as the neighborhood image blocks of the second image block to be matched; the block matching the second image block to be matched is searched for among these neighborhood image blocks. In other words, the image blocks in the neighborhood of the previously matched block in the reference subframe serve as the neighborhood image blocks of the current image block to be matched.
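This propagation scheme can be sketched as follows. The function and parameter names are hypothetical: `neighborhood` stands in for the rule that maps the previous match to candidate block indices, and an exhaustive search replaces the CSH hashing step for the first block.

```python
import numpy as np

def align_blocks(query_blocks, ref_blocks, neighborhood):
    # Match the first query block against all reference blocks; for each
    # subsequent block, search only the neighborhood of the previous match.
    matches = []
    prev = None
    for y in query_blocks:
        cand = range(len(ref_blocks)) if prev is None else neighborhood(prev)
        best = min(cand, key=lambda i: float(np.linalg.norm(ref_blocks[i] - y)))
        matches.append(best)
        prev = best
    return matches
```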

(32)根据一致性敏感哈希算法分别计算待匹配图像块与各个邻域图像块之间的匹配距离,将待匹配图像块对齐至最小的匹配距离对应的邻域图像块。(32) Calculate the matching distance between the image block to be matched and each neighborhood image block according to the consistency-sensitive hash algorithm, and align the image block to be matched to the neighborhood image block corresponding to the smallest matching distance.

通过以下公式(3)计算待匹配图像块与各个邻域图像块之间的匹配距离:Calculate the matching distance between the image block to be matched and each neighborhood image block by the following formula (3):

d_y = ||x − y|| = min{ ||x′ − y|| ; x′ ∈ S } ;    (3)

where x′ denotes a neighborhood image block, y denotes the image block to be matched, x denotes the neighborhood image block with the smallest matching distance to y, d_y denotes the calculated minimum matching distance, and S denotes the set of neighborhood image blocks.
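As a concrete illustration of formula (3), the sketch below picks the neighborhood block with the smallest matching distance by brute force; the actual method accelerates the candidate search with coherency sensitive hashing, but the distance rule is the same. The function name and the use of the Euclidean norm on raw pixel values are assumptions of this sketch.

```python
import numpy as np

def best_match(y, candidates):
    """Pick the neighborhood block x' in S minimizing ||x' - y|| (formula (3)).

    y          -- image block to be matched (2-D array)
    candidates -- list of neighborhood image blocks, each the same shape as y
    Returns (index of the best block, minimum matching distance d_y).
    """
    dists = [np.linalg.norm(x.astype(float) - y.astype(float)) for x in candidates]
    best = int(np.argmin(dists))
    return best, dists[best]
```

The returned distance is also what the mismatch check of action (4) would compare against the distance threshold.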

(33) Repeat the above actions of determining the image block to be matched, determining the neighborhood image blocks, calculating the matching distances and aligning the image block to be matched, until every block-divided subframe to be processed in every subframe set has been aligned to the corresponding block-divided reference subframe in the reference subframe set.

In this step, the actions of determining the image block to be matched, determining the neighborhood image blocks, calculating the matching distances and aligning the image block to be matched may first be repeated until every block-divided subframe in the current subframe set to be processed is aligned to the corresponding block-divided reference subframe in the reference subframe set; these actions are then repeated for the remaining subframe sets until every block-divided subframe to be processed in every subframe set is aligned to its corresponding block-divided reference subframe.

Through actions (31) to (33), every block-divided subframe to be processed in each subframe set can be aligned to the corresponding block-divided reference subframe in the reference subframe set.

(4) Use a grayscale mapping function to perform mismatch correction on each aligned subframe to be processed in each subframe set, and optimize the mismatch-correction result using the Poisson video fusion method.

Take the currently aligned subframe to be processed as an example: every image block in it corresponds to a matching distance, and each matching distance is checked. When a matching distance is found not to satisfy a preset requirement, mismatch correction is performed on the currently aligned subframe, and the correction result is optimized with the Poisson video fusion method. A distance threshold is set; when the minimum matching distance calculated above exceeds this threshold, the minimum matching distance is determined not to satisfy the preset requirement.

In this step, the grayscale mapping function is used for initial hole filling, correcting mismatches in the currently aligned subframe to be processed and improving the matching accuracy. The grayscale mapping function is defined as follows:

τ_c = argmin_τ Σ_{x,y} || τ(R_c(x,y)) − S_c^w(x,y) ||_1 ;    (4)

where τ_c′(·) ≥ 0, τ_c(·) ∈ [0,1] and c ∈ {r,g,b}; S_c^w(x,y) denotes the value of color channel c of the currently aligned subframe to be processed, and R_c denotes the value of color channel c of the reference subframe corresponding to the currently aligned subframe. The color-channel values here are the r, g and b channel values, i.e. the red, green and blue channel components, and (x,y) denotes the pixel position. The initial value of the grayscale mapping function τ_c is set according to the gray-level histogram of the reference subframe corresponding to the currently aligned subframe. To eliminate fitting deviations caused by outliers, the optimization is preferably solved by iteratively reweighted least squares.

Formula (4) updates the value of each color channel of the currently aligned subframe to be processed; the subframe with the updated color-channel values is the result of the mismatch correction.

Optimizing the mismatch-correction result with the Poisson video fusion method means extracting the gradient information of the reference subframe corresponding to the mismatch-correction result and synthesizing the extracted gradient information with the mismatch-correction result using the Poisson video fusion method, thereby optimizing the result. The gradient information of the corresponding reference subframe may alternatively be replaced with the gradient information of the current subframe to be processed (i.e. the subframe before alignment).

(5) Reconstruct the optimized subframes to be processed in each subframe set to obtain each aligned frame to be processed.

Take frame A′ to be processed as an example. Frame A′ corresponds to subframe set A, which contains five subframes A1, A2, A3, A4 and A5 arranged in order of increasing image size; A1 to A5 are the sub-images obtained by scale decomposition of frame A′. After the optimization of action (4) has been performed on A1 to A5, the aligned A1 to A5 are reconstructed, i.e. synthesized, to obtain the aligned frame A0 to be processed.

Through the above actions (1) to (5), each frame to be processed can be aligned to the reference frame according to the coherency sensitive hashing algorithm to obtain the aligned frames to be processed, avoiding the blurring and ghosting that appear in the fused image when moving objects are present in the multi-exposure image sequence.

Step S104: calculate the weight map of each aligned frame to be processed according to its image features, and calculate the weight map of the reference frame according to the image features of the reference frame.

This step can be decomposed into the following three steps:

(1) Calculate the phase-congruency feature, local-contrast feature and color-saturation feature of each aligned frame to be processed, and calculate the phase-congruency, local-contrast and color-saturation features of the reference frame.

To measure the importance of each pixel in a video frame and obtain the frame's weight map, this embodiment proposes a feature-based weight-estimation method. Once the frames to be processed have been aligned, three image features are integrated to estimate the pixel weights: phase congruency, local contrast and color saturation. By integrating these three feature measures, the weights of corresponding pixels at different exposure levels can be measured accurately.

The process of computing the phase-congruency, local-contrast and color-saturation features of each aligned frame to be processed is the same as that for the reference frame, so the computation for the currently aligned frame to be processed is described below as an example.

The currently aligned frame to be processed is converted from the RGB color space to the YIQ color space, where Y carries the luminance information and I and Q carry the chrominance information. The conversion from RGB to YIQ is given by formula (5):

[ Y ]   [ 0.299   0.587   0.114 ] [ R ]
[ I ] = [ 0.596  −0.274  −0.322 ] [ G ] ;    (5)
[ Q ]   [ 0.211  −0.523   0.312 ] [ B ]

In formula (5), R, G and B denote the red, green and blue channel component values, respectively.
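The conversion of formula (5) is a fixed linear map, so it can be applied to a whole frame with one matrix product; a minimal sketch (the function name is an assumption):

```python
import numpy as np

# rows of the RGB -> YIQ matrix from formula (5)
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):
    """img: H x W x 3 array of R, G, B values; returns H x W x 3 Y, I, Q values."""
    return img @ RGB2YIQ.T
```

For a white pixel (R = G = B = 1) the first row sums to 1 and the other two rows sum to 0, i.e. Y = 1 and I = Q = 0, as expected for an achromatic input.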

To compute the phase-congruency feature of the currently aligned frame to be processed, its luminance component Y is convolved with a two-dimensional log-Gabor filter, whose transfer function is computed as follows:

LG(ω, θ_j) = exp( −(log(ω/ω₀))² / (2σ_r²) ) · exp( −(θ − θ_j)² / (2σ_θ²) ) ;    (6)

where ω₀ is the center frequency of the filter, θ_j = jπ/J with j = {0, 1, …, J−1} is the orientation angle of the filter, J is the number of orientations, σ_r controls the bandwidth of the filter, σ_θ determines the angular bandwidth of the filter, and θ denotes the angle at which the two filters are tangent. Empirically, J = 4; the values of ω₀ at the four scales are 1/6, 1/12, 1/24 and 1/48; σ_r = 0.5978 and σ_θ = 0.6545. The convolution produces a set of quadrature vectors [e_{n,o}(x,y), o_{n,o}(x,y)] (scale n, orientation o, pixel position (x,y)), from which the local amplitude is computed as:

A_{n,o}(x,y) = √( e_{n,o}(x,y)² + o_{n,o}(x,y)² ) ;    (7)

The phase-congruency feature of the currently aligned frame to be processed at position (x,y) is then computed as:

PC_k(x,y) = √( (Σ_o Σ_n e_{n,o}(x,y))² + (Σ_o Σ_n o_{n,o}(x,y))² ) / ( ε + Σ_o Σ_n A_{n,o}(x,y) ) ;    (8)

where ε is a small positive constant. The value of the phase-congruency feature PC_k(x,y) lies between 0 and 1; the closer PC_k(x,y) is to 1, the more salient the feature.
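A compact frequency-domain sketch of formulas (6) to (8), assuming the log-Gabor filters are applied via the FFT, the angular difference is wrapped to (−π, π], and the essentially one-sided filters yield the quadrature pair as the real and imaginary parts of the inverse transform; the function names, the DC-bin handling and the value of ε are assumptions of this sketch.

```python
import numpy as np

def log_gabor_bank(rows, cols, scales=4, orients=4, sigma_r=0.5978, sigma_t=0.6545):
    """Yield the frequency-domain log-Gabor filters LG(w, theta_j) of formula (6)."""
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                       # placeholder at DC to avoid log(0)
    theta = np.arctan2(fy, fx)
    for n in range(scales):
        w0 = 1.0 / (6 * 2 ** n)              # centre frequencies 1/6, 1/12, 1/24, 1/48
        radial = np.exp(-np.log(radius / w0) ** 2 / (2 * sigma_r ** 2))
        radial[0, 0] = 0.0                   # no DC response
        for j in range(orients):
            tj = j * np.pi / orients         # theta_j = j * pi / J
            dt = np.arctan2(np.sin(theta - tj), np.cos(theta - tj))  # wrap angle
            yield radial * np.exp(-dt ** 2 / (2 * sigma_t ** 2))

def phase_congruency(Y, eps=1e-4):
    """PC_k(x, y) of formula (8) from the quadrature responses e, o of formula (7)."""
    F = np.fft.fft2(Y.astype(float))
    se = np.zeros(Y.shape); so = np.zeros(Y.shape); sA = np.zeros(Y.shape)
    for lg in log_gabor_bank(*Y.shape):
        resp = np.fft.ifft2(F * lg)          # quadrature pair (even, odd) response
        e, o = resp.real, resp.imag
        se += e; so += o
        sA += np.hypot(e, o)                 # local amplitude A_{n,o}, formula (7)
    return np.hypot(se, so) / (eps + sA)
```

By the triangle inequality the numerator never exceeds the amplitude sum, so the output stays in [0, 1) as the text states.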

Because the phase-congruency feature is contrast-invariant while contrast information affects the human visual system's perception of video quality, local contrast is adopted as the second measure, complementary to the phase-congruency feature.

In this embodiment, the local-contrast feature is measured with the gradient energy of the video frame; the gradient measure assigns higher weights to important detail features such as edges and textures. The video-frame gradient can be computed with convolution masks. The Sobel operator is used here as the gradient operator, computed as follows:

1/4 · [ 1   0  −1 ]        1/4 · [  1   2   1 ]
      [ 2   0  −2 ]  ,           [  0   0   0 ]  ;    (9)
      [ 1   0  −1 ]              [ −1  −2  −1 ]

The luminance channel Y of the currently aligned frame to be processed is convolved with the Sobel operator along the horizontal and vertical directions of the video frame, yielding the two directional gradients G_x and G_y; the local-contrast feature G_k of the currently aligned frame to be processed is then computed as:

G_k = √( G_x² + G_y² ) ;    (10)
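Formulas (9) and (10) amount to two 3×3 convolutions followed by a magnitude; a naive zero-padded sketch (the helper names and the padding choice are assumptions):

```python
import numpy as np

SOBEL_X = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float) / 4.0
SOBEL_Y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], float) / 4.0

def conv3_same(img, k):
    """Zero-padded 'same' convolution of img with a 3x3 kernel k."""
    p = np.pad(img.astype(float), 1)
    kf = k[::-1, ::-1]                       # flip the kernel for true convolution
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * kf)
    return out

def local_contrast(Y):
    """G_k of formula (10): gradient magnitude of the luminance channel."""
    gx = conv3_same(Y, SOBEL_X)
    gy = conv3_same(Y, SOBEL_Y)
    return np.hypot(gx, gy)
```

On a flat region both gradients vanish, while an intensity step produces a strong response along the edge, which is exactly the behavior the weight map exploits.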

Both the phase-congruency and local-contrast features are measured only on the luminance channel. For color video frames, an accurate weight map cannot be obtained using phase congruency and local contrast alone, so a color-saturation measure is adopted as the third feature. The color-saturation feature S_k is defined as the standard deviation of each pixel over the R, G and B channels, computed as follows:

S_k = √( ( (R − m̄)² + (G − m̄)² + (B − m̄)² ) / 3 ) ;    (11)

m̄ = (R + G + B) / 3 ;    (12)

where m̄ denotes the mean of the three RGB color channels.
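Formulas (11) and (12) reduce to a per-pixel standard deviation over the color channels; a short sketch (the function name is an assumption):

```python
import numpy as np

def color_saturation(img):
    """S_k of formulas (11)-(12): per-pixel std over the R, G, B channels.
    img: H x W x 3 array."""
    m = img.mean(axis=2, keepdims=True)      # m-bar = (R + G + B) / 3
    return np.sqrt(((img - m) ** 2).sum(axis=2) / 3.0)
```

Achromatic pixels (R = G = B) get zero saturation, while a pure red pixel [1, 0, 0] has m̄ = 1/3 and S_k = √2 / 3.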

In this step, after the phase-congruency, local-contrast and color-saturation features of each aligned frame to be processed, and of the reference frame, have been computed through the processes of formulas (5) to (12), the following step (2) is performed.

(2) Calculate the initial weight map of each aligned frame to be processed from its phase-congruency, local-contrast and color-saturation features, and calculate the initial weight map of the reference frame from the phase-congruency, local-contrast and color-saturation features of the reference frame.

The process of computing the initial weight map of an aligned frame to be processed is the same as that of computing the initial weight map of the reference frame, so the former is described here as an example.

Considering that the phase-congruency, local-contrast and color-saturation measures are complementary, the video-frame weight is estimated by combining the three image features through direct multiplication, as shown in formula (13):

W_k = PC_k × G_k × S_k ;    (13)

where W_k denotes the initial weight map of the aligned frame to be processed, PC_k its phase-congruency feature, G_k its local-contrast feature and S_k its color-saturation feature. In this way, the final fused image preserves all the important details of the original video sequence.

(3) Normalize the initial weight map of each aligned frame to be processed to obtain its weight map, and normalize the initial weight map of the reference frame to obtain the weight map of the reference frame.

In this step, the normalization of the initial weight maps of the aligned frames to be processed is the same as the normalization of the initial weight map of the reference frame; the normalization of the initial weight map of the currently aligned frame to be processed is described below as an example.

Suppose the number of video frames in the multi-exposure sequence is N. To keep the fusion result consistent, the N weight maps are normalized so that the weights at each pixel (x,y) sum to 1. The normalized initial weight map is defined as:

W̃_k(x,y) = W_k(x,y) / Σ_{k′=1}^{N} W_{k′}(x,y) ;    (14)

where (x,y) denotes the pixel position, W̃_k(x,y) denotes the normalized initial weight map, W_{k′}(x,y) denotes an initial weight map, k is the index of the current initial weight map, k′ ranges over the indices of all initial weight maps, and both k and k′ take values in [1, N].
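The per-pixel normalization of formula (14) in code; the small eps guarding pixels where all weights are zero is an added assumption, not part of the formula:

```python
import numpy as np

def normalize_weight_maps(weight_maps, eps=1e-12):
    """Scale N weight maps W_k so the weights at every pixel sum to 1 (formula (14)).
    weight_maps: list of N equally shaped 2-D arrays."""
    total = np.sum(weight_maps, axis=0) + eps    # per-pixel sum over k' = 1..N
    return [w / total for w in weight_maps]
```

After this step, summing the returned maps gives a map of ones, which is the consistency property the fusion of formula (17) relies on.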

Through step S104, the feature-based weight-estimation method integrates the three image features of phase congruency, local contrast and color saturation, accurately measures the quality of the pixels and obtains the weight maps.

Step S106: use the reference frame to optimize the weight map of each aligned frame to be processed, and fuse the optimized weight maps with the weight map of the reference frame to obtain the fused image.

In this step, optimizing the weight map of each aligned frame to be processed using the reference frame comprises:

applying a guided filter, which uses the reference frame to optimize the weight map of each aligned frame to be processed through the following formulas (15) and (16):

q_i = Σ_j W̄_{ij}(G) · p_j ;    (15)

W̄_{ij}(G) = (1/|ω|²) · Σ_{z:(i,j)∈ω_z} ( 1 + (G_i − μ_z)(G_j − μ_z) / (σ_z² + ε) ) ;    (16)

where i and j are pixel indices, G denotes the reference frame, W̄_{ij}(G) denotes the filter kernel, p_j denotes the weight map of the aligned frame to be processed, q_i denotes the optimized weight map, ω_z denotes the local window centered at z, |ω| denotes the number of pixels in ω_z, μ_z and σ_z² denote the mean and variance of the reference frame over ω_z, and ε is a regularization parameter providing a criterion for distinguishing "flat patches" from "high-variance/edge" regions; the effect of ε is similar to the range variance in a bilateral filter. G_i is the i-th pixel of the reference frame G, and G_j is the j-th pixel of the reference frame G. The filter kernel satisfies Σ_j W̄_{ij}(G) = 1.

In the above optimization process, the reference frame G and the weight map p_j of the aligned frame to be processed are both set to the weight maps computed in step S104.

The optimized weight map obtained through formulas (15) and (16) is smoother, contains no discontinuous regions, and represents the importance of the pixels more accurately.
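For very small images, the kernel of formula (16) can be evaluated literally; the sketch below builds the full kernel matrix and applies formula (15). It assumes the normalization factor 1/|ω|², enumerates only complete (2r+1)×(2r+1) windows (so the kernel rows sum to 1 only at pixels covered by exactly |ω| windows), and is O(N²); practical guided filtering is computed with box filters instead of an explicit kernel.

```python
import numpy as np

def guided_weight_optimisation(G, p, r=1, eps=0.01):
    """Evaluate q_i = sum_j Wbar_ij(G) * p_j with the explicit kernel of formula (16).

    G -- guidance image (2-D array), p -- weight map to optimise, same shape as G.
    Returns (optimised map q, kernel matrix K).  For tiny images only.
    """
    H, W = G.shape
    g, pv = G.ravel().astype(float), p.ravel().astype(float)
    n = (2 * r + 1) ** 2                     # |omega|
    K = np.zeros((H * W, H * W))
    for ci in range(r, H - r):               # centres z of the full windows omega_z
        for cj in range(r, W - r):
            idx = np.array([(ci + di) * W + (cj + dj)
                            for di in range(-r, r + 1) for dj in range(-r, r + 1)])
            d = g[idx] - g[idx].mean()       # G - mu_z inside omega_z
            K[np.ix_(idx, idx)] += 1.0 + np.outer(d, d) / (g[idx].var() + eps)
    K /= n ** 2                              # the 1 / |omega|^2 factor
    return (K @ pv).reshape(H, W), K
```

Because the deviations d sum to zero inside each window, every window contributes |ω| to a row sum, which is why the kernel rows sum to 1 at interior pixels and a constant input map is left unchanged there.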

In this step, fusing the optimized weight maps with the weight map of the reference frame to obtain the fused image comprises:

fusing the optimized weight maps and the weight map of the reference frame through the following formula to obtain the fused image:

F = Σ_{k=1}^{N} Ŵ_k f_k ;    (17)

where F denotes the resulting fused image, k denotes the index of each frame, N denotes the total number of frames, Ŵ_k denotes the optimized weight map of the k-th frame, and f_k denotes the matrix of the k-th frame.
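Formula (17) is a per-pixel weighted sum over the frames; a sketch assuming the weights have already been normalized per pixel and that the frames are single-channel for simplicity:

```python
import numpy as np

def fuse_frames(weight_maps, frames):
    """F = sum_k W_hat_k * f_k (formula (17)).
    weight_maps: N maps of shape H x W, summing to 1 at every pixel
    frames:      N frames of shape H x W."""
    return np.sum([w * f for w, f in zip(weight_maps, frames)], axis=0)
```

With normalized weights, fusing N copies of the same frame returns that frame unchanged, which is a quick sanity check on the weight maps.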

The optimized weight maps indicate which pixels in the video frames are well exposed, so that the fusion result contains all well-exposed pixels, producing a fused image with a vivid visual effect.

In the method of this embodiment of the invention, a reference frame and at least one frame to be processed are first determined in the acquired multi-exposure image sequence, and each frame to be processed is aligned to the reference frame according to the coherency sensitive hashing algorithm. Next, the weight map of each aligned frame to be processed is computed from its image features, and the weight map of the reference frame is computed from the image features of the reference frame. Finally, the reference frame is used to optimize the weight map of each aligned frame to be processed, and the optimized weight maps are fused with the weight map of the reference frame to obtain the fused image. Because the method of this embodiment aligns each frame to be processed to the reference frame according to the coherency sensitive hashing algorithm, it can fuse multi-exposure image sequences of captured scenes containing moving objects and obtain a clear fused image without blurring or ghosting.

In summary, the method of this embodiment of the invention has the following technical effects:

(1) The block-matching video-frame alignment method based on the coherency sensitive hashing algorithm can solve the blurring and ghosting of the fused image caused by object motion in the captured scene, finally yielding a complete, clear, well-lit scene video with high dynamic range.

(2) The feature-based weight-estimation method integrates the three image features of phase congruency, local contrast and color saturation, and can accurately measure the quality of the pixels to obtain the weight maps.

(3) The weight-map optimization method based on guided filtering improves the accuracy of the weight maps, ensuring that a high-quality fused image can be obtained.

Corresponding to the above multi-exposure video fusion method, an embodiment of the present invention further provides a multi-exposure video fusion apparatus. As shown in Fig. 2, the apparatus comprises: an alignment module 21, configured to determine a reference frame and at least one frame to be processed in an acquired multi-exposure image sequence and to align each frame to be processed to the reference frame according to the coherency sensitive hashing algorithm; a weight-map computation module 22, configured to compute the weight map of each aligned frame to be processed from its image features and to compute the weight map of the reference frame from the image features of the reference frame; and a fusion module 23, configured to use the reference frame to optimize the weight map of each aligned frame to be processed and to fuse the optimized weight maps with the weight map of the reference frame to obtain a fused image.

The alignment module 21 comprises: a scale-decomposition unit, configured to perform scale decomposition on each frame to be processed and on the reference frame, obtaining a subframe set corresponding to each frame to be processed and a reference subframe set corresponding to the reference frame; a blocking unit, configured to divide each reference subframe in the reference subframe set and each subframe to be processed in each subframe set into blocks; an alignment unit, configured to align each block-divided subframe to be processed in each subframe set to the corresponding block-divided reference subframe in the reference subframe set according to the coherency sensitive hashing algorithm; an optimization unit, configured to perform mismatch correction on each aligned subframe to be processed in each subframe set using a grayscale mapping function and to optimize the mismatch-correction result using the Poisson video fusion method; and a reconstruction unit, configured to reconstruct the optimized subframes to be processed in each subframe set to obtain each aligned frame to be processed.

The alignment unit comprises: a neighborhood-image-block determination subunit, configured to determine an image block to be matched in the current block-divided subframe to be processed in the current subframe set, and to determine, in the corresponding block-divided reference subframe of the reference subframe set, a plurality of neighborhood image blocks corresponding to the image block to be matched; an image-block alignment subunit, configured to calculate the matching distance between the image block to be matched and each neighborhood image block according to the coherency sensitive hashing algorithm and to align the image block to be matched to the neighborhood image block corresponding to the smallest matching distance; and a repetition subunit, configured to repeat the actions of determining the image block to be matched, determining the neighborhood image blocks, calculating the matching distances and aligning the image block to be matched, until every block-divided subframe to be processed in every subframe set is aligned to the corresponding block-divided reference subframe in the reference subframe set.

The weight-map computation module 22 comprises: a feature computation unit, configured to compute the phase-congruency, local-contrast and color-saturation features of each aligned frame to be processed and the phase-congruency, local-contrast and color-saturation features of the reference frame; an initial-weight-map computation unit, configured to compute the initial weight map of each aligned frame to be processed from its phase-congruency, local-contrast and color-saturation features, and to compute the initial weight map of the reference frame from the phase-congruency, local-contrast and color-saturation features of the reference frame; and a weight-map normalization unit, configured to normalize the initial weight map of each aligned frame to be processed to obtain its weight map, and to normalize the initial weight map of the reference frame to obtain the weight map of the reference frame.

The fusion module 23 comprises: a weight-map optimization unit, configured to apply a guided filter that uses the reference frame to optimize the weight map of each aligned frame to be processed through the following formulas:

q_i = Σ_j W̄_{ij}(G) · p_j ;

W̄_{ij}(G) = (1/|ω|²) · Σ_{z:(i,j)∈ω_z} ( 1 + (G_i − μ_z)(G_j − μ_z) / (σ_z² + ε) ) ;

where i and j are pixel indices, G denotes the reference frame, W̄_{ij}(G) denotes the filter kernel, p_j denotes the weight map of the aligned frame to be processed, q_i denotes the optimized weight map, ω_z denotes the local window centered at z, |ω| denotes the number of pixels in ω_z, μ_z and σ_z² denote the mean and variance of the reference frame over ω_z, ε denotes a regularization parameter, G_i is the i-th pixel of the reference frame G, and G_j is the j-th pixel of the reference frame G.

The fusion module 23 further comprises: an image fusion unit, configured to fuse the optimized weight maps and the weight map of the reference frame through the following formula to obtain the fused image:

F = Σ_{k=1}^{N} Ŵ_k f_k ;

其中,F表示得到的融合图像,k表示各个帧的序号,N表示帧的总数量,表示第k个帧的优化后的权重图,fk表示第k个帧的矩阵。Among them, F represents the obtained fused image, k represents the serial number of each frame, N represents the total number of frames, Denotes the optimized weight map of the kth frame, and f k denotes the matrix of the kth frame.

The device of this embodiment of the present invention first determines a reference frame and at least one frame to be processed in an acquired multi-exposure image sequence and aligns each frame to be processed to the reference frame according to a coherency-sensitive hashing algorithm; it then calculates a weight map of each aligned frame to be processed from its image features and a weight map of the reference frame from the image features of the reference frame; finally, it optimizes the weight map of each aligned frame to be processed using the reference frame and fuses each optimized weight map with the weight map of the reference frame to obtain a fused image. Because the device aligns each frame to be processed to the reference frame with the coherency-sensitive hashing algorithm, it can fuse a multi-exposure image sequence even when the captured scene contains moving objects, yielding a sharp fused image free of blur and ghosting.

The multi-exposure image fusion device provided by the embodiments of the present invention may be dedicated hardware on an apparatus, or software or firmware installed on the apparatus. Its implementation principles and technical effects are the same as those of the foregoing method embodiments; for brevity, where the device embodiments are silent, refer to the corresponding content of the method embodiments. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above correspond to the processes in the foregoing method embodiments and are not repeated here.

In the embodiments provided by the present invention, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementations. As another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.

In addition, the functional units in the embodiments provided by the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solutions of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention. The storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings. In addition, the terms "first", "second", "third", and so on are used only to distinguish descriptions and shall not be understood as indicating or implying relative importance.

Finally, it should be noted that the embodiments described above are merely specific implementations of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with the art may still, within the technical scope disclosed by the present invention, modify the technical solutions recorded in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some technical features; such modifications, changes, or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments, and shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A multi-exposure video fusion method, characterized in that the method comprises: determining a reference frame and at least one frame to be processed in an acquired multi-exposure image sequence, and aligning each of the frames to be processed to the reference frame according to a coherency-sensitive hashing algorithm; calculating a weight map of each aligned frame to be processed according to its image features, and calculating a weight map of the reference frame according to the image features of the reference frame; and optimizing the weight map of each aligned frame to be processed by using the reference frame, and fusing each optimized weight map with the weight map of the reference frame to obtain a fused image.
2. The method according to claim 1, wherein aligning each of the frames to be processed to the reference frame according to the coherency-sensitive hashing algorithm comprises: performing scale decomposition on each of the frames to be processed and on the reference frame, to obtain a set of subframes to be processed corresponding to each frame to be processed and a set of reference subframes corresponding to the reference frame; partitioning into blocks each reference subframe in the set of reference subframes and each subframe to be processed in each set of subframes to be processed; aligning, according to the coherency-sensitive hashing algorithm, each partitioned subframe to be processed in each set of subframes to be processed to the corresponding partitioned reference subframe in the set of reference subframes; performing mismatch correction on each aligned subframe to be processed by using a grayscale mapping function, and optimizing the result of the mismatch correction by using a Poisson video fusion method; and reconstructing the optimized subframes to be processed in each set of subframes to be processed, to obtain each aligned frame to be processed.
3. The method according to claim 2, wherein aligning, according to the coherency-sensitive hashing algorithm, each partitioned subframe to be processed in each set of subframes to be processed to the corresponding partitioned reference subframe in the set of reference subframes comprises: determining an image block to be matched in the current partitioned subframe to be processed of the current set of subframes to be processed, and determining, in the current partitioned reference subframe of the set of reference subframes, a plurality of neighborhood image blocks corresponding to the image block to be matched; calculating, according to the coherency-sensitive hashing algorithm, the matching distance between the image block to be matched and each of the neighborhood image blocks, and aligning the image block to be matched to the neighborhood image block with the smallest matching distance; and repeating the actions of determining an image block to be matched, determining neighborhood image blocks, calculating matching distances, and aligning the image block to be matched, until each partitioned subframe to be processed in each set of subframes to be processed has been aligned to the corresponding partitioned reference subframe in the set of reference subframes.
4. The method according to claim 1, wherein calculating the weight map of each aligned frame to be processed according to its image features, and calculating the weight map of the reference frame according to the image features of the reference frame, comprises: calculating the phase congruency feature, local contrast feature, and color saturation feature of each aligned frame to be processed, and calculating the phase congruency feature, local contrast feature, and color saturation feature of the reference frame; calculating an initial weight map of each aligned frame to be processed from its phase congruency, local contrast, and color saturation features, and calculating an initial weight map of the reference frame from the phase congruency, local contrast, and color saturation features of the reference frame; and normalizing the initial weight map of each aligned frame to be processed to obtain the weight map of each aligned frame to be processed, and normalizing the initial weight map of the reference frame to obtain the weight map of the reference frame.
5. The method according to claim 1, wherein optimizing the weight map of each aligned frame to be processed by using the reference frame comprises: optimizing the weight map of each aligned frame to be processed with a guided filter, using the reference frame, through the following formulas:
$$q_i = \sum_j \bar{W}_{ij}(G)\, p_j;$$
$$\bar{W}_{ij}(G) = \frac{1}{|\omega|^2} \sum_{z:(i,j)\in\omega_z} \left(1 + \frac{(G_i - \mu_z)(G_j - \mu_z)}{\sigma_z^2 + \epsilon}\right);$$
where $i$ and $j$ are pixel indices; $G$ is the reference frame; $\bar{W}_{ij}(G)$ is the filter kernel; $p_j$ is the weight map of the aligned frame to be processed; $q_i$ is the optimized weight map; $\omega_z$ is a local window centered at pixel $z$; $|\omega|$ is the number of pixels in $\omega_z$; $\mu_z$ and $\sigma_z^2$ are the mean and variance of the reference frame over $\omega_z$; $\epsilon$ is a regularization parameter; and $G_i$ and $G_j$ are the $i$-th and $j$-th pixels of the reference frame $G$.
6. The method according to claim 1, wherein fusing each optimized weight map with the weight map of the reference frame to obtain a fused image comprises: fusing each optimized weight map with the weight map of the reference frame through the following formula to obtain the fused image:
$$F = \sum_{k=1}^{N} \hat{W}_k f_k;$$
where $F$ is the resulting fused image, $k$ is the frame index, $N$ is the total number of frames, $\hat{W}_k$ is the optimized weight map of the $k$-th frame, and $f_k$ is the matrix of the $k$-th frame.
7. A multi-exposure video fusion device, characterized in that the device comprises: an alignment module, configured to determine a reference frame and at least one frame to be processed in an acquired multi-exposure image sequence, and to align each of the frames to be processed to the reference frame according to a coherency-sensitive hashing algorithm; a weight map calculation module, configured to calculate a weight map of each aligned frame to be processed according to its image features, and to calculate a weight map of the reference frame according to the image features of the reference frame; and a fusion module, configured to optimize the weight map of each aligned frame to be processed by using the reference frame, and to fuse each optimized weight map with the weight map of the reference frame to obtain a fused image.
8. The device according to claim 7, wherein the alignment module comprises: a scale decomposition unit, configured to perform scale decomposition on each of the frames to be processed and on the reference frame, to obtain a set of subframes to be processed corresponding to each frame to be processed and a set of reference subframes corresponding to the reference frame; a blocking unit, configured to partition into blocks each reference subframe in the set of reference subframes and each subframe to be processed in each set of subframes to be processed; an alignment unit, configured to align, according to the coherency-sensitive hashing algorithm, each partitioned subframe to be processed in each set of subframes to be processed to the corresponding partitioned reference subframe in the set of reference subframes; an optimization unit, configured to perform mismatch correction on each aligned subframe to be processed by using a grayscale mapping function, and to optimize the result of the mismatch correction by using a Poisson video fusion method; and a reconstruction unit, configured to reconstruct the optimized subframes to be processed in each set of subframes to be processed, to obtain each aligned frame to be processed.
9. The device according to claim 8, wherein the alignment unit comprises: a neighborhood image block determination subunit, configured to determine an image block to be matched in the current partitioned subframe to be processed of the current set of subframes to be processed, and to determine, in the current partitioned reference subframe of the set of reference subframes, a plurality of neighborhood image blocks corresponding to the image block to be matched; an image block alignment subunit, configured to calculate, according to the coherency-sensitive hashing algorithm, the matching distance between the image block to be matched and each of the neighborhood image blocks, and to align the image block to be matched to the neighborhood image block with the smallest matching distance; and a repeating subunit, configured to repeat the actions of determining an image block to be matched, determining neighborhood image blocks, calculating matching distances, and aligning the image block to be matched, until each partitioned subframe to be processed in each set of subframes to be processed has been aligned to the corresponding partitioned reference subframe in the set of reference subframes.
10. The device according to claim 7, wherein the weight map calculation module comprises: a feature calculation unit, configured to calculate the phase congruency feature, local contrast feature, and color saturation feature of each aligned frame to be processed, and to calculate the phase congruency feature, local contrast feature, and color saturation feature of the reference frame; an initial weight map calculation unit, configured to calculate an initial weight map of each aligned frame to be processed from its phase congruency, local contrast, and color saturation features, and to calculate an initial weight map of the reference frame from the phase congruency, local contrast, and color saturation features of the reference frame; and a weight map normalization unit, configured to normalize the initial weight map of each aligned frame to be processed to obtain the weight map of each aligned frame to be processed, and to normalize the initial weight map of the reference frame to obtain the weight map of the reference frame.
CN201610587415.2A 2016-07-22 2016-07-22 Multi-exposure video fusion method and device Pending CN106251365A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610587415.2A CN106251365A (en) Multi-exposure video fusion method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610587415.2A CN106251365A (en) Multi-exposure video fusion method and device

Publications (1)

Publication Number Publication Date
CN106251365A true CN106251365A (en) 2016-12-21

Family

ID=57603310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610587415.2A Pending Multi-exposure video fusion method and device

Country Status (1)

Country Link
CN (1) CN106251365A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133617A (en) * 2017-04-21 2017-09-05 上海交通大学 It is a kind of based on calculate relevance imaging without imageable target Verification System and authentication method
CN107945148A (en) * 2017-12-15 2018-04-20 电子科技大学 A kind of more exposure image interfusion methods based on MRF regional choices
CN108288253A (en) * 2018-01-08 2018-07-17 厦门美图之家科技有限公司 HDR image generation method and device
CN108492328A (en) * 2018-03-23 2018-09-04 云南大学 Video interframe target matching method, device and realization device
CN108596855A (en) * 2018-04-28 2018-09-28 国信优易数据有限公司 A kind of video image quality Enhancement Method, device and video picture quality enhancement method
CN108876740A (en) * 2018-06-21 2018-11-23 重庆邮电大学 A kind of more exposure registration methods based on ghost removal
CN109688322A (en) * 2018-11-26 2019-04-26 维沃移动通信(杭州)有限公司 A kind of method, device and mobile terminal generating high dynamic range images
CN109978774A (en) * 2017-12-27 2019-07-05 展讯通信(上海)有限公司 Multiframe continuously waits the denoising fusion method and device of exposure images
CN110310251A (en) * 2019-07-03 2019-10-08 北京字节跳动网络技术有限公司 Image processing method and device
CN110545428A (en) * 2018-05-28 2019-12-06 深信服科技股份有限公司 A motion estimation method and apparatus, server and computer-readable storage medium
CN112418090A (en) * 2020-11-23 2021-02-26 中国科学院西安光学精密机械研究所 A real-time detection method for infrared weak and small targets under sky background
CN112468737A (en) * 2020-11-25 2021-03-09 上海摩象网络科技有限公司 Method and device for processing exposure weight matrix of automatic exposure area
CN112634187A (en) * 2021-01-05 2021-04-09 安徽大学 Wide dynamic fusion algorithm based on multiple weight mapping
CN114511603A (en) * 2022-02-17 2022-05-17 展讯通信(上海)有限公司 Image alignment method, apparatus, device and storage medium
CN115272428A (en) * 2022-08-24 2022-11-01 声呐天空资讯顾问有限公司 Image alignment method and device, computer equipment and storage medium
WO2023016044A1 (en) * 2021-08-12 2023-02-16 荣耀终端有限公司 Video processing method and apparatus, electronic device, and storage medium
CN120075372A (en) * 2025-04-27 2025-05-30 中国科学院沈阳自动化研究所 Image acquisition and fusion method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247036A (en) * 2012-02-10 2013-08-14 株式会社理光 Multiple-exposure image fusion method and device
CN104616273A (en) * 2015-01-26 2015-05-13 电子科技大学 Multi-exposure image fusion method based on Laplacian pyramid decomposition
CN104835130A (en) * 2015-04-17 2015-08-12 北京联合大学 Multi-exposure image fusion method
CN104881854A (en) * 2015-05-20 2015-09-02 天津大学 High-dynamic-range image fusion method based on gradient and brightness information

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247036A (en) * 2012-02-10 2013-08-14 株式会社理光 Multiple-exposure image fusion method and device
CN104616273A (en) * 2015-01-26 2015-05-13 电子科技大学 Multi-exposure image fusion method based on Laplacian pyramid decomposition
CN104835130A (en) * 2015-04-17 2015-08-12 北京联合大学 Multi-exposure image fusion method
CN104881854A (en) * 2015-05-20 2015-09-02 天津大学 High-dynamic-range image fusion method based on gradient and brightness information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIANG XU ET AL.: "Feature-based multiexposure image-sequence fusion with guided filter and image alignment", Journal of Electronic Imaging *
XU, Liang: "Research on cross-scale analysis and fusion of multi-sensor motion images", China Doctoral Dissertations Full-text Database *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133617B (en) * 2017-04-21 2021-09-03 上海交通大学 Imaging-free target authentication system and method based on calculation correlation imaging
CN107133617A (en) * 2017-04-21 2017-09-05 上海交通大学 It is a kind of based on calculate relevance imaging without imageable target Verification System and authentication method
CN107945148A (en) * 2017-12-15 2018-04-20 电子科技大学 A kind of more exposure image interfusion methods based on MRF regional choices
CN107945148B (en) * 2017-12-15 2021-06-01 电子科技大学 A Multi-Exposure Image Fusion Method Based on MRF Region Selection
CN109978774A (en) * 2017-12-27 2019-07-05 展讯通信(上海)有限公司 Multiframe continuously waits the denoising fusion method and device of exposure images
CN109978774B (en) * 2017-12-27 2021-06-18 展讯通信(上海)有限公司 Denoising fusion method and device for multi-frame continuous equal exposure images
CN108288253A (en) * 2018-01-08 2018-07-17 厦门美图之家科技有限公司 HDR image generation method and device
CN108492328B (en) * 2018-03-23 2021-02-26 云南大学 Video inter-frame target matching method and device and implementation device
CN108492328A (en) * 2018-03-23 2018-09-04 云南大学 Video interframe target matching method, device and realization device
CN108596855A (en) * 2018-04-28 2018-09-28 国信优易数据有限公司 A kind of video image quality Enhancement Method, device and video picture quality enhancement method
CN110545428A (en) * 2018-05-28 2019-12-06 深信服科技股份有限公司 A motion estimation method and apparatus, server and computer-readable storage medium
CN110545428B (en) * 2018-05-28 2024-02-23 深信服科技股份有限公司 Motion estimation method and device, server and computer readable storage medium
CN108876740A (en) * 2018-06-21 2018-11-23 重庆邮电大学 A kind of more exposure registration methods based on ghost removal
CN108876740B (en) * 2018-06-21 2022-04-12 重庆邮电大学 Multi-exposure registration method based on ghost removal
CN109688322A (en) * 2018-11-26 2019-04-26 维沃移动通信(杭州)有限公司 A kind of method, device and mobile terminal generating high dynamic range images
CN109688322B (en) * 2018-11-26 2021-04-02 维沃移动通信(杭州)有限公司 Method and device for generating high dynamic range image and mobile terminal
CN110310251A (en) * 2019-07-03 2019-10-08 北京字节跳动网络技术有限公司 Image processing method and device
CN110310251B (en) * 2019-07-03 2021-10-29 北京字节跳动网络技术有限公司 Image processing method and device
CN112418090A (en) * 2020-11-23 2021-02-26 中国科学院西安光学精密机械研究所 A real-time detection method for infrared weak and small targets under sky background
CN112468737A (en) * 2020-11-25 2021-03-09 上海摩象网络科技有限公司 Method and device for processing exposure weight matrix of automatic exposure area
CN112468737B (en) * 2020-11-25 2022-04-29 上海摩象网络科技有限公司 Method and device for processing exposure weight matrix of automatic exposure area
CN112634187A (en) * 2021-01-05 2021-04-09 安徽大学 Wide dynamic fusion algorithm based on multiple weight mapping
CN112634187B (en) * 2021-01-05 2022-11-18 安徽大学 Wide dynamic fusion algorithm based on multiple weight mapping
WO2023016044A1 (en) * 2021-08-12 2023-02-16 荣耀终端有限公司 Video processing method and apparatus, electronic device, and storage medium
US12328507B2 (en) 2021-08-12 2025-06-10 Honor Device Co., Ltd. Video processing method and apparatus, electronic device, and storage medium
CN114511603A (en) * 2022-02-17 2022-05-17 展讯通信(上海)有限公司 Image alignment method, apparatus, device and storage medium
CN115272428A (en) * 2022-08-24 2022-11-01 声呐天空资讯顾问有限公司 Image alignment method and device, computer equipment and storage medium
CN120075372A (en) * 2025-04-27 2025-05-30 中国科学院沈阳自动化研究所 Image acquisition and fusion method and device

Similar Documents

Publication Publication Date Title
CN106251365A (en) Multi-exposure video fusion method and device
US11610082B2 (en) Method and apparatus for training neural network model used for image processing, and storage medium
CN112884682B (en) A stereoscopic image color correction method and system based on matching and fusion
US10230947B2 (en) Panoramic camera systems
EP3295426B1 (en) Edge-aware bilateral image processing
EP3218870B1 (en) Parallax tolerant video stitching with spatial-temporal localized warping and seam finding
US10521919B2 (en) Information processing device and information processing method for applying an optimization model
JP2018101408A (en) System and method for image processing
CN101512601A (en) Method for determining depth map from image and device for determining depth map
US12073534B2 (en) Image restoration method and device
US9129146B2 (en) Method of transforming stereoscopic image and recording medium storing the same
CN108389224A (en) Image processing method and device, electronic equipment and storage medium
CN105741265A (en) Depth image processing method and depth image processing device
CN117058606A (en) A method for detecting contraband in X-ray images
CN108090877A (en) A kind of RGB-D camera depth image repair methods based on image sequence
CN101945299B (en) Camera-equipment-array based dynamic scene depth restoring method
Singh et al. Weighted least squares based detail enhanced exposure fusion
Pan et al. Depth map completion by jointly exploiting blurry color images and sparse depth maps
JP4709794B2 (en) Motion estimation apparatus and method
WO2024096339A1 (en) Semi-global neural image alignment
US20140147056A1 (en) Depth image noise removal apparatus and method based on camera pose
CN110060208B (en) Method for improving reconstruction performance of super-resolution algorithm
Ali et al. Robust regularization for single image dehazing
CN110335228A (en) A kind of the determination method, apparatus and system of image parallactic
CN107256562A (en) Image defogging method and device based on binocular vision system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161221

RJ01 Rejection of invention patent application after publication