
CN113222869B - Image processing method - Google Patents

Image processing method

Info

Publication number
CN113222869B
CN113222869B
Authority
CN
China
Prior art keywords
image
frame
long
short
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110491854.4A
Other languages
Chinese (zh)
Other versions
CN113222869A (en)
Inventor
叶维健
汪丹丹
刘刚
曾峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110491854.4A priority Critical patent/CN113222869B/en
Publication of CN113222869A publication Critical patent/CN113222869A/en
Application granted granted Critical
Publication of CN113222869B publication Critical patent/CN113222869B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/77: Retouching; Inpainting; Scratch removal
    • G06T 5/90: Dynamic range modification of images or parts thereof
    • G06T 5/92: Dynamic range modification of images or parts thereof based on global image properties
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20172: Image enhancement details
    • G06T 2207/20208: High dynamic range [HDR] image processing
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing method, which comprises the following steps: performing contrast adjustment on the long-frame correction image based on the long-frame coarse-filtered image to obtain a first long-frame image to be fused; performing contrast adjustment on the long-frame correction image based on the long-frame fine-filtered image to obtain a second long-frame image to be fused; performing contrast adjustment on the short-frame correction image based on the short-frame coarse-filtered image to obtain a first short-frame image to be fused; performing contrast adjustment on the short-frame correction image based on the short-frame fine-filtered image to obtain a second short-frame image to be fused; carrying out weighted fusion on the first long-frame image to be fused and the first short-frame image to be fused to obtain a first fused image; carrying out weighted fusion on the second long-frame image to be fused and the second short-frame image to be fused to obtain a second fused image; and generating a target image based on the first fused image and the second fused image. Through the technical scheme of the application, the useful detail information of the overexposed areas and the over-dark areas is fully utilized, and the visual effect of the image is good.

Description

Image processing method
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method.
Background
When front-end equipment needs to acquire an image with a high dynamic range, its limited dynamic range means that a single image cannot render both bright areas and dark areas well; for example, the dark areas of the image suffer from underexposure while the bright areas suffer from overexposure. To solve this problem, multiple images of the same scene can be acquired with different exposure amounts and fused into one high-dynamic-range image, which retains the color, detail and other information of both the bright areas and the dark areas. Compared with an ordinary image, a high-dynamic-range image provides a wider dynamic range and more image details, giving users a better visual experience, and is therefore widely applied in fields such as video processing, geographic information systems and medical imaging.
Although a high-dynamic-range image can retain useful information such as the colors and details of bright areas and dark areas, it is obtained by fusing multiple images; if the fusion cannot make full use of the bright-area and dark-area information of each image, the useful detail information of the overexposed areas and over-dark areas may still be lost, and the fused image may still suffer from problems such as a poor visual effect.
Disclosure of Invention
The application provides an image processing method, which comprises the following steps: performing brightness adjustment on a brightness channel of the long-frame image based on the obtained long-frame fine-filtering image to obtain a long-frame correction image; performing brightness adjustment on a brightness channel of the short-frame image based on the acquired short-frame fine-filtering image to obtain a short-frame correction image; the long frame image brightness channel and the short frame image brightness channel are brightness channels aiming at the same target scene, and the exposure time of the long frame image brightness channel is longer than the exposure time of the short frame image brightness channel;
performing contrast adjustment on the long-frame correction image based on the obtained long-frame coarse-filter image to obtain a first long-frame image to be fused; performing contrast adjustment on the long-frame correction image based on the long-frame fine-filtering image to obtain a second long-frame image to be fused; performing contrast adjustment on the short frame correction image based on the obtained short frame coarse filter image to obtain a first short frame image to be fused; performing contrast adjustment on the short frame correction image based on the short frame fine filter image to obtain a second short frame image to be fused;
and carrying out weighted fusion on the first long-frame to-be-fused image and the first short-frame to-be-fused image to obtain a first fused image, carrying out weighted fusion on the second long-frame to-be-fused image and the second short-frame to-be-fused image to obtain a second fused image, and generating a target image based on the first fused image and the second fused image.
The application provides an image processing method, which comprises the following steps:
aiming at the same target scene, acquiring a short frame image with a first exposure time length and a long frame image with a second exposure time length, wherein the second exposure time length is longer than the first exposure time length; based on a preset first weight mapping table: adjusting the brightness of the long frame image to generate a first long frame image, and adjusting the brightness of the short frame image to generate a first short frame image; based on a preset second weight mapping table: adjusting the contrast of the first long frame image to generate a second long frame image, and adjusting the contrast of the first short frame image to generate a second short frame image; and generating a first new image based on the second long frame image and the second short frame image; wherein the first weight mapping table and the second weight mapping table satisfy the following relationship: the weight corresponding to a pixel point in the preset first weight mapping table is larger than the weight corresponding to the same pixel point in the preset second weight mapping table.
According to the technical scheme, in the embodiment of the application, brightness adjustment and contrast adjustment can be performed on the long frame image brightness channel to obtain the first long-frame image to be fused and the second long-frame image to be fused, and brightness adjustment and contrast adjustment can be performed on the short frame image brightness channel to obtain the first short-frame image to be fused and the second short-frame image to be fused. Weighted fusion is performed on the first long-frame image to be fused and the first short-frame image to be fused, weighted fusion is performed on the second long-frame image to be fused and the second short-frame image to be fused, and the target image is generated based on the fused images. In this way, the bright-area and dark-area information of the long frame image brightness channel and the short frame image brightness channel can be fully utilized, the useful detail information of the overexposed areas and over-dark areas can be fully utilized, and the loss of useful detail information is avoided, so that the fused target image has a better visual effect, flaws in the target image are avoided, the image quality is higher, the picture contrast is better, and the color performance is normal. On the premise of a small brightness loss in the well-exposed areas of the long frame image brightness channel, the overexposed areas in the scene (such as light sources and license plates) are fused in through the short frame image brightness channel, so that their brightness is reduced to a suitable level and the brightness of the target image is more appropriate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and a person having ordinary skill in the art may also obtain other drawings according to these drawings.
FIG. 1 is a flow diagram of an image processing method in one embodiment of the present application;
FIG. 2 is a flow diagram of an image processing method in one embodiment of the present application;
fig. 3 is a flow chart of an image processing method in another embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations including one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Furthermore, depending on the context, the word "if" as used may be interpreted as "when" or "upon" or "in response to determining".
The embodiment of the application provides an image processing method, which can be applied to front-end equipment (such as a network video camera, an analog video camera, a camera and the like) and back-end equipment (such as a server, a management device, a storage device and the like). If the method is applied to the front-end equipment, the front-end equipment acquires a long frame image and a short frame image aiming at the same target scene, and performs image processing by adopting the scheme of the embodiment of the application based on the long frame image and the short frame image. If the method is applied to the back-end equipment, the front-end equipment acquires a long frame image and a short frame image aiming at the same target scene, the long frame image and the short frame image are sent to the back-end equipment, and the back-end equipment performs image processing by adopting the scheme of the embodiment of the application based on the long frame image and the short frame image.
Illustratively, the long frame image and the short frame image are images for the same target scene, and the exposure time of the long frame image is longer than the exposure time of the short frame image. For example, the exposure start time of the long frame image may be the same as the exposure start time of the short frame image, but the exposure end time of the long frame image may be later than the exposure end time of the short frame image. For another example, the exposure end time of the long frame image may be the same as the exposure end time of the short frame image, but the exposure start time of the long frame image may be earlier than the exposure start time of the short frame image.
Illustratively, the long frame image may include a luminance channel (i.e., a Y channel) and a chrominance channel (i.e., a U channel and a V channel), the luminance channel in the long frame image being referred to as a long frame image luminance channel, and the chrominance channel in the long frame image being referred to as a long frame image chrominance channel. The short-frame image may include a luminance channel (i.e., Y channel) and a chrominance channel (i.e., U channel and V channel), with the luminance channel in the short-frame image being referred to as the short-frame image luminance channel and the chrominance channel in the short-frame image being referred to as the short-frame image chrominance channel.
Obviously, since the exposure time of the long frame image is longer than the exposure time of the short frame image, the exposure time of the long frame image luminance channel is longer than the exposure time of the short frame image luminance channel, and the exposure time of the long frame image chrominance channel is longer than the exposure time of the short frame image chrominance channel.
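For concreteness, the channel split described above corresponds to a standard YUV conversion. A minimal sketch using OpenCV follows; the library choice and file names are assumptions for illustration, not part of the patent:

```python
import cv2

long_bgr = cv2.imread("long_frame.png")    # hypothetical input files
short_bgr = cv2.imread("short_frame.png")

long_yuv = cv2.cvtColor(long_bgr, cv2.COLOR_BGR2YUV)
short_yuv = cv2.cvtColor(short_bgr, cv2.COLOR_BGR2YUV)

long_y, long_u, long_v = cv2.split(long_yuv)     # Y = luminance, U/V = chrominance
short_y, short_u, short_v = cv2.split(short_yuv)
```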
The brightness adjustment and the contrast adjustment can be carried out on the brightness channels of the long frame images, the brightness adjustment and the contrast adjustment can be carried out on the brightness channels of the short frame images, and the weighted fusion is carried out on the images after adjustment, so that the information of the bright area and the dark area of the brightness channels of the long frame images and the brightness channels of the short frame images can be fully utilized, the useful detail information of the overexposure area and the overexposure area can be fully utilized, and the loss of the useful detail information can be avoided.
The technical solutions of the embodiments of the present application are described below with reference to specific embodiments.
Referring to fig. 1, a flowchart of an image processing method is shown, and the method may include:
step 101, performing brightness adjustment on a brightness channel of a long-frame image based on the acquired long-frame fine-filtered image to obtain a long-frame correction image; performing brightness adjustment on a brightness channel of the short-frame image based on the acquired short-frame fine-filtering image to obtain a short-frame correction image; the long frame image brightness channel and the short frame image brightness channel are brightness channels aiming at the same target scene, and the exposure time of the long frame image brightness channel is longer than the exposure time of the short frame image brightness channel.
Step 102, performing contrast adjustment on the long frame correction image based on the obtained long frame coarse filter image to obtain a first long frame image to be fused; performing contrast adjustment on the long-frame correction image based on the long-frame fine-filtering image to obtain a second long-frame image to be fused; performing contrast adjustment on the short frame correction image based on the obtained short frame coarse filter image to obtain a first short frame image to be fused; and carrying out contrast adjustment on the short frame correction image based on the short frame fine filtering image to obtain a second short frame image to be fused.
Before step 101, a long frame image brightness channel and a short frame image brightness channel are acquired, and a long frame fine-filtered image, a long frame coarse-filtered image, a short frame fine-filtered image and a short frame coarse-filtered image are acquired. Step 101 and step 102 are executed based on these images to obtain the following images: a long frame correction image (the long frame image brightness channel after brightness adjustment), a short frame correction image (the short frame image brightness channel after brightness adjustment), a first long-frame image to be fused (the long frame image brightness channel after brightness adjustment and contrast adjustment based on the coarse-filtered image), a second long-frame image to be fused (the long frame image brightness channel after brightness adjustment and contrast adjustment based on the fine-filtered image), a first short-frame image to be fused (the short frame image brightness channel after brightness adjustment and contrast adjustment based on the coarse-filtered image) and a second short-frame image to be fused (the short frame image brightness channel after brightness adjustment and contrast adjustment based on the fine-filtered image). A minimal end-to-end sketch of this flow appears below.
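In the sketch, Gaussian blurs of two radii stand in for the coarse/fine filters, a detail-boost stands in for the contrast adjustment, and linear membership weights are used; all three stand-ins are assumptions, with the patent's own versions detailed in the steps that follow:

```python
import numpy as np
import cv2

def fuse_hdr(long_y, short_y):
    """Sketch of steps 101-103 on float luminance channels in [0, 255]."""
    long_y = long_y.astype(np.float32)
    short_y = short_y.astype(np.float32)

    # Stand-ins for the coarse/fine filtered images (assumption).
    lf_coarse, lf_fine = (cv2.GaussianBlur(long_y, (0, 0), s) for s in (15, 3))
    sf_coarse, sf_fine = (cv2.GaussianBlur(short_y, (0, 0), s) for s in (15, 3))

    # Step 101: brightness adjustment driven by the fine-filtered mean.
    # The power-law gamma form and the second threshold of 64 are assumptions.
    long_corr = 255 * (long_y / 255) ** 0.8 if lf_fine.mean() > 128 else long_y
    short_corr = 255 * (short_y / 255) ** 1.2 if sf_fine.mean() < 64 else short_y

    # Step 102: contrast adjustment as detail added back against a filtered
    # base layer (assumption; the patent only names "contrast adjustment").
    boost = lambda corr, base: np.clip(2.0 * corr - base, 0, 255)
    l1, l2 = boost(long_corr, lf_coarse), boost(long_corr, lf_fine)
    s1, s2 = boost(short_corr, sf_coarse), boost(short_corr, sf_fine)

    # Step 103: weighted fusion; the long-frame weight falls with luminance,
    # the short-frame weight rises with it (see the membership functions below).
    w_long, w_short = 1.0 - long_y / 255.0, short_y / 255.0
    norm = w_long + w_short + 1e-6
    fused1 = (w_long * l1 + w_short * s1) / norm
    fused2 = (w_long * l2 + w_short * s2) / norm
    return np.clip(0.5 * (fused1 + fused2), 0, 255)  # final combine (assumption)
```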
In step 101, the brightness adjustment process for the brightness channel of the long frame image includes: if the average value of the brightness values of all the pixel points in the long-frame fine-filtered image is larger than a first threshold value, gamma correction is carried out on the brightness values of the brightness channels of the long-frame image by adopting a first gamma coefficient, so that a long-frame corrected image is obtained; otherwise, the brightness value of the brightness channel of the long frame image is kept unchanged, and the long frame correction image is obtained. The brightness adjustment process for the short frame image brightness channel comprises the following steps: if the average value of the brightness values of all the pixel points in the short-frame fine-filtered image is smaller than a second threshold value, gamma correction is carried out on the brightness values of the brightness channels of the short-frame image by adopting a second gamma coefficient, so that a short-frame corrected image is obtained; otherwise, the brightness value of the brightness channel of the short frame image is kept unchanged, and the short frame correction image is obtained.
For example, the first gamma coefficient may be less than 1 and the second gamma coefficient may be greater than 1.
Before step 101, coarse filtering processing can be performed on the long frame image brightness channel to obtain a long frame coarse filtering image; the long-frame image brightness channel can be subjected to fine filtering treatment to obtain a long-frame fine filtering image; coarse filtering treatment can be carried out on the brightness channel of the short frame image to obtain a short frame coarse filtering image; the short frame image brightness channel can be subjected to fine filtering treatment to obtain a short frame fine filtering image.
The coarse filtering process may include: inquiring a configured coarse filtering gradient mapping curve table through the gradient value of each pixel point in the long-frame image brightness channel or the short-frame image brightness channel to obtain a first weight value of the pixel point and a second weight value of surrounding pixel points of the pixel point, wherein the coarse filtering gradient mapping curve table is used for representing the mapping relation between the gradient value and the weight value; determining a target brightness value of the pixel point based on the brightness value of the pixel point, the first weight value, the brightness values of surrounding pixel points of the pixel point and the second weight value; a long-frame coarse-filtered image or a short-frame coarse-filtered image is determined based on the target luminance value of each pixel.
The fine filtering process may include: inquiring a configured fine filtering gradient mapping curve table through the gradient value of each pixel point in the long-frame image brightness channel or the short-frame image brightness channel to obtain a third weight value of the pixel point and a fourth weight value of surrounding pixel points of the pixel point, wherein the fine filtering gradient mapping curve table is used for representing the mapping relation between the gradient value and the weight value; determining a target brightness value of the pixel point based on the brightness value of the pixel point, the third weight value, the brightness values of surrounding pixel points of the pixel point and the fourth weight value; a long-frame fine-filtered image or a short-frame fine-filtered image is determined based on the target luminance value of each pixel.
For example, for the same gradient value, the weight value corresponding to the gradient value in the fine-filtered gradient map curve table is greater than the weight value corresponding to the gradient value in the coarse-filtered gradient map curve table.
And 103, carrying out weighted fusion on the first long-frame to-be-fused image and the first short-frame to-be-fused image to obtain a first fused image, carrying out weighted fusion on the second long-frame to-be-fused image and the second short-frame to-be-fused image to obtain a second fused image, and generating a target image based on the first fused image and the second fused image.
For example, the first fused image may be obtained by performing weighted fusion based on the first long-frame to-be-fused image, the acquired long-frame weight image, the first short-frame to-be-fused image, and the acquired short-frame weight image; and performing weighted fusion on the second long-frame image to be fused, the long-frame weight image, the second short-frame image to be fused and the short-frame weight image to obtain the second fused image.
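As an illustration of this fusion step, here is a minimal sketch; the patent states only that the fusion is weighted by the long/short frame weight images, so normalizing by the weight sum is an assumption:

```python
import numpy as np

def weighted_fuse(img_long, img_short, w_long, w_short, eps=1e-6):
    """Per-pixel weighted fusion of a long-frame and a short-frame
    to-be-fused image using the long/short frame weight images."""
    return (w_long * img_long + w_short * img_short) / (w_long + w_short + eps)

# First fused image from the coarse-filter branch, second from the fine-filter branch:
# fused1 = weighted_fuse(long_fused1, short_fused1, w_long_img, w_short_img)
# fused2 = weighted_fuse(long_fused2, short_fused2, w_long_img, w_short_img)
```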
In one possible implementation, before the target image is generated based on the first fused image and the second fused image, the chroma fusion channel may be further obtained by performing weighted fusion based on the long frame image chroma channel, the long frame weight image, the short frame image chroma channel, and the short frame weight image. For example, the long frame image chrominance channel and the short frame image chrominance channel may be chrominance channels for the same target scene.
On this basis, generating a target image based on the first fused image and the second fused image may include, but is not limited to: and generating a brightness fusion channel based on the first fusion image and the second fusion image, and generating the target image based on the brightness fusion channel and the chromaticity fusion channel.
In the above embodiment, the long frame weight image is obtained by: inquiring the configured long-frame membership function through the brightness value of each pixel point in the long-frame image brightness channel to obtain a long-frame weight value corresponding to the pixel point; acquiring a long frame weight image based on a long frame weight value corresponding to each pixel point; the long frame membership function represents the following relationship: if the brightness value of the pixel point is larger, the long frame weight value of the pixel point is smaller.
In the above embodiment, the short frame weight image is obtained by: inquiring the configured short frame membership function through the brightness value of each pixel point in the short frame image brightness channel to obtain a short frame weight value corresponding to the pixel point; acquiring a short frame weight image based on a short frame weight value corresponding to each pixel point; the short frame membership function represents the following relationship: if the brightness value of the pixel point is larger, the short frame weight value of the pixel point is larger.
In a possible implementation manner, after step 103, if the target image includes a license plate region sub-image and the short frame image includes a license plate region sub-image, a license plate migration sub-image is generated based on the license plate region sub-image in the short frame image, and the license plate region sub-image in the target image is replaced by the license plate migration sub-image. The short-frame image may include, for example, a short-frame image luminance channel and a short-frame image chrominance channel.
Illustratively, generating license plate migration sub-images based on license plate region sub-images in the short frame image may include, but is not limited to: determining a first average value of all brightness values in a brightness channel of the short frame image and a second average value of all brightness values in a brightness channel of the target image, and determining a ratio of the second average value to the first average value; and generating a license plate migration sub-image based on the license plate region sub-image in the short frame image and the ratio.
Illustratively, after step 103, a surrounding migration sub-image may also be generated based on the first surrounding area sub-image of the license plate area sub-image in the target image and the second surrounding area sub-image of the license plate area sub-image in the short frame image; the first surrounding area sub-image in the target image is replaced by the surrounding migration sub-image.
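A minimal sketch of the license-plate migration just described follows; the rectangle coordinates and the direct multiplication of the sub-image by the ratio are assumptions for illustration:

```python
import numpy as np

def migrate_plate(target_y, short_y, plate_rect):
    """Replace the license-plate region of the target image with the short-frame
    plate sub-image scaled by ratio = avg(target luminance) / avg(short-frame
    luminance), per the ratio described above."""
    x, y, w, h = plate_rect                      # hypothetical plate bounding box
    ratio = target_y.mean() / (short_y.mean() + 1e-6)
    plate = short_y[y:y + h, x:x + w].astype(np.float32) * ratio
    out = target_y.astype(np.float32).copy()
    out[y:y + h, x:x + w] = np.clip(plate, 0, 255)
    return out
```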
According to the technical scheme, in the embodiment of the application, brightness adjustment and contrast adjustment are carried out on the long frame image brightness channel to obtain the first long-frame image to be fused and the second long-frame image to be fused, and brightness adjustment and contrast adjustment are carried out on the short frame image brightness channel to obtain the first short-frame image to be fused and the second short-frame image to be fused. Weighted fusion is carried out on the first long-frame image to be fused and the first short-frame image to be fused, weighted fusion is carried out on the second long-frame image to be fused and the second short-frame image to be fused, and the target image is generated based on the fused images. In this way, the bright-area and dark-area information of the long frame image brightness channel and the short frame image brightness channel can be fully utilized, the useful detail information of the overexposed areas and over-dark areas can be fully utilized, and the loss of useful detail information is avoided, so that the fused target image has a better visual effect, flaws in the target image are avoided, the image quality is higher, the picture contrast is better, and the color performance is normal. On the premise of a small brightness loss in the well-exposed areas of the long frame image brightness channel, the overexposed areas in the scene (such as light sources and license plates) are fused in through the short frame image brightness channel, so that their brightness is reduced to a suitable level and the brightness of the target image is more appropriate.
The above technical solutions of the embodiments of the present application are described below with reference to specific application scenarios.
An image processing method is provided in an embodiment of the present application, and referring to fig. 2, the method may include:
step 201, a long frame image and a short frame image are acquired, wherein the long frame image comprises a long frame image brightness channel and a long frame image chromaticity channel, and the short frame image comprises a short frame image brightness channel and a short frame image chromaticity channel.
The long frame image and the short frame image are images of the same target scene: the long frame image brightness channel and the short frame image brightness channel are brightness channels for the same target scene, and the long frame image chroma channel and the short frame image chroma channel are chroma channels for the same target scene. That is, two frames of the same target scene are acquired with different exposure time lengths; the image with the longer exposure time is recorded as the long frame image, the image with the shorter exposure time is recorded as the short frame image, and the exposure time length of the long frame image is longer than that of the short frame image. Because the exposure time of the long frame image is longer than that of the short frame image, the exposure time of the long frame image brightness channel (i.e. of the long frame image) is longer than the exposure time of the short frame image brightness channel (i.e. of the short frame image), and the exposure time of the long frame image chroma channel is longer than that of the short frame image chroma channel.
In one possible implementation, in the automatic-exposure wide dynamic mode, the front-end device can expose the long frame image and the short frame image separately, setting different parameters such as reference brightness and maximum exposure time for the long frame image and the short frame image; based on these parameters, the long frame image and the short frame image can be acquired. Any acquisition method may be used, so long as the long frame image and the short frame image are obtained and the exposure time of the long frame image is longer than that of the short frame image.
Step 202, performing coarse filtering on a brightness channel of a long-frame image to obtain a long-frame coarse-filtered image; fine filtering is carried out on the brightness channel of the long-frame image to obtain a long-frame fine-filtered image; coarse filtering is carried out on the brightness channel of the short frame image to obtain a short frame coarse-filtered image; and carrying out fine filtering on the brightness channel of the short-frame image to obtain a short-frame fine-filtered image. Coarse filtering is performed on the brightness channel of the long-frame image to obtain a long-frame coarse-filtered image, which may include:
step S11, determining the gradient value of each pixel point in the long frame image brightness channel.
For example, the long frame image luminance channel may be coarsely filtered in the top-to-bottom direction, or the long frame image luminance channel may be coarsely filtered in the left-to-right direction, or the long frame image luminance channel may be coarsely filtered in the bottom-to-top direction, or the long frame image luminance channel may be coarsely filtered in the right-to-left direction. Taking the left to right as an example, the gradient value of the pixel point is determined by adopting the formula (1).
∇S(x, y) = abs(S(x-1, y) - S(x+1, y))    formula (1)
S(x, y) represents the luminance value of the pixel (x, y) in the long frame image luminance channel, S(x-1, y) represents the luminance value of the pixel (x-1, y), that is, the first pixel to the left of pixel (x, y), and S(x+1, y) represents the luminance value of the pixel (x+1, y), that is, the first pixel to the right of pixel (x, y). abs denotes the absolute value, and ∇S(x, y) represents the gradient value of the pixel (x, y).
Obviously, for each pixel (x, y) in the long frame image luminance channel, the gradient value of the pixel (x, y) can be obtained using formula (1); the gradient value represents the luminance difference between the two pixels to the left and right of pixel (x, y).
For the right-to-left direction, formula (1) is replaced with ∇S(x, y) = abs(S(x+1, y) - S(x-1, y)); for the top-to-bottom direction, with ∇S(x, y) = abs(S(x, y-1) - S(x, y+1)); and for the bottom-to-top direction, with ∇S(x, y) = abs(S(x, y+1) - S(x, y-1)).
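A minimal sketch of these gradient computations; note that because of the absolute value, the left-to-right and right-to-left variants (and likewise the two vertical variants) yield the same magnitudes. Edge replication at the image borders is an assumption, since the patent does not specify border handling:

```python
import numpy as np

def gradient(y, horizontal=True):
    """Gradient value per formula (1): |S(x-1,y) - S(x+1,y)| for the
    horizontal directions, |S(x,y-1) - S(x,y+1)| for the vertical ones."""
    p = np.pad(y.astype(np.float32), 1, mode="edge")
    if horizontal:
        return np.abs(p[1:-1, :-2] - p[1:-1, 2:])
    return np.abs(p[:-2, 1:-1] - p[2:, 1:-1])
```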
Step S12, for each pixel point in the long frame image brightness channel, inquiring a configured coarse filtering gradient mapping curve table through the gradient value of the pixel point to obtain a first weight value of the pixel point.
For example, a coarse-filter gradient mapping curve table may be preconfigured and denoted as α1(x). The coarse-filter gradient mapping curve table is used to represent the mapping relationship between gradient values and weight values, and the value range of the weight values is 0 to 1; that is, the weight value corresponding to each gradient value can be queried through the coarse-filter gradient mapping curve table.
Based on this, for each pixel (x, y) in the long frame image luminance channel, the coarse-filter gradient mapping curve table can be queried with the gradient value ∇S(x, y) of the pixel (x, y) to obtain the weight value α1(∇S(x, y)) of the pixel (x, y). The weight value α1(∇S(x, y)) of the pixel (x, y) is recorded as the first weight value of the pixel (x, y), and it may be greater than or equal to 0 and less than or equal to 1.
Step S13, for each pixel in the long frame image luminance channel, a second weight value of the surrounding pixels of the pixel is determined based on the first weight value of the pixel; for example, the sum of the first weight value and the second weight value may be 1, so that the second weight value can be determined from the first weight value. For example, for each pixel (x, y) in the long frame image luminance channel, if the first weight value of the pixel (x, y) is α1(∇S(x, y)), the second weight value of the surrounding pixels of the pixel (x, y) is 1 - α1(∇S(x, y)).
Step S14, for each pixel, determining a target luminance value of the pixel based on the luminance value of the pixel, the first weight value, the luminance values of surrounding pixels of the pixel, and the second weight value.
For each pixel (x, y) in the long frame image luminance channel, when the long frame image luminance channel is coarsely filtered in the left-to-right direction, the target luminance value of the pixel (x, y) may be determined using formula (2).
L1(x, y) = S(x, y)·α1(∇S(x, y)) + S(x-1, y)·(1 - α1(∇S(x, y)))    formula (2)
In formula (2), S(x, y) represents the luminance value of the pixel (x, y), α1(∇S(x, y)) represents the first weight value, S(x-1, y) represents the luminance value of the pixel (x-1, y), and 1 - α1(∇S(x, y)) represents the second weight value. L1(x, y) represents the target luminance value of the pixel (x, y).
Obviously, for each pixel (x, y) in the long frame image luminance channel, formula (2) is used to obtain the target luminance value of the pixel (x, y), namely the luminance value after coarse filtering is performed on the luminance value of the pixel (x, y).
For example, if coarse filtering is performed in the right-to-left direction, formula (2) is replaced with L1(x, y) = S(x, y)·α1(∇S(x, y)) + S(x+1, y)·(1 - α1(∇S(x, y))); for the top-to-bottom direction, with L1(x, y) = S(x, y)·α1(∇S(x, y)) + S(x, y-1)·(1 - α1(∇S(x, y))); and for the bottom-to-top direction, with L1(x, y) = S(x, y)·α1(∇S(x, y)) + S(x, y+1)·(1 - α1(∇S(x, y))).
For example, if coarse filtering is performed in at least two directions, a target luminance value is calculated for each direction, and the target luminance values in all directions are averaged to be the target luminance value of the pixel.
And S15, determining a long-frame rough filtering image based on the target brightness value of each pixel point of the long-frame image brightness channel, namely forming the long-frame rough filtering image by the target brightness values of all the pixel points of the long-frame image brightness channel.
For example, the target luminance values of the pixels of the long frame image luminance channel may fall outside the luminance range 0-255, so the target luminance value of each pixel of the long frame image luminance channel may be mapped back into the range 0-255 (the mapping process is not limited here), and the mapped target luminance values of all pixels then constitute the long frame coarse-filtered image.
Thus, the coarse filtering process of the long frame image brightness channel is completed, and the long frame coarse filtering image is obtained.
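Putting steps S11-S15 together, a minimal sketch of one left-to-right pass follows. Using the already-filtered left neighbour makes the pass recursive, as the text later describes; formula (2) as printed uses the unfiltered S(x-1, y), so this choice is an assumption. The callable weight_of_grad stands for a query into the coarse-filter gradient mapping curve table:

```python
import numpy as np

def coarse_filter_lr(y, weight_of_grad):
    """One left-to-right coarse-filtering pass (steps S11-S15):
    L1(x,y) = S(x,y)*a + L1(x-1,y)*(1-a), with a = alpha1(grad S(x,y))."""
    y = y.astype(np.float32)
    out = y.copy()
    for x in range(1, y.shape[1] - 1):
        g = np.abs(y[:, x - 1] - y[:, x + 1])  # formula (1) for this column
        a = weight_of_grad(g)                  # first weight value
        out[:, x] = y[:, x] * a + out[:, x - 1] * (1.0 - a)
    return out

# Per the text, passes in several directions can be run and their target
# luminance values averaged to form the final coarse-filtered image.
```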
Illustratively, the fine filtering is performed on the luminance channel of the long-frame image to obtain a long-frame fine filtered image, which includes:
Step S21, determining the gradient value of each pixel point in the brightness channel of the long frame image.
Step S22, inquiring a configured fine filtering gradient mapping curve table according to the gradient value of each pixel point in the long frame image brightness channel to obtain a third weight value of the pixel point.
Step S23, fourth weight values of surrounding pixel points of the pixel points are determined based on the third weight values of the pixel points.
Step S24, for each pixel point, determining a target brightness value of the pixel point based on the brightness value of the pixel point, the third weight value, the brightness values of surrounding pixel points of the pixel point and the fourth weight value.
Step S25, determining a long-frame fine filtered image based on the target brightness value of each pixel point of the long-frame image brightness channel, namely, forming the long-frame fine filtered image by the target brightness values of all the pixel points of the long-frame image brightness channel.
Illustratively, the processing of steps S21-S25 is similar to the processing of steps S11-S15, except that coarse filtering is replaced with fine filtering, and the following description is given of the differences:
The coarse-filter gradient mapping curve table is replaced with a fine-filter gradient mapping curve table; that is, a fine-filter gradient mapping curve table, denoted as α2(x), is preconfigured, and the fine-filter gradient mapping curve table is used to represent the mapping relationship between gradient values and weight values. After the fine-filter gradient mapping curve table is queried with the gradient value ∇S(x, y) of the pixel (x, y), a third weight value α2(∇S(x, y)) is obtained, and the fourth weight value of the surrounding pixels of the pixel (x, y) is 1 - α2(∇S(x, y)). When determining the target luminance value of the pixel (x, y), formula (2) is replaced with formula (3). In formula (3), L2(x, y) represents the target luminance value of the pixel (x, y).
L2(x, y) = S(x, y)·α2(∇S(x, y)) + S(x-1, y)·(1 - α2(∇S(x, y)))    formula (3)
In one possible implementation, the coarse filtering gradient mapping curve table and the fine filtering gradient mapping curve table are used to represent the mapping relationship between the gradient value and the weight value, and the difference between the two is that:
both the coarse and fine filter gradient map tables may include a plurality of gradient values, such as gradient value a1, gradient value a2, gradient value a3, and so on. For each gradient value, the gradient value corresponds to a weight value in the coarse filtering gradient mapping curve table, for example, the gradient value a1 corresponds to the weight value b11, the gradient value a2 corresponds to the weight value b12, and the gradient value a3 corresponds to the weight value b13. For each gradient value, the gradient value corresponds to a weight value in the fine filtering gradient mapping curve table, for example, the gradient value a1 corresponds to the weight value b21, the gradient value a2 corresponds to the weight value b22, and the gradient value a3 corresponds to the weight value b23. On this basis, for the same gradient value, the weight value corresponding to the gradient value in the fine filtering gradient mapping curve table may be greater than the weight value corresponding to the gradient value in the coarse filtering gradient mapping curve table, for example, the weight value b21 may be greater than the weight value b11, the weight value b22 may be greater than the weight value b12, and the weight value b23 may be greater than the weight value b13.
As can be seen from formulas (2) and (3), for the same gradient value, the weight value α2(∇S(x, y)) corresponding to that gradient value in the fine-filter gradient mapping curve table is large and 1 - α2(∇S(x, y)) is small; that is, the primary source of the target luminance value is the luminance value of the pixel (x, y) itself, and the secondary source is the luminance value of the surrounding pixel (x-1, y). In the coarse-filter gradient mapping curve table, the corresponding weight value α1(∇S(x, y)) is small and 1 - α1(∇S(x, y)) is large; that is, the primary source of the target luminance value is the luminance value of the surrounding pixel (x-1, y), and the secondary source is the luminance value of the pixel (x, y) itself.
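For illustration, the two curve tables could be built as follows; their exact shapes are not disclosed, so these monotone curves are assumptions that merely satisfy the stated ordering (fine-filter weight above coarse-filter weight for every gradient value). A partial application of lookup_weight with one of these tables can serve as the weight_of_grad callable in the sketch above:

```python
import numpy as np

grads = np.arange(256, dtype=np.float32)
coarse_table = grads / (grads + 64.0)  # alpha1: small weights, stronger smoothing
fine_table = grads / (grads + 16.0)    # alpha2: large weights, edges preserved

assert np.all(fine_table >= coarse_table)  # required ordering per gradient value

def lookup_weight(g, table):
    """Query a gradient-mapping curve table with an array of gradient values."""
    return table[np.clip(g.astype(np.int32), 0, 255)]
```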
Illustratively, performing coarse filtering on the brightness channel of the short-frame image to obtain a short-frame coarse-filtered image, including:
step S31, determining the gradient value of each pixel point in the brightness channel of the short frame image.
Step S32, for each pixel point in the short frame image brightness channel, inquiring the configured coarse filtering gradient mapping curve table through the gradient value of the pixel point to obtain a first weight value of the pixel point.
Step S33, determining second weight values of surrounding pixel points of the pixel points based on the first weight values of the pixel points.
Step S34, for each pixel, determining a target luminance value of the pixel based on the luminance value of the pixel, the first weight value, the luminance values of surrounding pixels of the pixel, and the second weight value.
Step S35, determining a short-frame rough filtering image based on the target brightness value of each pixel point of the short-frame image brightness channel, namely forming the short-frame rough filtering image by the target brightness values of all the pixel points of the short-frame image brightness channel.
Step S31-step S35 are similar to step S11-step S15 except that the long frame image luminance channel is replaced with the short frame image luminance channel, and the long frame coarse-filtered image is replaced with the short frame coarse-filtered image.
Illustratively, the fine filtering is performed on the brightness channel of the short-frame image to obtain a short-frame fine-filtered image, which includes:
step S41, determining the gradient value of each pixel point in the brightness channel of the short frame image.
Step S42, for each pixel point in the short frame image brightness channel, inquiring the configured fine filtering gradient mapping curve table through the gradient value of the pixel point to obtain a third weight value of the pixel point.
Step S43, fourth weight values of surrounding pixel points of the pixel points are determined based on the third weight values of the pixel points.
Step S44, for each pixel point, determining a target brightness value of the pixel point based on the brightness value of the pixel point, the third weight value, the brightness values of surrounding pixel points of the pixel point and the fourth weight value.
Step S45, determining a short-frame fine filter image based on the target brightness value of each pixel point of the short-frame image brightness channel, namely forming the short-frame fine filter image by the target brightness values of all the pixel points of the short-frame image brightness channel.
Step S41 to step S45 are similar to step S21 to step S25 except that the long frame image luminance channel is replaced with the short frame image luminance channel, and the long frame fine filter image is replaced with the short frame fine filter image.
In summary, coarse filtering and fine filtering can be performed on the long frame image luminance channel to obtain the long frame coarse-filtered image and the long frame fine-filtered image, and coarse filtering and fine filtering can be performed on the short frame image luminance channel to obtain the short frame coarse-filtered image and the short frame fine-filtered image. This realizes illumination-smoothing filtering: the illumination at each place in the scene is estimated from the known scene image, the long frame image luminance channel and the short frame image luminance channel can be smoothed to obtain smooth images, and the fusion of the images is finally completed.
The light reflected by the surface of an object cannot exceed the light that the light source shines on the object; that is, L is greater than or equal to S, where L is the illuminance and S is the scene image, and the relationship between L and S satisfies S = R × L, R being the reflectance. At positions where the input image has a very high gradient, the estimated illumination is discontinuous, i.e. the discontinuities in L are similar to the discontinuities in S. In summary, a recursive method may be used to perform one-dimensional recursive calculations in four different directions to ensure visual information mixing over the whole image; see step S11 to step S15.
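In LaTeX form, the bound just stated follows from the reflectance model in one line (assuming the reflectance is at most 1, as the text implies):

```latex
S(x,y) = R(x,y)\,L(x,y), \qquad 0 < R(x,y) \le 1
\quad\Longrightarrow\quad L(x,y) \ge S(x,y)
```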
After illumination smooth filtering, a long-frame rough filtering image and a long-frame fine filtering image can be obtained, and a short-frame rough filtering image and a short-frame fine filtering image are obtained.
Illustratively, the long frame coarse-filtered image is smoother than the long frame fine-filtered image, and the difference between the long frame fine-filtered image and the long frame image luminance channel is smaller than the difference between the long frame coarse-filtered image and the long frame image luminance channel. In addition, the short-frame rough filtered image is smoother than the short-frame fine filtered image, and the difference between the short-frame fine filtered image and the short-frame image brightness channel is smaller than the difference between the short-frame rough filtered image and the short-frame image brightness channel.
In step 203, a long frame weight image and a short frame weight image are obtained, where the long frame weight image may include a weight value of each pixel point in a long frame to-be-fused image (for example, a first long frame to-be-fused image and a second long frame to-be-fused image, and related content refers to the subsequent embodiment), and the short frame weight image may include a weight value of each pixel point in a short frame to-be-fused image (for example, a first short frame to-be-fused image and a second short frame to-be-fused image).
In one possible implementation manner, the acquiring manner of the long frame weight image may include:
step S51, inquiring the configured long frame membership function according to the brightness value of each pixel point in the long frame image brightness channel to obtain the long frame weight value corresponding to the pixel point.
For example, a long frame membership function may be preconfigured and denoted as M1. The long frame membership function M1 is used to represent the mapping relationship between luminance values and long frame weight values (the weight value given by the long frame membership function M1 is recorded as the long frame weight value); that is, for each luminance value, the long frame weight value corresponding to that luminance value can be queried through the long frame membership function. Based on this, for each pixel (x, y) in the long frame image luminance channel, the long frame membership function M1 can be queried with the luminance value of the pixel (x, y) to obtain the long frame weight value of the pixel (x, y). For example, this can be expressed by the following formula: W1 = M1(S1), where S1 represents the luminance value of the pixel (x, y), W1 represents the long frame weight value of the pixel (x, y), M1 represents the long frame membership function, and M1(S1) denotes querying the long frame membership function M1 with the luminance value of the pixel (x, y).
Illustratively, for a long frame membership function, the long frame membership function is used to represent the relationship: if the brightness value of the pixel point is larger, the long frame weight value of the pixel point is smaller. If the brightness value of the pixel point is smaller, the long frame weight value of the pixel point is larger. For example, the long frame membership function indicates a mapping relationship between the luminance value c11 and the long frame weight d11, and a mapping relationship between the luminance value c12 and the long frame weight d12, and based on this, if the luminance value c11 is greater than the luminance value c12, the long frame weight d11 is smaller than the long frame weight d12.
Step S52, a long frame weight image is obtained based on the long frame weight value corresponding to each pixel point of the long frame image brightness channel, namely, long frame weight values corresponding to all the pixel points are combined into the long frame weight image.
In one possible implementation manner, the acquiring manner of the short frame weight image may include:
step S61, inquiring the configured short frame membership function according to the brightness value of each pixel point in the short frame image brightness channel to obtain the short frame weight value corresponding to the pixel point.
For example, a short frame membership function may be preconfigured and denoted as M2. The short frame membership function M2 is used to represent the mapping relationship between luminance values and short frame weight values (the weight value given by the short frame membership function M2 is recorded as the short frame weight value); that is, for each luminance value, the short frame weight value corresponding to that luminance value can be queried through the short frame membership function. Based on this, for each pixel (x, y) in the short frame image luminance channel, the short frame membership function M2 can be queried with the luminance value of the pixel (x, y) to obtain the short frame weight value of the pixel (x, y). For example, this can be expressed by the following formula: W2 = M2(S2), where S2 represents the luminance value of the pixel (x, y), W2 represents the short frame weight value of the pixel (x, y), M2 represents the short frame membership function, and M2(S2) denotes querying the short frame membership function M2 with the luminance value of the pixel (x, y).
Illustratively, for a short frame membership function, the short frame membership function is used to represent the relationship: if the brightness value of the pixel point is larger, the short frame weight value of the pixel point is larger. If the brightness value of the pixel point is smaller, the short frame weight value of the pixel point is smaller. For example, the short frame membership function indicates a mapping relationship between the luminance value c21 and the short frame weight d21, and a mapping relationship between the luminance value c22 and the short frame weight d22, and based on this, if the luminance value c21 is greater than the luminance value c22, the short frame weight d21 is greater than the short frame weight d22.
Step S62, acquiring a short frame weight image based on the short frame weight value corresponding to each pixel point of the short frame image brightness channel, namely, forming the short frame weight image by the short frame weight values corresponding to all the pixel points.
When acquiring the long frame weight image and the short frame weight image, larger weight values need to be assigned to pixels in well-exposed image regions and lower weight values to pixels in underexposed or overexposed image regions. Because the long frame image (the long frame image brightness channel and the long frame image chroma channel) better captures the low-luminance regions of the scene, pixels whose luminance values are below the luminance midpoint in the long frame image are still rendered better than the corresponding low-luminance regions of the short frame image (the short frame image brightness channel and the short frame image chroma channel), so even pixels below the luminance midpoint can be assigned larger long frame weight values. Because the car-light and license-plate areas of the long frame image are usually overexposed regions, i.e. the luminance values of these overexposed regions are large, lower long frame weight values need to be allocated to overexposed regions such as car lights and license plates in the long frame image; these lower long frame weight values reduce the brightness of the overexposed regions such as the car lights and license plate of the long frame image.
In summary, the long frame membership function corresponding to the long frame image needs to realize the following relationship: if the luminance value of a pixel is larger (an overexposed region), the long frame weight value of the pixel is smaller; if the luminance value of a pixel is smaller (a darker region), the long frame weight value of the pixel is larger.
Because the short frame image can better capture the high-luminance regions of the scene, i.e. regions such as the car lights and the license plate in the short frame image are usually properly exposed while the low-luminance regions of the short frame image are underexposed, smaller short frame weight values can be allocated to the low-luminance regions of the short frame image and higher short frame weight values to regions such as the car lights and the license plate. In summary, the short frame membership function corresponding to the short frame image needs to realize the following relationship: if the luminance value of a pixel is larger (a properly exposed region), the short frame weight value of the pixel is larger; if the luminance value of a pixel is smaller (an underexposed region), the short frame weight value of the pixel is smaller.
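As a concrete illustration, a minimal sketch of the two membership functions follows; the patent specifies only the monotonic relationships (long frame weight decreasing in luminance, short frame weight increasing), so the linear form here is an assumption:

```python
import numpy as np

def long_membership(y):
    """M1: the larger the luminance value, the smaller the long frame weight."""
    return 1.0 - y.astype(np.float32) / 255.0

def short_membership(y):
    """M2: the larger the luminance value, the larger the short frame weight."""
    return y.astype(np.float32) / 255.0

# Usage (per steps S51-S52 and S61-S62): query per pixel to build weight images.
# w_long_img = long_membership(long_y)     # W1 = M1(S1)
# w_short_img = short_membership(short_y)  # W2 = M2(S2)
```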
Step 204, perform brightness adjustment on the long frame image brightness channel based on the long frame fine-filtered image to obtain a long frame correction image (i.e., the brightness-adjusted long frame image brightness channel), and perform brightness adjustment on the short frame image brightness channel based on the short frame fine-filtered image to obtain a short frame correction image (i.e., the brightness-adjusted short frame image brightness channel).
For example, to avoid the long frame image brightness channel being too bright or the short frame image brightness channel being too dark, brightness adjustment may be performed on both channels; the adjustment may include, but is not limited to, Gamma correction. The brightness adjustment process for the long frame image brightness channel may include: if the average of the brightness values of all pixel points in the long frame fine-filtered image is greater than a first threshold, perform Gamma correction on the brightness values of the long frame image brightness channel with a first gamma coefficient to obtain the long frame correction image; if the average is not greater than the first threshold, keep the brightness values of the long frame image brightness channel unchanged to obtain the long frame correction image. The first threshold may be configured empirically and is not limited here; for example, it may be 128. The first gamma coefficient may likewise be configured empirically; for example, it may lie between 0 and 1.
For example, the brightness adjustment of the long frame image brightness channel can be performed by formula (4):

S'_long(x, y) = Gamma1(S_long(x, y)), if avg(L_long) > 128; S'_long(x, y) = S_long(x, y), otherwise.   Formula (4)

Here L_long(x, y) denotes the brightness value of pixel point (x, y) in the long frame fine-filtered image, avg(L_long) the average of the brightness values of all pixel points in the long frame fine-filtered image, 128 the first threshold, S_long(x, y) the brightness value of pixel point (x, y) in the long frame image brightness channel, Gamma1 the first gamma coefficient, and S'_long(x, y) the brightness value of pixel point (x, y) in the long frame correction image.

As formula (4) shows, if avg(L_long) is greater than 128, then for each pixel point (x, y) of the long frame image brightness channel, its brightness value S_long(x, y) is corrected with Gamma1 to obtain the corrected brightness value Gamma1(S_long(x, y)), which is used as the brightness value of pixel point (x, y) in the long frame correction image. If avg(L_long) is not greater than 128, the brightness value S_long(x, y) of each pixel point (x, y) is used directly as the brightness value of that pixel point in the long frame correction image. After this processing has been performed for every pixel point (x, y) of the long frame image brightness channel, the brightness values of all pixel points together form the long frame correction image.
Illustratively, the brightness adjustment process for the short frame image brightness channel may include, but is not limited to: if the average of the brightness values of all pixel points in the short frame fine-filtered image is less than a second threshold, perform Gamma correction on the brightness values of the short frame image brightness channel with a second gamma coefficient to obtain the short frame correction image; if the average is not less than the second threshold, keep the brightness values of the short frame image brightness channel unchanged to obtain the short frame correction image. The second threshold may be configured empirically and is not limited here; for example, it may be 128. The second gamma coefficient may likewise be configured empirically; for example, it may be greater than 1.
For example, the brightness adjustment of the short frame image brightness channel can be performed by formula (5):

S'_short(x, y) = Gamma2(S_short(x, y)), if avg(L_short) < 128; S'_short(x, y) = S_short(x, y), otherwise.   Formula (5)

Here L_short(x, y) denotes the brightness value of pixel point (x, y) in the short frame fine-filtered image, avg(L_short) the average of the brightness values of all pixel points in the short frame fine-filtered image, 128 the second threshold, S_short(x, y) the brightness value of pixel point (x, y) in the short frame image brightness channel, Gamma2 the second gamma coefficient, and S'_short(x, y) the brightness value of pixel point (x, y) in the short frame correction image.

As formula (5) shows, if avg(L_short) is less than 128, then for each pixel point (x, y) of the short frame image brightness channel, its brightness value S_short(x, y) is corrected with Gamma2 to obtain the corrected brightness value Gamma2(S_short(x, y)), which is used as the brightness value of pixel point (x, y) in the short frame correction image. If avg(L_short) is not less than 128, the brightness value S_short(x, y) of each pixel point (x, y) is used directly as the brightness value of that pixel point in the short frame correction image. After this processing has been performed for every pixel point (x, y) of the short frame image brightness channel, the brightness values of all pixel points together form the short frame correction image. A code sketch of the formula (4)/(5) correction is given below.
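The following is a minimal Python sketch of the conditional correction of formulas (4)/(5). The power-law form 255·(y/255)^gamma is an assumed realization of the Gamma() operator, which the text does not spell out; the threshold and coefficient ranges follow the examples above.

```python
import numpy as np

def gamma_brightness_correct(y, y_fine, gamma, threshold=128.0, long_frame=True):
    """Apply the conditional gamma correction of formulas (4)/(5).

    y          : luminance channel of the long or short frame image (uint8).
    y_fine     : the matching fine-filtered image; only its mean is used.
    gamma      : first gamma coefficient (<1) for the long frame, second
                 gamma coefficient (>1) for the short frame.
    long_frame : selects the trigger direction (mean > threshold for long
                 frames, mean < threshold for short frames).
    """
    mean = float(y_fine.mean())
    triggered = mean > threshold if long_frame else mean < threshold
    if not triggered:
        return y.copy()  # brightness values kept unchanged
    # Assumed power-law realization of the Gamma() operator.
    corrected = 255.0 * (y.astype(np.float32) / 255.0) ** gamma
    return np.clip(corrected, 0, 255).astype(np.uint8)
```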
Step 205, perform contrast adjustment on the long frame correction image based on the long frame coarse-filtered image to obtain a first long frame image to be fused (i.e., a contrast-adjusted long frame correction image); perform contrast adjustment on the long frame correction image based on the long frame fine-filtered image to obtain a second long frame image to be fused; perform contrast adjustment on the short frame correction image based on the short frame coarse-filtered image to obtain a first short frame image to be fused; and perform contrast adjustment on the short frame correction image based on the short frame fine-filtered image to obtain a second short frame image to be fused.
In the above embodiment, a long frame coarse-filtered image, a long frame fine-filtered image, a short frame coarse-filtered image, a short frame fine-filtered image, a long frame correction image, and a short frame correction image are obtained; based on these images, contrast adjustment can be performed, for example, by formula (6). The formula body is not reproduced in the source text; from the variables defined below, it takes a base-plus-scaled-detail form such as:

C(x, y) = clip(con × (S(x, y) − L(x, y)) + L(x, y), 0, 255)   Formula (6)

In formula (6), con represents a preset contrast value, which can be configured empirically and is not limited here, and clip(x, a, b) represents clamping the value of x to the range [a, b].
If the contrast adjustment is performed on the long frame correction image based on the long frame coarse-filtered image, S(x, y) denotes the brightness value of pixel point (x, y) in the long frame correction image, L(x, y) the brightness value of pixel point (x, y) in the long frame coarse-filtered image, and C(x, y) the brightness value of pixel point (x, y) in the first long frame image to be fused. After this processing has been performed for every pixel point (x, y), the brightness values of all pixel points in the first long frame image to be fused are obtained, and together they form the first long frame image to be fused.

If the contrast adjustment is performed on the long frame correction image based on the long frame fine-filtered image, S(x, y) denotes the brightness value of pixel point (x, y) in the long frame correction image, L(x, y) the brightness value of pixel point (x, y) in the long frame fine-filtered image, and C(x, y) the brightness value of pixel point (x, y) in the second long frame image to be fused.

If the contrast adjustment is performed on the short frame correction image based on the short frame coarse-filtered image, S(x, y) denotes the brightness value of pixel point (x, y) in the short frame correction image, L(x, y) the brightness value of pixel point (x, y) in the short frame coarse-filtered image, and C(x, y) the brightness value of pixel point (x, y) in the first short frame image to be fused.

If the contrast adjustment is performed on the short frame correction image based on the short frame fine-filtered image, S(x, y) denotes the brightness value of pixel point (x, y) in the short frame correction image, L(x, y) the brightness value of pixel point (x, y) in the short frame fine-filtered image, and C(x, y) the brightness value of pixel point (x, y) in the second short frame image to be fused.
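As a rough illustration, the following Python sketch implements the base-plus-scaled-detail reading of formula (6) given above. Since the original formula body is not reproduced in the source text, this form, and the default value of con, should be treated as assumptions.

```python
import numpy as np

def contrast_adjust(s, l, con=1.5):
    """Contrast adjustment of a correction image s against a filtered image l,
    following the assumed formula (6): the filtered image supplies the base
    layer, s - l the detail, and con scales the detail."""
    s = s.astype(np.float32)
    l = l.astype(np.float32)
    c = l + con * (s - l)           # amplify detail around the base layer
    return np.clip(c, 0.0, 255.0)   # clip(x, 0, 255)
```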
Step 206, perform weighted fusion on the first long frame image to be fused and the first short frame image to be fused to obtain a first fused image, and perform weighted fusion on the second long frame image to be fused and the second short frame image to be fused to obtain a second fused image. A luminance fusion channel is then generated based on the first fused image and the second fused image.
For example, the first fused image may be obtained by weighted fusion based on the first long frame image to be fused, the long frame weight image (which provides the weight value of each pixel point in the first long frame image to be fused), the first short frame image to be fused, and the short frame weight image (which provides the weight value of each pixel point in the first short frame image to be fused). Similarly, weighted fusion based on the second long frame image to be fused, the long frame weight image (the weight value of each pixel point in the second long frame image to be fused), the second short frame image to be fused, and the short frame weight image (the weight value of each pixel point in the second short frame image to be fused) yields the second fused image.
For example, weighted fusion of the images to be fused can be achieved by formula (7):

S(x, y) = (C1(x, y) × W1(x, y) + C2(x, y) × W2(x, y)) / (W1(x, y) + W2(x, y))   Formula (7)

If the first long frame image to be fused and the first short frame image to be fused are weighted-fused, C1(x, y) denotes the brightness value of pixel point (x, y) in the first long frame image to be fused, W1(x, y) the weight value of pixel point (x, y) in the long frame weight image, C2(x, y) the brightness value of pixel point (x, y) in the first short frame image to be fused, and W2(x, y) the weight value of pixel point (x, y) in the short frame weight image. S(x, y) denotes the brightness value of pixel point (x, y) in the first fused image; after this processing has been performed for every pixel point (x, y), the brightness values of all pixel points are obtained and together form the first fused image.

If the second long frame image to be fused and the second short frame image to be fused are weighted-fused, C1(x, y) denotes the brightness value of pixel point (x, y) in the second long frame image to be fused, W1(x, y) the weight value of pixel point (x, y) in the long frame weight image, C2(x, y) the brightness value of pixel point (x, y) in the second short frame image to be fused, and W2(x, y) the weight value of pixel point (x, y) in the short frame weight image. S(x, y) denotes the brightness value of pixel point (x, y) in the second fused image; after this processing has been performed for every pixel point (x, y), the brightness values of all pixel points are obtained and together form the second fused image.
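A minimal Python sketch of formula (7) follows; the eps guard against a zero weight sum is an added safeguard, not part of the patent formula.

```python
import numpy as np

def weighted_fuse(c1, w1, c2, w2, eps=1e-6):
    """Per-pixel normalized weighted average of formula (7)."""
    num = c1.astype(np.float32) * w1 + c2.astype(np.float32) * w2
    return num / (w1 + w2 + eps)
```

For example, the first fused image would be `weighted_fuse(first_long, w_long, first_short, w_short)`, and likewise for the second fused image (variable names assumed).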
For example, when generating the luminance fusion channel based on the first fused image and the second fused image, the two images may be fused into the luminance fusion channel using formula (8).
S_fu(x, y) = S_co(x, y)/255 × (S_co(x, y) + S_fi(x, y))/2 + S_fi(x, y)/255 × (255 − (S_co(x, y) + S_fi(x, y))/2)   Formula (8)

In formula (8), S_co(x, y) denotes the brightness value of pixel point (x, y) in the first fused image, S_fi(x, y) the brightness value of pixel point (x, y) in the second fused image, and S_fu(x, y) the brightness value of pixel point (x, y) in the luminance fusion channel. After this processing has been performed for every pixel point (x, y), the brightness values of all pixel points are obtained and together form the luminance fusion channel.
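A direct Python sketch of formula (8), blending the first (coarse-branch) and second (fine-branch) fused images by their per-pixel mean luminance:

```python
import numpy as np

def fuse_luminance(s_co, s_fi):
    """Formula (8): luminance-adaptive blend of the two fused images."""
    s_co = s_co.astype(np.float32)
    s_fi = s_fi.astype(np.float32)
    m = (s_co + s_fi) / 2.0  # mean of the two fusion results per pixel
    s_fu = s_co / 255.0 * m + s_fi / 255.0 * (255.0 - m)
    return np.clip(s_fu, 0.0, 255.0)
```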
Step 207, perform weighted fusion based on the long frame image chromaticity channel (which may be represented as a long frame chroma U channel and a long frame chroma V channel), the long frame weight image (which provides the weight value of each pixel point in the long frame image chromaticity channel), the short frame image chromaticity channel (which may be represented as a short frame chroma U channel and a short frame chroma V channel), and the short frame weight image (which provides the weight value of each pixel point in the short frame image chromaticity channel), to obtain a chromaticity fusion channel.
For example, the weighted fusion of the long frame image chromaticity channels and the short frame image chromaticity channels may be achieved by formula (7) above. If the long frame chroma U channel and the short frame chroma U channel are weighted-fused, C1(x, y) denotes the chroma value of pixel point (x, y) in the long frame chroma U channel, C2(x, y) the chroma value of pixel point (x, y) in the short frame chroma U channel, and S(x, y) the chroma value of pixel point (x, y) in the chroma U fusion channel.

If the long frame chroma V channel and the short frame chroma V channel are weighted-fused, C1(x, y) denotes the chroma value of pixel point (x, y) in the long frame chroma V channel, C2(x, y) the chroma value of pixel point (x, y) in the short frame chroma V channel, and S(x, y) the chroma value of pixel point (x, y) in the chroma V fusion channel.
In summary, after the above processing, the chroma U fusion channel and the chroma V fusion channel are obtained; together they constitute the chromaticity fusion channel. A usage sketch is given below.
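For example, the same weighted_fuse helper from the formula (7) sketch can be reused unchanged for the chroma channels; the variable names here (u_long, w_long, and so on) are assumed:

```python
# Reusing weighted_fuse from the formula (7) sketch above.
u_fused = weighted_fuse(u_long, w_long, u_short, w_short)  # chroma U fusion channel
v_fused = weighted_fuse(v_long, w_long, v_short, w_short)  # chroma V fusion channel
```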
Step 208, generate a target image based on the luminance fusion channel and the chromaticity fusion channel; that is, the luminance fusion channel, the chroma U fusion channel, and the chroma V fusion channel may be combined to obtain the target image, which comprises these three channels.
Thus, the target image can be generated based on the acquired long frame image and short frame image.
In one possible implementation, because the exposure durations of the long frame image and the short frame image differ, a displacement difference may exist between the license plate regions of the two images, and the license plate region of the fused target image may exhibit ghosting, which harms the visual effect. To address this, in the embodiment of the application the license plate region of the target image can be processed: the license plate region of the short frame image is processed and the processed region is superimposed onto the target image, avoiding the ghosting blur caused by the displacement difference.
For example, a license plate migration sub-image is generated based on the license plate region sub-image of the short frame image, and the license plate region sub-image of the target image is replaced by the license plate migration sub-image. And generating a surrounding migration sub-image based on the first surrounding area sub-image of the license plate area sub-image in the target image and the second surrounding area sub-image of the license plate area sub-image in the short frame image, and replacing the first surrounding area sub-image of the target image by the surrounding migration sub-image.
The following describes the processing procedure of the license plate region in combination with the following steps:
Step S71, determining a first average value of all luminance values in a short frame image luminance channel of the short frame image (average value of luminance values of all pixel points in the short frame image luminance channel), and determining a second average value of all luminance values in a luminance channel of the target image (average value of luminance values of all pixel points in the luminance channel).
Step S72, determining the ratio of the second average value to the first average value.
Step S73, generate a license plate migration sub-image based on the license plate region sub-image in the short frame image and the above ratio.
For example, a license plate region sub-image (i.e., a sub-image including a license plate region, the size of the license plate region sub-image in the short frame image being the same as the size of the license plate region sub-image in the target image) may be obtained from the short frame image, and then a license plate migration sub-image may be generated based on the license plate region sub-image and the ratio.
For example, the license plate region sub-image (in YUV space) is converted into a license plate region sub-image in RGB color space, which comprises an R channel sub-image, a G channel sub-image, and a B channel sub-image. The R channel sub-image is multiplied by the ratio to obtain an R channel target sub-image, the G channel sub-image is multiplied by the ratio to obtain a G channel target sub-image, and the B channel sub-image is multiplied by the ratio to obtain a B channel target sub-image. The three target sub-images are then combined to obtain the license plate migration sub-image. The brightness of the license plate migration sub-image is close to that of the target image, while its license plate portion is clearer, as sketched below.
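A minimal sketch of steps S71 through S73, using OpenCV for the color-space conversion; the patent does not name a library, so cv2 and the exact conversion flag are assumptions.

```python
import numpy as np
import cv2

def plate_migration_subimage(plate_yuv, y_short_mean, y_target_mean):
    """Scale the short frame plate sub-image in RGB space by the ratio of the
    target image's mean luminance to the short frame's mean luminance, so its
    brightness approaches the target while keeping the sharper plate."""
    ratio = y_target_mean / max(y_short_mean, 1e-6)  # second average / first average
    rgb = cv2.cvtColor(plate_yuv, cv2.COLOR_YUV2RGB).astype(np.float32)
    rgb *= ratio                        # multiply R, G and B channels alike
    return np.clip(rgb, 0, 255).astype(np.uint8)
```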
And S74, replacing the license plate region sub-image in the target image by the license plate migration sub-image.
Step S75, generating a surrounding migration sub-image based on a first surrounding area sub-image of the license plate area sub-image in the target image and a second surrounding area sub-image of the license plate area sub-image in the short frame image.
For example, a second surrounding area sub-image of the license plate area sub-image may be obtained from the short frame image, and a first surrounding area sub-image of the license plate area sub-image may be obtained from the target image, where a size of the second surrounding area sub-image may be the same as a size of the first surrounding area sub-image.
For example, the second surrounding area sub-image includes K1 pixels on the upper side of the license plate area sub-image in the short frame image, K2 pixels on the lower side of the license plate area sub-image in the short frame image, K3 pixels on the left side of the license plate area sub-image in the short frame image, and K4 pixels on the right side of the license plate area sub-image in the short frame image.
The first peripheral region sub-image comprises K1 pixel points on the upper side of the license plate region sub-image in the target image, K2 pixel points on the lower side of the license plate region sub-image in the target image, K3 pixel points on the left side of the license plate region sub-image in the target image and K4 pixel points on the right side of the license plate region sub-image in the target image.
And then, determining a first sub-weight map of the first peripheral region sub-image, determining a second sub-weight map of the second peripheral region sub-image, and carrying out weighting operation based on the first peripheral region sub-image, the first sub-weight map, the second peripheral region sub-image and the second sub-weight map to obtain a peripheral migration sub-image.
For example, the surrounding migration sub-image can be obtained by formula (7). In formula (7), C1(x, y) denotes the pixel value (i.e., the value consisting of the brightness value and the chroma values) of pixel point (x, y) in the first surrounding area sub-image, W1(x, y) the weight value of pixel point (x, y) in the first sub-weight map, C2(x, y) the pixel value of pixel point (x, y) in the second surrounding area sub-image, W2(x, y) the weight value of pixel point (x, y) in the second sub-weight map, and S(x, y) the pixel value of pixel point (x, y) in the surrounding migration sub-image.

Obviously, after the above processing has been performed for every pixel point (x, y) of the surrounding migration sub-image, the pixel values of all its pixel points are obtained, and these pixel values form the surrounding migration sub-image.

In one possible embodiment, each pixel point (x, y) corresponds to a weight value W1(x, y) in the first sub-weight map and a weight value W2(x, y) in the second sub-weight map. If the pixel point (x, y) is close to the license plate region sub-image, e.g., its distance to the edge of the license plate region sub-image is smaller than a distance threshold, then W2(x, y) is greater than W1(x, y), i.e., the second surrounding area sub-image from the short frame image carries the larger weight. If the pixel point (x, y) is far from the license plate region sub-image, e.g., its distance to the edge of the license plate region sub-image is not smaller than the distance threshold, then W1(x, y) is greater than W2(x, y), i.e., the first surrounding area sub-image from the target image carries the larger weight. A sketch of such distance-based sub-weight maps follows.
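The following sketch builds the two sub-weight maps from a precomputed distance map. The linear ramp is an assumed smooth choice; the text only fixes the inequality on either side of the distance threshold.

```python
import numpy as np

def surround_sub_weights(dist_to_plate, dist_threshold):
    """Distance-based sub-weight maps for the surrounding-area blend.

    dist_to_plate: per-pixel distance to the nearest edge of the license
    plate region sub-image (computed by the caller). The ramp is scaled so
    the weights cross exactly at the threshold: near the plate w2 > w1
    (short frame dominates), far from it w1 > w2 (target image dominates).
    """
    t = np.clip(dist_to_plate / (2.0 * float(dist_threshold)), 0.0, 1.0)
    w1 = t          # weight of the first (target image) surrounding sub-image
    w2 = 1.0 - t    # weight of the second (short frame) surrounding sub-image
    return w1, w2
```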
Step S76, replacing the first peripheral area sub-image in the target image by the peripheral migration sub-image.
Thus, the adjustment of the target image is completed, and an image with no ghost image blurring problem in the license plate area is obtained.
Based on the same application concept as the above method, another image processing method is provided in the embodiment of the present application. Referring to fig. 3, which shows a flowchart of this image processing method, the method may include:
Step 301, acquiring a short frame image with a first exposure time length and a long frame image with a second exposure time length for the same target scene, wherein the second exposure time length can be longer than the first exposure time length.
Step 302, based on a preset first weight mapping table: adjust the brightness of the long frame image to generate a first long frame image, and adjust the brightness of the short frame image to generate a first short frame image.
Illustratively, the first weight mapping table may be the fine filtering gradient mapping curve table, the first long frame image may be the long frame correction image, and the first short frame image may be the short frame correction image. For example, the brightness of the long frame image brightness channel may be adjusted to generate the first long frame image, and the brightness of the short frame image brightness channel may be adjusted to generate the first short frame image.
Step 303, based on a preset second weight mapping table: adjust the contrast of the first long frame image to generate a second long frame image, and adjust the contrast of the first short frame image to generate a second short frame image. The second weight mapping table may be the coarse filtering gradient mapping curve table, the second long frame image may be the first long frame image to be fused, and the second short frame image may be the first short frame image to be fused.
The first weight mapping table and the second weight mapping table satisfy the following relationship: the weight corresponding to a pixel point in the preset first weight mapping table is larger than the weight corresponding to the same pixel point in the preset second weight mapping table.
Step 304, generate a first new image based on the second long frame image and the second short frame image.
In one possible implementation, the first long frame image is obtained as follows: weight the long frame image based on the first weight mapping table to generate a first weighted long frame image (i.e., the long frame fine-filtered image); then adjust the brightness of the long frame image based on the first weighted long frame image to generate the first long frame image.

Illustratively, the first short frame image is obtained as follows: weight the short frame image based on the first weight mapping table to generate a first weighted short frame image (i.e., the short frame fine-filtered image); then adjust the brightness of the short frame image based on the first weighted short frame image to generate the first short frame image.

Illustratively, the second long frame image is obtained as follows: weight the long frame image based on the second weight mapping table to generate a second weighted long frame image (i.e., the long frame coarse-filtered image); then adjust the contrast of the first long frame image based on the second weighted long frame image to generate the second long frame image.

Illustratively, the second short frame image is obtained as follows: weight the short frame image based on the second weight mapping table to generate a second weighted short frame image (i.e., the short frame coarse-filtered image); then adjust the contrast of the first short frame image based on the second weighted short frame image to generate the second short frame image.
In one possible implementation, the first weight mapping table and the second weight mapping table are defined as: a pixel point is associated with a first weight and a second weight in a preset first weight mapping table (the sum of the first weight and the second weight may be a fixed value, such as 1), and the pixel point is associated with a third weight and a fourth weight in a preset second weight mapping table (the sum of the third weight and the fourth weight may be a fixed value, such as 1).
Based on this, the first weighted long frame image is defined as: and acquiring a first weight and a second weight corresponding to one pixel point in the long frame image based on the first weight mapping table. And weighting and generating a first weighted long frame image based on the pixel point, other pixel points adjacent to the pixel point, the first weight and the second weight.
The first weighted short frame image is defined as: and acquiring a first weight and a second weight corresponding to one pixel point in the short frame image based on the first weight mapping table. And weighting and generating a first weighted short frame image based on the pixel point, other pixel points adjacent to the pixel point, the first weight and the second weight.
The second weighted long frame image is defined as: acquiring a third weight and a fourth weight corresponding to a pixel point in the long frame image based on the second weight mapping table; and weighting and generating a second weighted long frame image based on the pixel point, other pixel points adjacent to the pixel point, the third weight and the fourth weight.
The second weighted short frame image is defined as: acquiring a third weight and a fourth weight corresponding to a pixel point in the short frame image based on the second weight mapping table; and weighting and generating a second weighted short frame image based on the pixel point, other pixel points adjacent to the pixel point, the third weight and the fourth weight.
In the above embodiments, the other pixel points are the pixel points before and after the pixel point in the specified gradient direction, such as the left-side pixel point, the right-side pixel point, the upper-side pixel point, and the lower-side pixel point.
For example, the first weighted long frame image, the first weighted short frame image, the second weighted long frame image, and the second weighted short frame image are obtained in step 202, and the detailed description is omitted here.
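To make the weighted filtering concrete, the following is a hedged Python sketch of a gradient-mapped, edge-preserving filter. The 4-neighborhood, the gradient estimate, and the LUT layout are assumptions; the source fixes only that, for the same gradient value, the fine table yields larger weights than the coarse table.

```python
import numpy as np

def gradient_mapped_filter(y, weight_lut):
    """Average each pixel with its 4-neighborhood, with weights looked up
    from the pixel's gradient magnitude.

    weight_lut: length-256 float array mapping a gradient value to the
    pixel's own weight; the neighbors share the complement. Pass a fine or
    coarse table to emulate the fine/coarse filtering, respectively.
    """
    y = y.astype(np.float32)
    # Simple horizontal/vertical gradient estimate, padded at the border.
    gx = np.abs(np.diff(y, axis=1, append=y[:, -1:]))
    gy = np.abs(np.diff(y, axis=0, append=y[-1:, :]))
    grad = np.clip(gx + gy, 0, 255).astype(np.uint8)
    w_self = weight_lut[grad]                 # per-pixel self weight
    w_nbr = (1.0 - w_self) / 4.0              # remainder split over 4 neighbors
    # np.roll wraps at the borders; a real implementation would pad instead.
    nbr_sum = (np.roll(y, 1, 0) + np.roll(y, -1, 0) +
               np.roll(y, 1, 1) + np.roll(y, -1, 1))
    return w_self * y + w_nbr * nbr_sum
```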
In the above embodiment, adjusting the brightness of the long frame image based on the first weighted long frame image to generate the first long frame image may include: if the average of the brightness values of all pixel points in the first weighted long frame image is greater than a first threshold, performing gamma correction on the brightness of the long frame image with a first gamma coefficient to generate the first long frame image; if the average is not greater than the first threshold, keeping the brightness of the long frame image unchanged to generate the first long frame image.

Adjusting the brightness of the short frame image based on the first weighted short frame image to generate the first short frame image may include: if the average of the brightness values of all pixel points in the first weighted short frame image is less than a second threshold, performing gamma correction on the brightness of the short frame image with a second gamma coefficient to generate the first short frame image; if the average is not less than the second threshold, keeping the brightness of the short frame image unchanged to generate the first short frame image. The first gamma coefficient is less than 1 and the second gamma coefficient is greater than 1.
After the first long frame image is obtained, the contrast of the first long frame image can be adjusted based on the second weighted long frame image, so as to generate a second long frame image. After the first short frame image is obtained, the contrast of the first short frame image can be adjusted based on the second weighted short frame image, so as to generate a second short frame image.
In the above embodiment, the brightness adjustment process for the long frame image and the brightness adjustment process for the short frame image can be referred to in step 204, and the detailed description is not repeated here. For the contrast adjustment process of the first long frame image and the contrast adjustment process of the first short frame image, refer to step 205, and the detailed description is not repeated here.
As can be seen from the above technical solutions, in the embodiments of the present application, because the weight corresponding to a pixel point in the first weight mapping table is greater than its weight in the second weight mapping table, the brightness of the first weighted long frame image is closer to the brightness of the long frame image than that of the second weighted long frame image. When the brightness of the long frame image is adjusted based on the first weighted long frame image, the bright-area information of the long frame image can therefore be fully preserved, the useful detail information of the overexposed regions is fully utilized, and the loss of useful detail information is avoided.
Similarly, compared with the second weighted short frame image, the brightness of the first weighted short frame image is closer to that of the short frame image, and when the brightness of the short frame image is adjusted based on the first weighted short frame image, the brightness information of the short frame image can be fully preserved, avoiding the loss of useful detail information in the short frame image.
Compared with the first weighted long frame image, the brightness of the second weighted long frame image differs more from the brightness of the long frame image. When the contrast (the ratio of the brightest (white) to the darkest (black)) of the first long frame image is adjusted based on the second weighted long frame image, the contrast can thus be adjusted using brightness values with a larger difference, achieving contrast enhancement and giving the second long frame image more tonal levels. Similarly, compared with the first weighted short frame image, the brightness of the second weighted short frame image differs more from the brightness of the short frame image; when the contrast of the first short frame image is adjusted based on the second weighted short frame image, the contrast can be adjusted using brightness values with a larger difference, achieving contrast enhancement and giving the second short frame image more tonal levels.
In summary, the brightness of the long frame image is adjusted with the first weighted long frame image to obtain the first long frame image, whose contrast is then adjusted with the second weighted long frame image to obtain the second long frame image; likewise, the brightness of the short frame image is adjusted with the first weighted short frame image to obtain the first short frame image, whose contrast is then adjusted with the second weighted short frame image to obtain the second short frame image. The second long frame image and the second short frame image are then fused to obtain the first new image. A sketch of this two-branch pipeline follows.
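Tying the pieces together, this is a hypothetical end-to-end sketch of the luminance pipeline, chaining the helper functions from the earlier sketches. filter_coarse and filter_fine stand in for gradient_mapped_filter instances built with the coarse and fine lookup tables, and all parameter values are assumptions.

```python
def fuse_pair(y_long, y_short, w_long, w_short):
    """Two-branch fusion of a long/short luminance pair (sketch)."""
    l_co, l_fi = filter_coarse(y_long), filter_fine(y_long)    # weighted long frame images
    s_co, s_fi = filter_coarse(y_short), filter_fine(y_short)  # weighted short frame images
    # Brightness adjustment (first long/short frame images); gamma values assumed.
    y1 = gamma_brightness_correct(y_long, l_fi, gamma=0.8, long_frame=True)
    y2 = gamma_brightness_correct(y_short, s_fi, gamma=1.2, long_frame=False)
    # Coarse branch -> first new image, fine branch -> second new image.
    first_new = weighted_fuse(contrast_adjust(y1, l_co), w_long,
                              contrast_adjust(y2, s_co), w_short)
    second_new = weighted_fuse(contrast_adjust(y1, l_fi), w_long,
                               contrast_adjust(y2, s_fi), w_short)
    return fuse_luminance(first_new, second_new)  # target luminance channel
```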
In one possible implementation, the contrast of the first long frame image may be adjusted based on the first weighted long frame image, to generate a third long frame image (i.e., the second long frame image to be fused), and the contrast of the first short frame image may be adjusted based on the first weighted short frame image, to generate a third short frame image (i.e., the second short frame image to be fused). Then, a second new image is generated based on the third long frame image and the third short frame image. On this basis, the first new image and the second new image can be fused to generate the target image. For example, the first new image may be a first fused image and the second new image may be a second fused image.
In the process of generating the first new image based on the second long frame image and the second short frame image, the second long frame image, the long frame weight image corresponding to the second long frame image, and the short frame weight image corresponding to the second short frame image can be subjected to weighted fusion to obtain the first new image.
In the process of generating the second new image based on the third long frame image and the third short frame image, the third long frame image, the long frame weight image corresponding to the third long frame image, and the short frame weight image corresponding to the third short frame image can be weighted and fused to obtain the second new image.
Regarding the manner of acquisition of the long frame weight image and the short frame weight image, reference may be made to step 203.
With respect to the manner in which the first new image and the second new image are acquired, reference may be made to step 206.
In one possible implementation, the short frame image, the long frame image, and the target image are images containing a license plate. On this basis, the target image can be further processed to generate a processed target image, such that the pixel points of the license plate region in the processed target image are determined by the pixel points of the license plate region in the short frame image, while the pixel points of the non-license-plate region in the processed target image remain identical to the corresponding pixel points in the target image.
The pixel points of the license plate region in the processed target image are determined by the pixel points of the license plate region in the short frame image and the ratio of the brightness average value of all the pixel points in the target image to the brightness average value of all the pixel points in the short frame image.
The pixel points of the license plate surrounding area in the processed target image are determined by the pixel points of the license plate surrounding area in the short frame image and the pixel points of the license plate surrounding area in the target image.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (7)

1. An image processing method, the method comprising:
performing brightness adjustment on a brightness channel of the long-frame image based on the obtained long-frame fine-filtering image to obtain a long-frame correction image; performing brightness adjustment on a brightness channel of the short-frame image based on the acquired short-frame fine-filtering image to obtain a short-frame correction image; the long frame image brightness channel and the short frame image brightness channel are brightness channels aiming at the same target scene, and the exposure time of the long frame image brightness channel is longer than the exposure time of the short frame image brightness channel;
Performing contrast adjustment on the long-frame correction image based on the obtained long-frame coarse-filter image to obtain a first long-frame image to be fused; performing contrast adjustment on the long-frame correction image based on the long-frame fine-filtering image to obtain a second long-frame image to be fused; performing contrast adjustment on the short frame correction image based on the obtained short frame coarse filter image to obtain a first short frame image to be fused; performing contrast adjustment on the short frame correction image based on the short frame fine filter image to obtain a second short frame image to be fused;
performing weighted fusion on a first long-frame to-be-fused image and a first short-frame to-be-fused image to obtain a first fused image, performing weighted fusion on a second long-frame to-be-fused image and a second short-frame to-be-fused image to obtain a second fused image, and generating a target image based on the first fused image and the second fused image;
wherein before the performing brightness adjustment on the long frame image brightness channel based on the obtained long frame fine-filtering image to obtain the long frame correction image, the method further comprises:
performing coarse filtering processing and fine filtering processing on the long frame image brightness channel respectively to obtain a long frame coarse filtering image and a long frame fine filtering image, and performing coarse filtering processing and fine filtering processing on the short frame image brightness channel respectively to obtain a short frame coarse filtering image and a short frame fine filtering image;
The coarse filtering process includes: inquiring a configured coarse filtering gradient mapping curve table through the gradient value of each pixel point in a long-frame image brightness channel or a short-frame image brightness channel to obtain a first weight value of the pixel point and a second weight value of surrounding pixel points of the pixel point, wherein the coarse filtering gradient mapping curve table is used for representing the mapping relation between the gradient value and the weight value; determining a target brightness value of the pixel point based on the brightness value of the pixel point, the first weight value, the brightness values of surrounding pixel points of the pixel point and the second weight value; determining a long-frame coarse-filter image or a short-frame coarse-filter image based on the target brightness value of each pixel point;
the fine filtering process includes: inquiring a configured fine filtering gradient mapping curve table through the gradient value of each pixel point in a long-frame image brightness channel or a short-frame image brightness channel to obtain a third weight value of the pixel point and a fourth weight value of surrounding pixel points of the pixel point, wherein the fine filtering gradient mapping curve table is used for representing the mapping relation between the gradient value and the weight value; determining a target brightness value of the pixel point based on the brightness value of the pixel point, the third weight value, the brightness values of surrounding pixel points of the pixel point and the fourth weight value; determining a long-frame fine-filtered image or a short-frame fine-filtered image based on the target brightness value of each pixel point;
The weight value corresponding to the gradient value in the fine filtering gradient mapping curve table is larger than the weight value corresponding to the gradient value in the coarse filtering gradient mapping curve table aiming at the same gradient value.
2. The method according to claim 1, wherein the performing brightness adjustment on the long frame image brightness channel based on the obtained long frame fine filter image to obtain a long frame correction image includes: if the average value of the brightness values of all the pixel points in the long-frame fine-filtered image is larger than a first threshold value, gamma correction is carried out on the brightness values of the brightness channels of the long-frame image by adopting a first gamma coefficient, so that a long-frame corrected image is obtained; otherwise, keeping the brightness value of the brightness channel of the long-frame image unchanged to obtain a long-frame correction image;
the adjusting the brightness of the brightness channel of the short frame image based on the obtained short frame fine filtering image to obtain a short frame correction image comprises the following steps: if the average value of the brightness values of all the pixel points in the short-frame fine-filtered image is smaller than a second threshold value, gamma correction is carried out on the brightness values of the brightness channels of the short-frame image by adopting a second gamma coefficient, so that a short-frame corrected image is obtained; otherwise, keeping the brightness value of the brightness channel of the short frame image unchanged to obtain a short frame correction image; wherein the first gamma coefficient is less than 1 and the second gamma coefficient is greater than 1.
3. The method according to claim 1, wherein the performing weighted fusion on the first long frame to-be-fused image and the first short frame to-be-fused image to obtain a first fused image, and performing weighted fusion on the second long frame to-be-fused image and the second short frame to-be-fused image to obtain a second fused image includes:
performing weighted fusion based on the first long frame image to be fused, an acquired long frame weight image, the first short frame image to be fused, and an acquired short frame weight image to obtain the first fused image;
and carrying out weighted fusion on the basis of the second long-frame image to be fused, the long-frame weight image, the second short-frame image to be fused and the short-frame weight image to obtain the second fused image.
4. The method of claim 1, wherein,
before the generating a target image based on the first fused image and the second fused image, the method further includes: weighting and fusing are carried out on the basis of the long frame image chromaticity channel, the acquired long frame weight image, the short frame image chromaticity channel and the acquired short frame weight image, so as to obtain a chromaticity fusion channel; wherein the long frame image chrominance channel and the short frame image chrominance channel are chrominance channels for the target scene;
The generating a target image based on the first fused image and the second fused image includes:
generating a luminance fusion channel based on the first fusion image and the second fusion image;
and generating a target image based on the brightness fusion channel and the chromaticity fusion channel.
5. The method of any of claims 1-4, wherein after the generating a target image based on the first fused image and the second fused image, the method further comprises:
if the target image comprises a license plate region sub-image and the acquired short frame image comprises a license plate region sub-image, generating a license plate migration sub-image based on the license plate region sub-image in the short frame image; wherein the short frame image comprises a short frame image brightness channel and a short frame image chromaticity channel;
and replacing the license plate region sub-image in the target image by the license plate migration sub-image.
6. An image processing method, comprising:
aiming at the same target scene, acquiring a short frame image with a first exposure time length and a long frame image with a second exposure time length, wherein the second exposure time length is longer than the first exposure time length;
Weighting the long frame image based on a first weight mapping table to generate a first weighted long frame image; adjusting the brightness of the long frame image based on the first weighted long frame image to generate a first long frame image;
weighting the short frame image based on a first weight mapping table to generate a first weighted short frame image; adjusting the brightness of the short frame image based on the first weighted short frame image to generate a first short frame image;
weighting the long frame image based on a second weight mapping table to generate a second weighted long frame image; adjusting the contrast of the first long frame image based on the second weighted long frame image to generate a second long frame image;
weighting the short frame image based on a second weight mapping table to generate a second weighted short frame image; adjusting the contrast of the first short frame image based on the second weighted short frame image to generate a second short frame image;
generating a first new image based on the second long frame image and the second short frame image;
adjusting the contrast of the first long frame image based on the first weighted long frame image to generate a third long frame image; adjusting the contrast of the first short frame image based on the first weighted short frame image to generate a third short frame image; generating a second new image based on the third long frame image and the third short frame image;
Fusing the first new image and the second new image to generate a target image;
wherein the first weight mapping table and the second weight mapping table satisfy the following relationship: the weight corresponding to a pixel point in the preset first weight mapping table is larger than the weight corresponding to the same pixel point in the preset second weight mapping table.
7. The method of claim 6, wherein the short frame image, the long frame image, and the target image are images including a license plate; the method further comprises the steps of: and processing the target image to generate a processed target image, so that the pixel points of the license plate region in the processed target image are determined by the pixel points of the license plate region in the short frame image, and the pixel points of the non-license plate region in the processed target image and the corresponding pixel points in the target image are kept unchanged.
GR01 Patent grant