
CN114638749A - Low-illumination image enhancement model, method, electronic device and storage medium - Google Patents

Low-illumination image enhancement model, method, electronic device and storage medium

Info

Publication number
CN114638749A
CN114638749A (application number CN202210135560.2A)
Authority
CN
China
Prior art keywords
illumination
layer
initialization
module
optimized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210135560.2A
Other languages
Chinese (zh)
Other versions
CN114638749B (en)
Inventor
王旭
翁键
邬文慧
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority claimed from CN202210135560.2A
Publication of CN114638749A
Application granted
Publication of CN114638749B
Legal status: Active

Classifications

    • G PHYSICS > G06 COMPUTING OR CALCULATING; COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 5/00 Image enhancement or restoration > G06T 5/70 Denoising; Smoothing
    • G PHYSICS > G06 COMPUTING OR CALCULATING; COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 18/00 Pattern recognition > G06F 18/20 Analysing > G06F 18/25 Fusion techniques > G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS > G06 COMPUTING OR CALCULATING; COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks > G06N 3/04 Architecture, e.g. interconnection topology > G06N 3/045 Combinations of networks
    • G PHYSICS > G06 COMPUTING OR CALCULATING; COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks > G06N 3/08 Learning methods
    • G PHYSICS > G06 COMPUTING OR CALCULATING; COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 5/00 Image enhancement or restoration > G06T 5/90 Dynamic range modification of images or parts thereof


Abstract



The invention relates to the technical field of image processing and provides a low-light image enhancement model, method, electronic device and storage medium. The low-light image enhancement model includes an initialization module, an optimization module, an illumination adjustment module and an image reconstruction module connected in sequence. The initialization module decomposes the input image to obtain an initialized illumination layer and an initialized reflection layer; the optimization module alternately and iteratively optimizes the initialized illumination layer and the initialized reflection layer using an unfolding algorithm to obtain an optimized illumination layer and an optimized reflection layer; the illumination adjustment module adjusts the illumination of the optimized illumination layer to obtain a target illumination layer; and the image reconstruction module reconstructs the image from the target illumination layer and the optimized reflection layer to obtain the target illumination image. The robustness of the low-light image enhancement model is thereby improved while its flexibility and interpretability are preserved.


Description

Low-illumination image enhancement model, method, electronic device and storage medium
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a low-illumination image enhancement model, a low-illumination image enhancement method, electronic equipment and a storage medium.
Background
Visual perception is one of the most important ways for humans to know the world. With the rapid development of society, the demand for information keeps growing, yet in real applications unpredictable conditions often degrade image quality. Among these degradations, low-light imaging has long been a topic of interest in the computer vision community, especially in fields such as urban traffic, surveillance video and medical assistance. A low-illumination image usually results from insufficient illumination intensity or too short an exposure time, so the overall pixel intensity of the acquired image is low, the contrast is low, and a large amount of detail is invisible, which seriously affects both the visual experience and the accuracy of downstream algorithms.
In existing low-illumination image enhancement techniques, most deep learning algorithms combine traditional Retinex theory with deep learning, but they solve the intermediate variables directly with a network, omitting the complex prior assumptions and iterative procedures of traditional image processing. Although deep learning methods have the advantage of fast inference, their lack of interpretability limits the development of the technology to a certain extent. To accelerate the convergence of traditional iterative optimization algorithms, as early as 2010 LeCun et al. designed an iterative solving framework integrated with deep learning. Its idea is that the nonlinear operator and the linear matrix of each iteration in a traditional iterative algorithm can be replaced, respectively, by an activation function and a neural network from the deep learning domain; driven by data, the whole optimization process adapts to the real distribution of the data without depending on a specific prior assumption, and therefore achieves stronger robustness and better results with fewer iterations. Since then, this class of algorithms that unroll an iterative process (hereinafter, the unfolding algorithm) has gradually been applied to domains such as image super-resolution reconstruction, image denoising and pulse compression. Recently, Liu et al. applied the unfolding algorithm to low-illumination enhancement for illumination estimation and denoising.
Although this work provides a new solution for unsupervised low-light enhancement, it decouples illumination from noise, which risks over-smoothing and residual noise in the enhanced image. Moreover, because the model ignores the reflection layer, it is prone to overexposure in most cases, so the image enhancement effect is not ideal.
Disclosure of Invention
The invention aims to provide a low-illumination image enhancement model, a low-illumination image enhancement method, electronic equipment and a storage medium, and aims to solve the problem that the image enhancement effect in the prior art is not ideal.
In one aspect, the present invention provides a low-light image enhancement model, which includes an initialization module, an optimization module, a light adjustment module, and an image reconstruction module, which are connected in sequence,
the initialization module is used for performing initialization decomposition on an input image to obtain an initialization illumination layer and an initialization reflection layer corresponding to the input image;
the optimization module is used for performing a plurality of times of alternate iterative optimization on the initialization illumination layer and the initialization reflection layer by adopting an unfolding algorithm to obtain an optimization illumination layer and an optimization reflection layer;
the illumination adjusting module is used for adjusting illumination of the optimized illumination layer to obtain a target illumination layer;
and the image reconstruction module is used for reconstructing an image according to the target illumination layer and the optimized reflection layer to obtain a target illumination image.
Preferably, the initialization module is a fully convolutional neural network.
Preferably, the fully convolutional neural network comprises 4 convolutional layers.
Preferably, the loss function adopted by the initialization module during training includes a fidelity term and a prior term; the fidelity term measures how close the initialized image composed of the initialized illumination layer and the initialized reflection layer of a training sample is to the training sample, and the prior term measures how close the initialized illumination layer of the training sample is to the per-pixel maximum over the R, G and B channels of the training sample.
Preferably, the loss function adopted by the initialization module during training is as follows:
L_init = ‖R_0·L_0 − I‖_1 + μ‖L_0 − max_{c∈{R,G,B}} I_c‖_2

wherein L_init represents the loss of the initialization module, I represents the training sample, R_0 represents the initialized reflection layer of the training sample, L_0 represents the initialized illumination layer of the training sample, μ is a constant, and R, G and B represent the red, green and blue channels, respectively.
Preferably, the optimization module includes a variable computation sub-network, a reflection layer repair network and an illumination layer repair network. When performing the current round of alternate iterative optimization, the variable computation sub-network computes the first intermediate variable and the second intermediate variable of the current iteration, the reflection layer repair network obtains the optimized reflection layer of the current iteration based on the first and second intermediate variables, and the illumination layer repair network obtains the optimized illumination layer of the current iteration based on the second intermediate variable.
Preferably, the variable computation sub-network calculates the first and second intermediate variables of the current iteration using the least squares method.
Preferably, the reflection layer repair network performs convolution operations on the first and second intermediate variables of the current iteration to obtain a first intermediate feature map and a second intermediate feature map, concatenates the first intermediate feature map and the second intermediate feature map to obtain a spliced feature map, performs channel attention calculation on the spliced feature map using a channel attention mechanism to obtain a re-weighted feature map, obtains the noise distribution of the re-weighted feature map, and obtains the optimized reflection layer of the current iteration based on the noise distribution and the first intermediate feature map.
Preferably, the illumination adjustment module comprises an adjustment factor expansion submodule, a splicing submodule and a brightness adjustment network which are connected in sequence, wherein,
the adjustment factor expansion submodule is used for expanding a preset adjustment scale factor into a matrix with the same size as the optimized illumination layer;
the splicing submodule is used for splicing the matrix and the optimized illumination layer to obtain a splicing result;
and the brightness adjusting network is used for adjusting the brightness of the optimized illumination layer based on the splicing result to obtain the target illumination layer.
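The expand-and-splice input construction described in this module can be sketched in a few lines of numpy; the function name and channel layout below are illustrative assumptions, not part of the claimed model:

```python
import numpy as np

def prepare_adjustment_input(L_opt: np.ndarray, alpha: float) -> np.ndarray:
    """Build the brightness adjustment network's input from the
    optimized illumination layer and a preset adjustment scale factor.

    L_opt: H x W float array (the optimized illumination layer).
    alpha: scalar adjustment scale factor.
    Returns an H x W x 2 array: channel 0 is the illumination layer,
    channel 1 is the factor expanded to a matrix of the same size.
    """
    # Expand the scalar factor into a matrix the same size as L_opt.
    alpha_map = np.full_like(L_opt, alpha)
    # Splice (concatenate) the matrix with the optimized illumination layer.
    return np.stack([L_opt, alpha_map], axis=-1)
```

The brightness adjustment network would then map this two-channel input to the target illumination layer.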
Preferably, the network structure of the brightness adjustment network is the same as that of the initialization module, and the size of the convolution kernel of the convolution layer of the brightness adjustment network is larger than that of the convolution kernel of the convolution layer of the initialization module.
Preferably, the loss function used in brightness adjustment network training includes one or more of a gradient-level fidelity term, a color-level fidelity term and a structure-level fidelity term, where the gradient-level fidelity term measures the horizontal or vertical gradient distance between the optimized illumination layer and the target illumination layer of a training sample, the color-level fidelity term measures the reconstruction loss between the target illumination image and the reference image of the training sample, and the structure-level fidelity term measures the distance between the target illumination image and the reference image of the training sample.
Preferably, the loss function adopted in the brightness adjustment network training is as follows:
L_adjust = ‖∇L̂ − ∇L‖_1 + ‖R·L̂ − I_ref‖_1 + SSIM(R·L̂, I_ref)

wherein L_adjust represents the loss of the brightness adjustment network, ∇L represents the gradient of the optimized illumination layer of the training sample in the horizontal or vertical direction, ∇L̂ represents the gradient of the target illumination layer of the training sample in the horizontal or vertical direction, I_ref represents the reference image, R represents the optimized reflection layer of the training sample, L̂ represents the target illumination layer of the training sample, and SSIM represents an image quality loss function.
In another aspect, the present invention provides a low-light image enhancement method based on the above low-light image enhancement model, including the following steps:
performing initialization decomposition on an input image through the initialization module to obtain an initialization illumination layer and an initialization reflection layer corresponding to the input image;
performing a plurality of times of alternate iterative optimization on the initialization illumination layer and the initialization reflection layer by adopting an unfolding algorithm through the optimization module to obtain an optimization illumination layer and an optimization reflection layer;
performing illumination adjustment on the optimized illumination layer through the illumination adjustment module to obtain a target illumination layer;
and reconstructing an image according to the target illumination layer and the optimized reflection layer through the image reconstruction module to obtain a target illumination image.
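To make the data flow of the four steps concrete, here is a minimal end-to-end numpy sketch in which every learned component is replaced by a simple analytic stand-in (channel-max initialization, least-squares refinement without the repair networks, and a gamma curve instead of the brightness adjustment network); it illustrates only the structure of the pipeline, not the trained model:

```python
import numpy as np

def enhance_low_light(I: np.ndarray, alpha: float = 0.5,
                      lam: float = 1.0, K: int = 3,
                      eps: float = 1e-6) -> np.ndarray:
    """End-to-end sketch of the four steps with analytic stand-ins.

    I: H x W x 3 image in [0, 1]; alpha: adjustment scale factor;
    lam: least-squares penalty weight; K: number of alternate rounds.
    """
    # Step 1: initialization decomposition (channel-max stand-in).
    L = I.max(axis=2)
    R = I / (L[..., None] + eps)
    # Step 2: K rounds of alternate least-squares refinement of the
    # intermediate variables (the learned repair networks are omitted).
    for _ in range(K):
        P = (I * L[..., None] + lam * R) / (L[..., None] ** 2 + lam)
        L = ((I * P).mean(axis=2) + lam * L) / ((P ** 2).mean(axis=2) + lam)
        R = P
    # Step 3: illumination adjustment (a gamma curve stands in for
    # the learned brightness adjustment network).
    L_target = np.clip(L, eps, 1.0) ** alpha
    # Step 4: image reconstruction: target image = R * L_target.
    return np.clip(R * L_target[..., None], 0.0, 1.0)
```

With alpha < 1 the stand-in gamma curve brightens dark regions, mimicking the role of the illumination adjustment step.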
In another aspect, the present invention also provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method when executing the computer program.
In another aspect, the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method as described above.
The low-illumination image enhancement model comprises an initialization module, an optimization module, an illumination adjustment module and an image reconstruction module which are sequentially connected. The initialization module performs initialization decomposition on an input image to obtain an initialized illumination layer and an initialized reflection layer corresponding to the input image; the optimization module performs several rounds of alternate iterative optimization on the initialized illumination layer and the initialized reflection layer using an unfolding algorithm to obtain an optimized illumination layer and an optimized reflection layer; the illumination adjustment module performs illumination adjustment on the optimized illumination layer to obtain a target illumination layer; and the image reconstruction module reconstructs an image from the target illumination layer and the optimized reflection layer to obtain a target illumination image.
Drawings
Fig. 1A is a schematic structural diagram of a low-light image enhancement model according to an embodiment of the present invention;
fig. 1B is a schematic diagram illustrating an operation principle of a reflective layer repair network according to an embodiment of the present invention;
fig. 1C is a schematic diagram illustrating an operating principle of a low-illumination image enhancement model according to an embodiment of the present invention;
FIG. 2A shows visualization results of the enhancement effects of 10 different low-light image enhancement models on the SICE data set according to the second embodiment of the present invention;
fig. 2B is a visualization result of enhancement effects of 10 different low-light image enhancement models provided by the second embodiment of the present invention on an LOL data set;
FIG. 3 is a flowchart of an implementation of a low-light image enhancement method according to a third embodiment of the present invention; and
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of specific implementations of the present invention is provided in conjunction with specific embodiments:
the first embodiment is as follows:
fig. 1A illustrates a structure of a low-illumination image enhancement model according to an embodiment of the present invention, where the low-illumination image enhancement model includes an initialization module 11, an optimization module 12, an illumination adjustment module 13, and an image reconstruction module 14, which are connected in sequence; the initialization module is used for performing initialization decomposition on an input image to obtain an initialization illumination layer and an initialization reflection layer corresponding to the input image; the optimization module is used for performing a plurality of times of alternate iterative optimization on the initialization illumination layer and the initialization reflection layer by adopting an unfolding algorithm to obtain an optimization illumination layer and an optimization reflection layer; the illumination adjusting module is used for performing illumination adjustment on the optimized illumination layer to obtain a target illumination layer; the image reconstruction module is used for reconstructing an image according to the target illumination layer and the optimized reflection layer to obtain a target illumination image. The input image is a low-light image to be subjected to image enhancement.
For the initialization module: considering that variable initialization plays an important role in iterative optimization algorithms (e.g., ADMM), the initialization module may perform the initialization decomposition of the input image using the common all-zero or random initialization. However, for the initialized decomposition to point the subsequent optimization in the right direction, the initialized illumination layer and the initialized reflection layer should contain the important information of the input image. Assuming that the reflection layer is an RGB three-channel image and that the three channels share the same illumination layer, the per-pixel maximum over the three channels of the input image can be directly assigned to the illumination layer as the initialized illumination layer. According to Retinex theory, the color of an object is determined by the reflectivity of its surface and the illumination intensity falling on that surface, which gives the following imaging expression:
I=R·L (1)
where I denotes an input image, R denotes a reflective layer of the input image, and L denotes an illumination layer of the input image, and thus, the initialized reflective layer can be obtained based on the Retinex theory described above.
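A minimal numpy sketch of this channel-max initialization under I = R·L; the function and variable names are illustrative:

```python
import numpy as np

def init_decompose(image: np.ndarray, eps: float = 1e-6):
    """Channel-max initialization under the Retinex model I = R * L.

    image: H x W x 3 array with values in [0, 1].
    Returns (R0, L0): the initialized reflection layer (H x W x 3)
    and the shared single-channel illumination layer (H x W).
    """
    # Assign the per-pixel maximum over R, G, B to the illumination
    # layer, so that all three channels share the same L.
    L0 = image.max(axis=2)
    # Invert I = R * L element-wise to recover the reflection layer
    # (eps guards against division by zero in black regions).
    R0 = image / (L0[..., None] + eps)
    return R0, L0
```

Multiplying R0 and L0 back together recovers the input image up to the eps guard, which is exactly the consistency that expression (1) requires.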
However, the above initialization decomposition method enlarges the differences among the three channel values of the reflection layer and thus destroys the statistical properties of R, G and B. Therefore, the initialization module preferably uses a neural network to realize the initialization decomposition into illumination and reflection layers, so that the initialized illumination layer and the initialized reflection layer contain more information while the statistical properties of R, G and B are preserved. Further preferably, the network is a fully convolutional neural network comprising 4 convolutional layers, which improves the operating efficiency of the low-illumination image enhancement model.
Specifically, the input of the initialization module is a low-illumination image, the output is an initialization illumination layer and an initialization reflection layer corresponding to the input image, and the abstract representation is as follows:
(R_0, L_0) = F_D(I; θ_D)

wherein R_0 denotes the initialized reflection layer, L_0 denotes the initialized illumination layer, F_D denotes the initialization module, I denotes the input image, and θ_D denotes the parameters of the initialization module.
For the initialization decomposition to preserve effective information of the input image, preferably, the loss function adopted by the initialization module in training includes a fidelity term and a prior term, so that the initialized illumination layer and the initialized reflection layer retain the effective information of the input image. The fidelity term measures how close the initialized image composed of the initialized illumination layer and the initialized reflection layer of a training sample is to the training sample, ensuring that the initialization decomposition satisfies Retinex theory; the prior term measures how close the initialized illumination layer of the training sample is to the per-pixel maximum over the R, G and B channels of the training sample, so that the initialized illumination layer learns richer structural information. The training sample is the input image used during low-illumination image enhancement model training.
Preferably, the fidelity term in the loss function adopted during initialization module training is calculated with the L1 norm and the prior term with the L2 norm, to ensure the training effect of the initialization module and to let the illumination and reflection layers after initialization decomposition retain as much effective information of the input image as possible. The loss function adopted during initialization module training is as follows:
L_init = ‖R_0·L_0 − I‖_1 + μ‖L_0 − max_{c∈{R,G,B}} I_c‖_2

wherein L_init denotes the loss of the initialization module, I denotes the training sample (the input image during model training), R_0 denotes the initialized reflection layer of the training sample, L_0 denotes the initialized illumination layer of the training sample, μ is a constant, and R, G and B denote the red, green and blue channels, respectively.
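Assuming the L1 fidelity and L2 prior formulation described above, the loss can be sketched in numpy as follows; the value of μ is an illustrative choice, since the text only states that it is a constant:

```python
import numpy as np

def init_loss(I: np.ndarray, R0: np.ndarray, L0: np.ndarray,
              mu: float = 0.1) -> float:
    """L1 fidelity term plus weighted L2 prior term.

    I:  H x W x 3 training sample; R0: H x W x 3 initialized reflection
    layer; L0: H x W initialized illumination layer; mu: prior weight.
    """
    # Fidelity: the recomposed image R0 * L0 should stay close to I (L1 norm).
    fidelity = np.abs(R0 * L0[..., None] - I).sum()
    # Prior: L0 should stay close to the per-pixel max over R, G, B (L2 norm).
    prior = np.sqrt(((L0 - I.max(axis=2)) ** 2).sum())
    return fidelity + mu * prior
```

A decomposition that exactly reproduces the input with the channel-max illumination layer drives both terms to zero.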
For the optimization module: according to Retinex theory, the decomposition problem of imaging expression (1) is ill-posed, so prior model constraints on R and L need to be established when solving it, expressed as:
min_{R,L} ‖R·L − I‖_F^2 + Φ(R) + Ψ(L) (4)
where Φ (R) and Ψ (L) represent a priori assumptions for the reflective layer R and the illumination layer L, respectively. To solve the problem, two new variables are introduced, namely a first intermediate variable P and a second intermediate variable Q, and equation (4) can then be rewritten as:
min_{R,L,P,Q} ‖P·Q − I‖_F^2 + λ(‖R − P‖_F^2 + ‖L − Q‖_F^2) + Φ(R) + Ψ(L) (5)
so far, formula (5) can be reasonably divided into fidelity terms and prior terms for independent solution, and the following expression is obtained:
P_k = argmin_P ‖P·Q_{k−1} − I‖_F^2 + λ‖P − R_{k−1}‖_F^2 (6)
Q_k = argmin_Q ‖P_k·Q − I‖_F^2 + λ‖Q − L_{k−1}‖_F^2 (7)
R_k = argmin_R λ‖R − P_k‖_F^2 + Φ(R) (8)
L_k = argmin_L λ‖L − Q_k‖_F^2 + Ψ(L) (9)
wherein P_k and Q_k represent the solutions of the fidelity terms at the k-th iteration, and R_k and L_k represent the solutions of the prior terms at the k-th iteration. When k = 1, the iteration starts from the initialized reflection layer and the initialized illumination layer, i.e., R_0 and L_0.
Considering that the initialized reflection layer tends to be saturated with noise, which severely affects the presentation of important details, while the initialized illumination layer not only completely retains the structural information but also keeps redundant texture details to a large extent, the purpose of the optimization module in this embodiment is to repair the initialized reflection layer so that effective details are fully retained while noise is removed; a satisfactory illumination layer should be structurally complete and smooth in texture detail. To achieve this, the fidelity terms and the prior terms in equations (6)-(9) above can be alternately and iteratively optimized with an unfolding algorithm based on a deep neural network framework. Based on a comprehensive consideration of time and performance, as determined from the actual experimental effect, the number of alternate iterative optimizations performed by the optimization module is preferably 3.
Preferably, the optimization module comprises a variable computation sub-network, a reflection layer repair network and an illumination layer repair network. In the current round of alternate iterative optimization, the variable computation sub-network calculates the first and second intermediate variables of the current iteration, the reflection layer repair network obtains the optimized reflection layer of the current iteration based on the first and second intermediate variables, and the illumination layer repair network obtains the optimized illumination layer of the current iteration based on the second intermediate variable. This replaces the manual prior solving of the traditional Retinex optimization algorithm, so that the optimization module can learn more robust prior information from the data. Both the reflection layer repair network and the illumination layer repair network are built on neural networks. Preferably, the variable computation sub-network calculates the first and second intermediate variables of the current iteration using the least squares method.
In a specific implementation, at the k-th iterative optimization, Q_{k−1} and R_{k−1} obtained from the (k−1)-th iteration are first used to solve equation (6) for the first intermediate variable P_k of the current iteration; then, fixing P_k together with the optimized illumination layer L_{k−1} of the previous iteration, equation (7) is solved for the second intermediate variable Q_k of the current iteration. Equations (6)-(7) can be understood as classical least squares problems, so the following closed-form solutions are obtained by taking derivatives:
P_k = (I·Q_{k−1} + λR_{k−1}) / (Q_{k−1}^2 + λ) (10)
Q_k = (I·P_k + λL_{k−1}) / (P_k^2 + λ) (11)

where the products, squares and divisions are taken element-wise.
wherein k ≥ 1, k is a positive integer, and λ is a constant.
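As a sanity check of the least-squares step, the following numpy sketch implements one plausible element-wise closed form for the updates of the first and second intermediate variables, under the assumption that the fidelity and proximal penalty terms are quadratic as described above:

```python
import numpy as np

def update_P(I, Q_prev, R_prev, lam=1.0):
    # Element-wise minimizer of ||P*Q - I||^2 + lam * ||P - R||^2
    # with Q fixed to the previous iteration's Q.
    return (I * Q_prev + lam * R_prev) / (Q_prev ** 2 + lam)

def update_Q(I, P_k, L_prev, lam=1.0):
    # Element-wise minimizer of ||P*Q - I||^2 + lam * ||Q - L||^2
    # with P fixed to the current iteration's P.
    return (I * P_k + lam * L_prev) / (P_k ** 2 + lam)
```

Setting the derivative of each per-pixel quadratic objective to zero yields these ratios directly, which is why no iterative solver is needed inside a round.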
After P_k and Q_k are obtained, the prior terms R_k and L_k are learned.
Experiments show that the noise on the reflection layer is highly correlated with the brightness distribution on the illumination layer: the higher the brightness, the less the noise, and vice versa. Preferably, the reflection layer repair network performs convolution operations on the computed first and second intermediate variables to obtain a first and a second intermediate feature map, fuses the two to obtain a fused feature map, and then obtains the optimized reflection layer of the current iteration based on the noise distribution of the fused feature map and on the first intermediate feature map. In this way the reflection layer is repaired with the help of the illumination layer's information, denoising of the reflection layer is achieved, and the learning effect of the optimized reflection layer is improved.
Further preferably, the reflection layer repair network performs convolution operations on the first and second intermediate variables of the current iteration to obtain a first intermediate feature map and a second intermediate feature map, concatenates the first intermediate feature map and the second intermediate feature map to obtain a spliced feature map, performs channel attention calculation on the spliced feature map with a channel attention mechanism to obtain a re-weighted feature map, obtains the noise distribution of the re-weighted feature map, and obtains the optimized reflection layer of the current iteration based on the noise distribution and the first intermediate feature map.
The optimized reflection layer R_k at the k-th alternating iterative optimization in equation (8) is abstractly represented as:

R_k = F_R(P_k, Q_k; θ_R)

where F_R denotes the reflection layer repair network, θ_R denotes its parameters, P_k denotes the first intermediate variable of the k-th alternating iterative optimization, and Q_k denotes the second intermediate variable of the k-th alternating iterative optimization.
The optimized illumination layer L_k at the k-th alternating iterative optimization in equation (9) is abstractly represented as:

L_k = F_L(Q_k; θ_L)

where F_L denotes the illumination layer repair network and θ_L denotes its parameters.
Fig. 1B is a schematic diagram of the working principle of the reflection layer repair network in this embodiment. In Fig. 1B, the reflection layer repair network includes a channel self-attention module and a noise extraction module; Conv denotes convolution, the circled C denotes the concatenation operation, Average pooling denotes average pooling, FC denotes a fully connected layer, the circled × denotes element-wise multiplication, and the circled − denotes denoising. The network operates as follows: first, convolution operations are applied to the first intermediate variable P_k and the second intermediate variable Q_k to obtain a first intermediate feature map M1 and a second intermediate feature map M2; M1 and M2 are then concatenated into a spliced feature map M3; the channel self-attention module performs channel attention calculation on M3 to obtain a re-weighted feature map M4; the noise extraction module obtains the noise distribution M5 of M4; finally, M1 is denoised based on M5 to obtain the optimized reflection layer M6 of the current iterative optimization.
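The channel self-attention and final denoising steps of Fig. 1B can be illustrated with a minimal NumPy sketch. This is a squeeze-and-excitation style approximation, not the patented network itself: the convolutions and the noise extraction module are abstracted away, and the weights w1, w2 are random stand-ins for learned parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """Channel self-attention on an H x W x C feature map (M3 -> M4):
    average-pool per channel, two FC layers, then re-weight channels."""
    s = feat.mean(axis=(0, 1))                  # squeeze: per-channel average pooling
    a = sigmoid(np.maximum(s @ w1, 0.0) @ w2)   # excite: FC -> ReLU -> FC -> sigmoid
    return feat * a                             # re-weighted feature map

def denoise(M1, M5):
    # Final step of Fig. 1B: subtract the estimated noise distribution M5
    # from the first intermediate feature map M1.
    return M1 - M5
```

Since the attention weights lie in (0, 1), re-weighting can only attenuate channels, never amplify them.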
For the above illumination adjustment module: in Retinex theory, the reflection layer is an inherent property of an object and does not change with the illumination condition. On this basis, low-light imaging is caused by the low intensity of the illumination layer, so after the optimized illumination layer and the optimized reflection layer are obtained, the optimized reflection layer can be kept fixed while only the optimized illumination layer is adjusted. The illumination adjustment module could adjust the illumination of the illumination layer with the gamma correction technique, which achieves different brightness enhancement effects by tuning its parameter, but a suitable adjustment scale is difficult to determine in this way. Preferably, the illumination adjustment module is configured to perform illumination adjustment on the optimized illumination layer according to a preset adjustment scale factor specified by the user, so that the target illumination layer is generated according to a user-specified brightness adjustment scale. In a specific implementation, the inputs of the illumination adjustment module are the optimized illumination layer and the user-specified adjustment scale factor, and the output is the bright illumination layer at the target adjustment scale, i.e., the target illumination layer.
Preferably, the illumination adjustment module includes an adjustment factor expansion submodule, a splicing submodule and a brightness adjustment network connected in sequence. The adjustment factor expansion submodule expands the preset adjustment scale factor into a matrix of the same size as the optimized illumination layer; the splicing submodule splices the matrix with the optimized illumination layer to obtain a splicing result; and the brightness adjustment network adjusts the brightness of the optimized illumination layer based on the splicing result to obtain the target illumination layer, thereby adjusting the illumination intensity. In a specific implementation, the adjustment scale factor is expanded into a matrix of the same size as the illumination layer and then spliced with the optimized illumination layer as the input of the brightness adjustment network, which can be expressed as:
L̂ = F_A([expand(ω), L]; θ_A)

where ω denotes the adjustment scale factor, expand(ω) denotes the matrix obtained by expanding ω, [·,·] denotes splicing, F_A denotes the brightness adjustment network, θ_A denotes its parameters, L denotes the optimized illumination layer, and L̂ denotes the target illumination layer.
Preferably, the brightness adjustment network has the same network structure as the initialization module, that is, it may be a fully convolutional neural network with 4 convolution layers, except that the convolution kernels of its convolution layers are larger than those of the initialization module, so as to maintain structural consistency while constraining the smoothness of the illumination layer.
To train the brightness adjustment network better, and considering that the bright illumination layer it outputs should be structurally consistent with the input low-brightness illumination layer, the loss function adopted in training the brightness adjustment network preferably includes a gradient-level fidelity term to ensure the training effect. The gradient-level fidelity term measures the horizontal or vertical gradient distance between the optimized illumination layer and the target illumination layer of a training sample.
To reconstruct a normally illuminated image based on the target illumination layer, the loss function adopted in training the brightness adjustment network preferably includes a color-level fidelity term to further improve the training effect. The color-level fidelity term measures the reconstruction loss between the target illumination image of a training sample and the reference image, so that the reconstructed image is a normally illuminated image, i.e., matches the reference image.
To make the reconstructed image consistent with the reference image in structure, brightness and contrast, the loss function adopted in training the brightness adjustment network preferably further includes a structure-level fidelity term, so as to further improve the training effect. The structure-level fidelity term measures the distance between the target illumination image of a training sample and the reference image, so that the reconstructed image is consistent with the reference image in structure, brightness and contrast.
Preferably, the loss function adopted in training the illumination adjustment module includes the gradient-level fidelity term, the color-level fidelity term and the structure-level fidelity term together, so as to further improve the training effect of the brightness adjustment network through all three constraints.
Further preferably, the gradient-level fidelity term is computed with the L1 norm, the color-level fidelity term with the L2 norm, and the structure-level fidelity term with the SSIM (structural similarity) loss, so the loss function adopted in training the brightness adjustment network is expressed as:

L_adjust = ||∇L̂ − ∇L||_1 + ||R·L̂ − I_ref||_2² + SSIM(R·L̂, I_ref)

where L_adjust denotes the loss of the brightness adjustment network, ∇L denotes the horizontal or vertical gradient of the optimized illumination layer L of the training sample, ∇L̂ denotes the horizontal or vertical gradient of the target illumination layer L̂ of the training sample, I_ref denotes the reference image, R denotes the optimized reflection layer of the training sample, and SSIM denotes the structural similarity loss function.
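The three fidelity terms can be illustrated with a small NumPy sketch. This is an assumption-laden simplification: a single-window SSIM replaces the usual sliding-window version, 1 − SSIM is used as the structural penalty (a common convention), the norms are mean-reduced, and weighting coefficients are omitted:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    # Single-window SSIM over the whole image (simplified; c1, c2 illustrative).
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

def adjust_loss(L_opt, L_hat, R, I_ref):
    # Gradient-level fidelity: L1 distance between horizontal/vertical gradients.
    grad = (np.abs(np.diff(L_hat, axis=1) - np.diff(L_opt, axis=1)).mean()
            + np.abs(np.diff(L_hat, axis=0) - np.diff(L_opt, axis=0)).mean())
    recon = R * L_hat[..., None]                 # reconstructed image R * L_hat
    color = ((recon - I_ref) ** 2).mean()        # color-level fidelity (L2)
    struct = 1.0 - ssim_global(recon, I_ref)     # structure-level fidelity (SSIM)
    return grad + color + struct
```

When the target illumination layer equals the input and the reconstruction equals the reference, all three terms vanish.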
After the target illumination layer is obtained, the image reconstruction module multiplies the target illumination layer by the optimized reflection layer to reconstruct the image and obtain the target illumination image.
Fig. 1C is a schematic diagram of the working principle of the low-illumination image enhancement model according to the embodiment of the present invention. In Fig. 1C, the input image I first passes through the initialization module, whose initialized decomposition yields the initialization illumination layer L0 and the initialization reflection layer R0. After T rounds of alternating iterative optimization in the optimization module, the optimized illumination layer and the optimized reflection layer are output. The optimized illumination layer is input to the illumination adjustment module, where the adjustment scale factor ω is spliced with it, and the spliced feature map is fed into the brightness adjustment network to adjust the brightness and obtain the target illumination layer L̂. The image is then reconstructed based on the target illumination layer L̂ and the optimized reflection layer, and the enhanced image, i.e., the target illumination image, is output.
In the embodiment of the invention, the low-illumination image enhancement model includes an initialization module, an optimization module, an illumination adjustment module and an image reconstruction module connected in sequence. The initialization module performs initialized decomposition on the input image to obtain the corresponding initialization illumination layer and initialization reflection layer; the optimization module performs several rounds of alternating iterative optimization on the initialization illumination layer and initialization reflection layer using an unfolding algorithm to obtain the optimized illumination layer and optimized reflection layer; the illumination adjustment module performs illumination adjustment on the optimized illumination layer to obtain the target illumination layer; and the image reconstruction module performs image reconstruction from the target illumination layer and the optimized reflection layer to obtain the target illumination image. The flexibility and interpretability of the low-illumination image enhancement model are thereby ensured, its robustness is improved, and the model can suppress noise while retaining detail information.
In the embodiment of the present invention, each unit/module of the low-illumination image enhancement model may be implemented by a corresponding hardware or software unit, and each unit/module may be an independent software or hardware unit/module or may be integrated into one software or hardware unit/module, which is not limited herein.
Example two:
This embodiment further illustrates the low-illumination image enhancement model described in the first embodiment with reference to an experimental example:
The experimental example evaluates the unfolding-based low-illumination image enhancement model described in the first embodiment subjectively and objectively on two public low-light image enhancement test sets: the LOL dataset and the SICE dataset. The experimental example uses commonly adopted full-reference image quality metrics, namely Mean Absolute Error (MAE), Structural Similarity (SSIM), Peak Signal-to-Noise Ratio (PSNR) and Learned Perceptual Image Patch Similarity (LPIPS). A good model has high PSNR and SSIM scores but low MAE and LPIPS scores. The experimental example compares the low-illumination image enhancement model proposed in the first embodiment with existing reference models, including LIME, NPE, SRIE, RRM, LR3M, Retinex-Net, KinD, Zero-DCE and RUAS.
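Two of the full-reference metrics named above have simple closed forms and can be sketched in NumPy (peak value and reductions are illustrative; SSIM and LPIPS are omitted as they require windowed statistics and a learned network, respectively):

```python
import numpy as np

def mae(x, y):
    # Mean Absolute Error: lower is better.
    return np.abs(x - y).mean()

def psnr(x, y, peak=1.0):
    # Peak Signal-to-Noise Ratio in dB: higher is better.
    mse = ((x - y) ** 2).mean()
    return 10.0 * np.log10(peak ** 2 / mse)
```

For images in [0, 1], a uniform error of 0.1 gives MAE = 0.1 and PSNR = 20 dB.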
The model performance comparisons are given in Table One and Table Two; it is clear that the low-illumination image enhancement model (URetinex-Net) proposed in the first embodiment achieves very good performance on the LOL and SICE datasets compared with the other reference models.
Table One: experimental evaluation of the low-illumination image enhancement model described in the first embodiment and the reference models on the LOL test set
[table provided as an image in the original document]
Table Two: experimental evaluation of the low-illumination image enhancement model described in the first embodiment and the reference models on the SICE test set
[table provided as an image in the original document]
Table One shows the experimental results of the different low-illumination image enhancement models on the LOL test set. It can be seen that the low-illumination image enhancement model (URetinex-Net) proposed in the first embodiment achieves good performance. Compared with traditional Retinex optimization models based on hand-crafted priors, the model of this experimental example shows excellent results on all metrics, indicating that the optimization module described in the first embodiment can learn a more robust prior from data. Compared with other deep-learning-based methods, the model differs from KinD only slightly on the MAE metric (0.0832 vs 0.0804). On the other metrics (PSNR, SSIM and LPIPS), however, the proposed model is significantly better than the other models, which further illustrates the advantage of the enhancement scheme based on alternating iterative optimization proposed in the present application.
In addition, to verify the generalization ability of the low-illumination image enhancement model proposed in the first embodiment, the performance of the model trained on the LOL dataset was evaluated on the SICE dataset; the comparison results are given in Table Two. As is apparent from Table Two, the MAE, PSNR and SSIM scores of the proposed model (URetinex-Net) are significantly better than those of the other reference models, and under the same training data the proposed model exhibits better noise suppression and better retention of image structure information. This shows that the low-illumination image enhancement model proposed in the first embodiment has a relatively strong generalization ability and can achieve good results even in scenes that do not appear in the training set.
The experimental example visualizes some test results of the low-illumination image enhancement models on the LOL and SICE datasets in Figs. 2A and 2B, including the model proposed in the first embodiment (Ours); the reference image (Ground-truth) is displayed in the last column of the last row. Figs. 2A and 2B compare the proposed model with the LIME, NPE, SRIE, RRM, LR3M, Retinex-Net, KinD, Zero-DCE and RUAS models. As shown in Fig. 2B, the proposed model (Ours) performs well in some challenging cases, for example the highlighted regions. It can be seen from Fig. 2B that the brightness in the source image (Input) is very low, so methods that only increase contrast (such as LIME, NPE, SRIE, Retinex-Net and Zero-DCE) inevitably amplify a large amount of noise, which severely disturbs important texture details. Algorithms that do take noise into account (such as RRM, LR3M, KinD and RUAS) reduce the noise significantly, but their excessive smoothing causes important details to be lost. In contrast, the low-illumination image enhancement model provided in the present application both sufficiently removes the noise and retains the important texture details. Fig. 2A likewise shows that the proposed model (Ours) performs well in terms of color fidelity, noise suppression and exposure.
Example three:
The third embodiment of the present invention is implemented based on the low-illumination image enhancement model described in the first embodiment. Fig. 3 shows the implementation flow of the low-illumination image enhancement method provided by the third embodiment of the present invention; for convenience of description, only the parts related to this embodiment are shown, detailed as follows:
in step S301, the initialization module performs initialization decomposition on the input image to obtain an initialization illumination layer and an initialization reflection layer corresponding to the input image.
In an embodiment of the present invention, the input image is a low-illumination image to be enhanced. Considering that variable initialization plays an important role in iterative optimization algorithms (e.g., ADMM), the input image could be initially decomposed using the common all-zero or random initialization. However, for the initialized decomposition to point the subsequent optimization in the right direction, the initialization illumination layer and the initialization reflection layer should contain the important information of the input image. Assuming the reflection layer is an RGB three-channel image and the three channels share the same illumination layer, the per-pixel maximum over the three channels of the input image can be directly assigned to the illumination layer as the initialization illumination layer. According to Retinex theory, the color of an object is determined by the reflectivity of its surface and the illumination intensity falling on that surface, giving the following imaging expression:
I=R·L (1)
where I denotes the input image, R denotes the reflection layer of the input image, and L denotes the illumination layer of the input image; therefore, the initialization reflection layer can be obtained from the above Retinex relation, i.e., R = I / L element-wise.
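Before the learned initialization module is introduced below, the naive decomposition just described (channel-maximum illumination plus element-wise division) can be sketched in a few lines of NumPy; the function name and the small ε stabilizer are illustrative additions:

```python
import numpy as np

def naive_init_decompose(img, eps=1e-4):
    """Naive initialized decomposition following I = R * L.
    img: H x W x 3 RGB array with values in [0, 1]."""
    L0 = img.max(axis=2)              # shared single-channel illumination layer
    R0 = img / (L0[..., None] + eps)  # element-wise division; eps avoids /0
    return R0, L0
```

Recomposing R0 with the (stabilized) illumination layer recovers the input exactly, which is the fidelity property the initialization is meant to preserve.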
However, the above-mentioned initialization decomposition method enlarges the differences among the three channel values in the reflection layer and thereby destroys the statistical properties of R, G and B. Preferably, therefore, the initialization module implements the initialized decomposition of the illumination layer and the reflection layer with a learned network, so that the initialization illumination layer and the initialization reflection layer contain more information while the statistical properties of R, G and B are preserved. Further preferably, the network is a fully convolutional neural network with 4 convolution layers, so as to improve the operating efficiency of the low-illumination image enhancement model.
Specifically, the input of the initialization module is the low-illumination image and the outputs are the initialization illumination layer and initialization reflection layer corresponding to the input image, abstractly represented as:

(R_0, L_0) = F_init(I; θ_D)    (2)

where R_0 denotes the initialization reflection layer, L_0 denotes the initialization illumination layer, F_init denotes the initialization module, I denotes the input image, and θ_D denotes the parameters of the initialization module.
For the initialized decomposition to preserve the effective information of the input image, the loss function adopted when training the initialization module preferably includes a fidelity term and a prior term, so that the initialization illumination layer and initialization reflection layer after the initialized decomposition retain the effective information of the input image. The fidelity term measures how close the initialization image, formed from the initialization illumination layer and initialization reflection layer of a training sample, is to the training sample itself, ensuring that the initialized decomposition satisfies Retinex theory; the prior term measures how close the initialization illumination layer of a training sample is to the per-pixel maximum over the R, G, B channels of the training sample, so that the initialization illumination layer learns richer structural information. A training sample is an input image used during low-illumination image enhancement model training.
Preferably, the fidelity term in the loss function adopted when training the initialization module is computed with the L1 norm and the prior term with the L2 norm, so as to guarantee the training effect of the initialization module and make the illumination layer and reflection layer after initialized decomposition retain as much effective information of the input image as possible. The loss function adopted when training the initialization module is:

L_init = ||R_0·L_0 − I||_1 + μ·||L_0 − max_{c∈{R,G,B}} I_c||_2²    (3)

where L_init denotes the loss of the initialization module, I denotes the training sample (the input image during model training), R_0 denotes the initialization reflection layer of the training sample, L_0 denotes the initialization illumination layer of the training sample, μ is a constant, and I_c denotes the red, green or blue channel of the training sample.
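Equation (3) translates directly into code. In this sketch the value of μ and the mean reduction of the norms are illustrative choices, not values taken from the patent:

```python
import numpy as np

def init_loss(I, R0, L0, mu=0.1):
    """Initialization loss of equation (3): L1 fidelity between the
    re-composed image R0*L0 and I, plus an L2 prior pulling L0 toward
    the per-pixel maximum over the R, G, B channels."""
    fidelity = np.abs(R0 * L0[..., None] - I).mean()
    prior = ((L0 - I.max(axis=2)) ** 2).mean()
    return fidelity + mu * prior
```

A decomposition that reproduces the input and whose illumination layer equals the channel maximum drives the loss to zero.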
In step S302, the optimization module performs a plurality of rounds of alternating iterative optimization on the initialization illumination layer and the initialization reflection layer using an unfolding algorithm to obtain an optimized illumination layer and an optimized reflection layer.
In the embodiment of the present invention, according to Retinex theory, the decomposition problem of imaging expression (1) is ill-posed, so prior constraints on R and L need to be established when solving it, expressed as:

min_{R,L} ||I − R·L||² + Φ(R) + Ψ(L)    (4)
where Φ (R) and Ψ (L) represent a priori assumptions for the reflective layer R and the illumination layer L, respectively. To solve the problem, two new variables are introduced, namely a first intermediate variable P and a second intermediate variable Q, and equation (4) can then be rewritten as:
min_{P,Q,R,L} ||I − P·Q||² + λ(||P − R||² + ||Q − L||²) + Φ(R) + Ψ(L)    (5)
so far, formula (3) can be reasonably divided into fidelity terms and prior terms for independent solution, and the following expression is obtained:
P_k = argmin_P ||I − P·Q_{k−1}||² + λ||P − R_{k−1}||²    (6)

Q_k = argmin_Q ||I − P_k·Q||² + λ||Q − L_{k−1}||²    (7)

R_k = argmin_R λ||R − P_k||² + Φ(R)    (8)

L_k = argmin_L λ||L − Q_k||² + Ψ(L)    (9)
where P_k and Q_k denote the solutions of the fidelity terms at the k-th iteration, and R_k and L_k denote the solutions of the prior terms at the k-th iteration. When k = 1, the iteration starts from the initialization reflection layer and the initialization illumination layer, i.e., R_0 and L_0.
The initialization reflection layer tends to be contaminated by noise, which severely affects the presentation of important details, while the initialization illumination layer not only fully retains the structural information but also, to a great extent, keeps redundant texture details. The purpose of the optimization module in this embodiment is therefore to repair the initialization reflection layer so that noise is removed while effective details are fully retained; a satisfactory illumination layer should be structurally complete and smooth in texture detail. To achieve this, the fidelity terms and prior terms in equations (6)-(9) are alternately and iteratively optimized with an unfolding algorithm built on a deep neural network framework. Based on a joint consideration of time and performance, the number of alternating iterative optimizations performed by the optimization module is preferably 3; in practice the number is determined according to the actual experimental effect.
Preferably, the optimization module comprises a variable calculation sub-network, a reflection layer repair network and an illumination layer repair network. During the current alternating iterative optimization, the variable calculation sub-network calculates the first intermediate variable and the second intermediate variable of the current iterative optimization; the reflection layer repair network obtains the optimized reflection layer of the current iterative optimization based on the first and second intermediate variables; and the illumination layer repair network obtains the optimized illumination layer of the current iterative optimization based on the second intermediate variable. This replaces the hand-crafted prior solutions of traditional Retinex optimization algorithms, so that the optimization module learns more robust prior information from the data. The reflection layer repair network and the illumination layer repair network are built on neural networks. Preferably, the variable calculation sub-network calculates the first and second intermediate variables of the current iterative optimization using the least-squares method.
In a specific implementation, at the k-th iterative optimization, equation (6) is first solved using Q_{k−1} and R_{k−1} obtained in the (k−1)-th iterative optimization to obtain the first intermediate variable P_k of the current iteration; then, fixing the first intermediate variable P_k of the current iteration and the optimized illumination layer L_{k−1} of the previous iteration, equation (7) is solved to obtain the second intermediate variable Q_k of the current iteration. Equations (6)-(7) are classical least-squares problems, so the following closed-form solutions can be obtained by derivation:
P_k = (I·Q_{k−1} + λ·R_{k−1}) / (Q_{k−1}·Q_{k−1} + λ)

Q_k = (I·P_k + λ·L_{k−1}) / (P_k·P_k + λ)

where k ≥ 1 is a positive integer, λ is a constant, and the multiplications and divisions are element-wise.
After P_k and Q_k are obtained, the prior terms R_k and L_k are learned.
Experiments show that the noise on the reflection layer is highly correlated with the brightness distribution on the illumination layer: the higher the brightness, the less the noise, and vice versa. Preferably, the reflection layer repair network is configured to perform convolution operations on the computed first intermediate variable and second intermediate variable to obtain a first intermediate feature map and a second intermediate feature map, fuse the two feature maps into a fused feature map, and then obtain the optimized reflection layer of the current iteration based on the noise distribution of the fused feature map together with the first intermediate feature map. In this way the reflection layer is repaired using information from the illumination layer, denoising of the reflection layer is achieved, and the learning effect of the optimized reflection layer is improved.
Further preferably, the reflection layer repair network is configured to perform convolution operations on the first intermediate variable and the second intermediate variable of the current iterative optimization to obtain a first intermediate feature map and a second intermediate feature map, concatenate the first intermediate feature map and the second intermediate feature map into a spliced feature map, perform channel attention calculation on the spliced feature map using a channel attention mechanism to obtain a re-weighted feature map, obtain the noise distribution of the re-weighted feature map, and obtain the optimized reflection layer of the current iterative optimization based on the noise distribution and the first intermediate feature map.
The optimized reflection layer R_k at the k-th alternating iterative optimization in equation (8) is abstractly represented as:

R_k = F_R(P_k, Q_k; θ_R)

where F_R denotes the reflection layer repair network, θ_R denotes its parameters, P_k denotes the first intermediate variable of the k-th alternating iterative optimization, and Q_k denotes the second intermediate variable of the k-th alternating iterative optimization.
The optimized illumination layer L_k at the k-th alternating iterative optimization in equation (9) is abstractly represented as:

L_k = F_L(Q_k; θ_L)

where F_L denotes the illumination layer repair network and θ_L denotes its parameters.
In step S303, the illumination adjustment module performs illumination adjustment on the optimized illumination layer to obtain a target illumination layer.
In the embodiments of the present invention, in Retinex theory the reflection layer is an inherent property of an object and does not change with the illumination condition. On this basis, low-light imaging is caused by the low intensity of the illumination layer; therefore, after the optimized illumination layer and the optimized reflection layer are obtained, the optimized reflection layer can be kept fixed while only the optimized illumination layer is adjusted.
The illumination adjustment module could adjust the illumination of the illumination layer with the gamma correction technique, which achieves different brightness enhancement effects by tuning its parameter, but a suitable adjustment scale is difficult to determine in this way. Preferably, the illumination adjustment module is configured to perform illumination adjustment on the optimized illumination layer according to a preset adjustment scale factor specified by the user, so that the target illumination layer is generated according to a user-specified brightness adjustment scale. In a specific implementation, the inputs of the illumination adjustment module are the optimized illumination layer and the user-specified adjustment scale factor, and the output is the bright illumination layer at the target adjustment scale, i.e., the target illumination layer.
Preferably, the illumination adjustment module includes an adjustment factor expansion submodule, a splicing submodule and a brightness adjustment network connected in sequence. The adjustment factor expansion submodule expands the preset adjustment scale factor into a matrix of the same size as the optimized illumination layer; the splicing submodule splices the matrix with the optimized illumination layer to obtain a splicing result; and the brightness adjustment network adjusts the brightness of the optimized illumination layer based on the splicing result to obtain the target illumination layer, thereby adjusting the illumination intensity. In a specific implementation, the adjustment scale factor is expanded into a matrix of the same size as the illumination layer and then spliced with the optimized illumination layer as the input of the brightness adjustment network, which can be expressed as:
Figure BDA0003504364810000201
where ω denotes the scale factor of the adjustment, θARepresenting a brightness adjustment network
Figure BDA0003504364810000202
L denotes the optimized illumination layer.
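A minimal sketch of the expand-and-splice step described above, assuming the illumination layer is a single-channel H x W array (the function name and shapes are illustrative, not from the patent):

```python
import numpy as np

def expand_and_splice(illumination, omega):
    """Expand a scalar adjustment factor into a matrix of the same size
    as the H x W illumination layer (expansion submodule), then splice
    the two along a channel axis (splicing submodule) as input to the
    brightness adjustment network."""
    omega_map = np.full_like(illumination, omega)       # same size as L
    return np.stack([illumination, omega_map], axis=0)  # 2-channel input

L = np.random.rand(4, 4).astype(np.float32)
spliced = expand_and_splice(L, omega=2.0)  # shape (2, 4, 4)
```

The spliced tensor would then be fed to the brightness adjustment network; the network itself is omitted here.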
Preferably, the brightness adjustment network has the same network structure as the initialization module, that is, the brightness adjustment network may be a fully convolutional neural network comprising 4 convolutional layers, and the convolution kernels of the convolutional layers of the brightness adjustment network are larger than those of the initialization module, so as to maintain the uniformity and constrain the smoothness of the illumination layer.
In order to train the brightness adjustment network better, considering that the high-brightness illumination layer output by the network should be structurally consistent with the input low-brightness illumination layer, the loss function used in training the brightness adjustment network preferably includes a gradient-level fidelity term to ensure the training effect. The gradient-level fidelity term measures the horizontal or vertical gradient distance between the optimized illumination layer and the target illumination layer of the training sample.
In order to reconstruct an image of normal illumination based on the target illumination layer, the loss function used in training the brightness adjustment network preferably also includes a color-level fidelity term to further improve the training effect. The color-level fidelity term measures the reconstruction loss between the target illumination image of the training sample and the reference image, so that the reconstructed image approaches the normal-illumination image, i.e., the reference image.
In order to make the reconstructed image consistent with the reference image in structure, brightness and contrast, the loss function used in training the brightness adjustment network preferably further includes a structure-level fidelity term. The structure-level fidelity term measures the distance between the target illumination image of the training sample and the reference image, so that the reconstructed image is consistent with the reference image in structure, brightness and contrast.
Preferably, the loss function used in training the illumination adjustment module includes all three of the gradient-level, color-level and structure-level fidelity terms, so as to further improve the training effect of the brightness adjustment network through these three constraints.
Further preferably, the gradient-level fidelity term is calculated with the L1 norm, the color-level fidelity term with the L2 norm, and the structure-level fidelity term with the SSIM (structural similarity) loss, so that the loss function used in training the brightness adjustment network is expressed as:

$$L_{\mathrm{adjust}} = \left\|\nabla L - \nabla \hat{L}\right\|_1 + \left\|I_{\mathrm{ref}} - R \circ \hat{L}\right\|_2^2 + \mathrm{SSIM}\left(R \circ \hat{L},\, I_{\mathrm{ref}}\right)$$

where $L_{\mathrm{adjust}}$ denotes the loss of the brightness adjustment network, $\nabla L$ denotes the horizontal or vertical gradient of the optimized illumination layer of the training sample, $\nabla \hat{L}$ denotes the horizontal or vertical gradient of the target illumination layer of the training sample, $I_{\mathrm{ref}}$ denotes the reference image, $R$ denotes the optimized reflection layer of the training sample, $\hat{L}$ denotes the target illumination layer of the training sample, $\circ$ denotes element-wise multiplication, and $\mathrm{SSIM}$ denotes the image quality loss function.
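A hedged sketch of how the three fidelity terms might be computed, assuming single-channel arrays; the SSIM here is a simplified global-statistics approximation rather than the windowed SSIM a real implementation would use, and all names are illustrative:

```python
import numpy as np

def adjust_loss(L, L_hat, R, I_ref, c1=0.01 ** 2, c2=0.03 ** 2):
    """Sum of gradient-level (L1), color-level (squared L2) and
    structure-level (1 - global SSIM) fidelity terms, as a sketch."""
    # Gradient-level fidelity: L1 distance of horizontal/vertical gradients.
    def grads(x):
        return np.diff(x, axis=0), np.diff(x, axis=1)
    gx, gy = grads(L)
    hx, hy = grads(L_hat)
    gradient_term = np.abs(gx - hx).sum() + np.abs(gy - hy).sum()

    # Color-level fidelity: squared L2 reconstruction error against I_ref.
    recon = R * L_hat
    color_term = ((I_ref - recon) ** 2).sum()

    # Structure-level fidelity: 1 - SSIM, computed from global statistics
    # (simplification: real SSIM averages over local windows).
    mu_x, mu_y = recon.mean(), I_ref.mean()
    var_x, var_y = recon.var(), I_ref.var()
    cov = ((recon - mu_x) * (I_ref - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    structure_term = 1.0 - ssim

    return gradient_term + color_term + structure_term

rng = np.random.default_rng(0)
L = rng.random((8, 8))
R = rng.random((8, 8))
loss_perfect = adjust_loss(L, L, R, R * L)  # near 0 when prediction is exact
```

When the predicted illumination layer equals the target and the reconstruction matches the reference, all three terms vanish, which is the behavior the loss is designed to reward.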
In step S304, an image reconstruction module reconstructs an image according to the target illumination layer and the optimized reflection layer to obtain a target illumination image.
In the embodiment of the invention, the image reconstruction module multiplies the target illumination layer by the optimized reflection layer element-wise to reconstruct the image and obtain the target illumination image.
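The reconstruction step is an element-wise product, per the Retinex model I = R ∘ L; a minimal sketch with illustrative data:

```python
import numpy as np

# Target illumination layer (here a uniform 0.8 for illustration) and
# an optimized reflection layer; their element-wise product is the
# reconstructed target illumination image.
target_illumination = np.full((2, 3), 0.8)
optimized_reflection = np.array([[0.2, 0.4, 0.6],
                                 [0.1, 0.5, 0.9]])
target_image = target_illumination * optimized_reflection
```

Because the reflection layer is held fixed, only the illumination layer controls the overall brightness of the reconstructed image.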
In the embodiments of the invention, an input image is initialized and decomposed to obtain its initialization illumination layer and initialization reflection layer; the two layers are optimized through several alternate iterations of an unfolding algorithm to obtain the optimized illumination layer and optimized reflection layer; the optimized illumination layer is illumination-adjusted to obtain the target illumination layer; and the image is reconstructed from the target illumination layer and the optimized reflection layer to obtain the target illumination image. This ensures the flexibility and interpretability of the low-illumination image enhancement model, improves its robustness, and enables the model to retain detail information while suppressing noise.
Example four:
Fig. 4 shows the structure of an electronic device according to the fourth embodiment of the present invention; for convenience of description, only the parts related to this embodiment are shown.
The electronic device 4 of an embodiment of the invention comprises a processor 40, a memory 41 and a computer program 42 stored in the memory 41 and executable on the processor 40. The processor 40, when executing the computer program 42, implements the steps in the above-described method embodiments, for example, the steps S301 to S304 shown in fig. 3. Alternatively, the processor 40, when executing the computer program 42, implements the functions of the modules in the low-light image enhancement model embodiment described above, e.g., the functions of the modules 11 to 14 shown in fig. 1A.
Example five:
in an embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps in the above-described method embodiment, for example, steps S301 to S304 shown in fig. 3. Alternatively, the computer program, when executed by a processor, implements the functionality of the modules in the above-described low-light image enhancement model embodiment, e.g., the functionality of modules 11 to 14 shown in fig. 1A.
The computer readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, a recording medium, such as a ROM/RAM, a magnetic disk, an optical disk, a flash memory, or the like.
The above description is intended to be illustrative of the preferred embodiment of the present invention and should not be taken as limiting the invention, but rather, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

Claims (10)

1. A low-light image enhancement model, characterized in that the low-light image enhancement model comprises an initialization module, an optimization module, an illumination adjustment module and an image reconstruction module connected in sequence, wherein:
the initialization module is configured to initialize and decompose an input image to obtain an initialization illumination layer and an initialization reflection layer corresponding to the input image;
the optimization module is configured to perform several alternate iterative optimizations on the initialization illumination layer and the initialization reflection layer by using an unfolding algorithm to obtain an optimized illumination layer and an optimized reflection layer;
the illumination adjustment module is configured to perform illumination adjustment on the optimized illumination layer to obtain a target illumination layer; and
the image reconstruction module is configured to perform image reconstruction according to the target illumination layer and the optimized reflection layer to obtain a target illumination image.

2. The low-light image enhancement model according to claim 1, wherein the initialization module is a fully connected neural network, and the fully connected neural network is a fully convolutional neural network comprising 4 convolutional layers.

3. The low-light image enhancement model according to claim 1, wherein the loss function used in training the initialization module comprises a fidelity term and a prior term, wherein the fidelity term measures the closeness between the training sample and the initialization image composed of the initialization illumination layer and the initialization reflection layer of the training sample, and the prior term measures the closeness between the initialization illumination layer of the training sample and the maximum over the R, G and B channels of the training sample;
the loss function used in training the initialization module is:

$$L_{\mathrm{init}} = \left\|I - R_0 \circ L_0\right\|_2^2 + \mu \left\|L_0 - \max_{c \in \{R,G,B\}} I_c\right\|_2^2$$

wherein $L_{\mathrm{init}}$ denotes the loss of the initialization module, $I$ denotes the training sample, $R_0$ denotes the initialization reflection layer of the training sample, $L_0$ denotes the initialization illumination layer of the training sample, $\circ$ denotes element-wise multiplication, $\mu$ is a constant, and R, G and B denote the red, green and blue channels, respectively.

4. The low-light image enhancement model according to claim 1, wherein the optimization module comprises a variable calculation sub-network, a reflection layer repair network and an illumination layer repair network; when performing the current alternate iterative optimization, the variable calculation sub-network is configured to calculate the first intermediate variable and the second intermediate variable of the current iteration, the reflection layer repair network is configured to obtain the optimized reflection layer of the current iteration based on the first and second intermediate variables of the current iteration, and the illumination layer repair network is configured to obtain the optimized illumination layer of the current iteration based on the second intermediate variable of the current iteration.

5. The low-light image enhancement model according to claim 4, wherein the variable calculation sub-network is configured to calculate the first and second intermediate variables of the current iteration by the least squares method;
the reflection layer repair network is configured to perform convolution operations on the first and second intermediate variables of the current iteration to obtain a first intermediate feature map and a second intermediate feature map, concatenate the first and second intermediate feature maps to obtain a spliced feature map, perform channel attention calculation on the spliced feature map with a channel self-attention mechanism to obtain a re-weighted feature map, obtain the noise distribution of the re-weighted feature map, and obtain the optimized reflection layer of the current iteration based on the noise distribution and the first intermediate feature map.

6. The low-light image enhancement model according to claim 1, wherein the illumination adjustment module comprises an adjustment factor expansion sub-module, a splicing sub-module and a brightness adjustment network connected in sequence, wherein:
the adjustment factor expansion sub-module is configured to expand a preset adjustment scale factor into a matrix of the same size as the optimized illumination layer;
the splicing sub-module is configured to splice the matrix with the optimized illumination layer to obtain a splicing result; and
the brightness adjustment network is configured to adjust the brightness of the optimized illumination layer based on the splicing result to obtain the target illumination layer, wherein the brightness adjustment network has the same network structure as the initialization module, and the convolution kernels of the convolutional layers of the brightness adjustment network are larger than those of the initialization module.

7. The low-light image enhancement model according to claim 6, wherein the loss function used in training the brightness adjustment network comprises one or more of a gradient-level fidelity term, a color-level fidelity term and a structure-level fidelity term, wherein the gradient-level fidelity term measures the horizontal or vertical gradient distance between the optimized illumination layer and the target illumination layer of the training sample, the color-level fidelity term measures the reconstruction loss between the target illumination image of the training sample and the reference image, and the structure-level fidelity term measures the distance between the target illumination image of the training sample and the reference image;
the loss function used in training the brightness adjustment network is:

$$L_{\mathrm{adjust}} = \left\|\nabla L - \nabla \hat{L}\right\|_1 + \left\|I_{\mathrm{ref}} - R \circ \hat{L}\right\|_2^2 + \mathrm{SSIM}\left(R \circ \hat{L},\, I_{\mathrm{ref}}\right)$$

wherein $L_{\mathrm{adjust}}$ denotes the loss of the brightness adjustment network, $\nabla L$ denotes the horizontal or vertical gradient of the optimized illumination layer of the training sample, $\nabla \hat{L}$ denotes the horizontal or vertical gradient of the target illumination layer of the training sample, $I_{\mathrm{ref}}$ denotes the reference image, $R$ denotes the optimized reflection layer of the training sample, $\hat{L}$ denotes the target illumination layer of the training sample, and $\mathrm{SSIM}$ denotes the image quality loss function.

8. A low-light image enhancement method based on the low-light image enhancement model according to any one of claims 1-7, characterized in that the method comprises the following steps:
initializing and decomposing an input image by the initialization module to obtain the initialization illumination layer and the initialization reflection layer corresponding to the input image;
performing, by the optimization module, several alternate iterative optimizations on the initialization illumination layer and the initialization reflection layer using the unfolding algorithm to obtain the optimized illumination layer and the optimized reflection layer;
performing illumination adjustment on the optimized illumination layer by the illumination adjustment module to obtain the target illumination layer; and
performing image reconstruction, by the image reconstruction module, according to the target illumination layer and the optimized reflection layer to obtain the target illumination image.

9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the functions of the low-light image enhancement model according to any one of claims 1 to 7.

10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the functions of the low-light image enhancement model according to any one of claims 1 to 7.
CN202210135560.2A 2022-02-14 2022-02-14 Low-light image enhancement models, methods, electronic devices, and storage media Active CN114638749B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210135560.2A CN114638749B (en) 2022-02-14 2022-02-14 Low-light image enhancement models, methods, electronic devices, and storage media


Publications (2)

Publication Number Publication Date
CN114638749A true CN114638749A (en) 2022-06-17
CN114638749B CN114638749B (en) 2025-12-12

Family

ID=81945688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210135560.2A Active CN114638749B (en) 2022-02-14 2022-02-14 Low-light image enhancement models, methods, electronic devices, and storage media

Country Status (1)

Country Link
CN (1) CN114638749B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570381A (en) * 2019-09-17 2019-12-13 合肥工业大学 A Semi-decoupled Image Decomposition Dark Light Image Enhancement Method Based on Gaussian Total Variation
CN111968044A (en) * 2020-07-16 2020-11-20 中国科学院沈阳自动化研究所 Low-illumination image enhancement method based on Retinex and deep learning
CN112381897A (en) * 2020-11-16 2021-02-19 西安电子科技大学 Low-illumination image enhancement method based on self-coding network structure
CN112465726A (en) * 2020-12-07 2021-03-09 北京邮电大学 Low-illumination adjustable brightness enhancement method based on reference brightness index guidance
KR102339584B1 (en) * 2020-11-10 2021-12-16 숭실대학교산학협력단 Method for restoring low light image and computing device for executing the method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHENG CHUANJUN et al.: "Adaptive Unfolding Total Variation Network for Low-Light Image Enhancement", 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 17 October 2021 (2021-10-17), page 2 *
LIU Ying, LIU Jialin, LIU Weihua, CHEN Huanping: "Retinex forensic image enhancement based on weighted guided filtering", Journal of Xi'an University of Posts and Telecommunications, no. 05, 10 September 2018 (2018-09-10) *
LI Miao, ZHOU Dongming, LIU Yanyu, XIE Shidong, WANG Changcheng, WEI Yixue: "Low-illumination image enhancement combining deep residual neural networks with Retinex theory", Journal of Yunnan University (Natural Sciences Edition), 2 August 2021 (2021-08-02) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115358943A (en) * 2022-08-10 2022-11-18 中国科学院深圳先进技术研究院 Low-light image enhancement method, system, terminal and storage medium
CN115358943B (en) * 2022-08-10 2025-12-19 中国科学院深圳先进技术研究院 Low-light image enhancement method, system, terminal and storage medium
CN115861121A (en) * 2022-12-20 2023-03-28 哲库科技(上海)有限公司 Model training method, image processing method, device, electronic device and medium
CN116310276A (en) * 2023-05-24 2023-06-23 泉州装备制造研究所 Target detection method, device, electronic equipment and storage medium
CN116310276B (en) * 2023-05-24 2023-08-08 泉州装备制造研究所 Target detection method, target detection device, electronic equipment and storage medium
CN116797490A (en) * 2023-07-12 2023-09-22 青岛理工大学 A lightweight turbid water image enhancement method
CN116797490B (en) * 2023-07-12 2024-02-09 青岛理工大学 A lightweight turbid water image enhancement method
CN117058031A (en) * 2023-08-15 2023-11-14 中国科学院长春光学精密机械与物理研究所 Robust structure and texture low-light image enhancement algorithm
CN118644414A (en) * 2024-08-15 2024-09-13 山东舜凯电气设备有限公司 A method for processing internal defects of high-voltage distribution cabinet images
CN118644414B (en) * 2024-08-15 2024-11-08 山东舜凯电气设备有限公司 Method for processing internal defect image of high-voltage power distribution cabinet
CN119295359A (en) * 2024-10-29 2025-01-10 平安科技(深圳)有限公司 Image enhancement method, image enhancement device, electronic device, and storage medium

Also Published As

Publication number Publication date
CN114638749B (en) 2025-12-12

Similar Documents

Publication Publication Date Title
CN114638749A (en) Low-illumination image enhancement model, method, electronic device and storage medium
US11055828B2 (en) Video inpainting with deep internal learning
Li et al. Underwater scene prior inspired deep underwater image and video enhancement
Zhang et al. Image restoration: From sparse and low-rank priors to deep priors [lecture notes]
CN114419392B (en) Hyperspectral snapshot image restoration method, device, equipment and medium
CN105631807B (en) The single-frame image super-resolution reconstruction method chosen based on sparse domain
Qin et al. Unsupervised image stitching based on Generative Adversarial Networks and feature frequency awareness algorithm
CN110276726A (en) An image deblurring method guided by multi-channel network prior information
Cheng et al. Optimizing image compression via joint learning with denoising
JP2023035928A (en) Neural network training based on consistency loss
Conde et al. Raw image reconstruction from RGB on smartphones. NTIRE 2025 challenge report
Mai et al. Deep unrolled low-rank tensor completion for high dynamic range imaging
Ren et al. Enhanced latent space blind model for real image denoising via alternative optimization
CN118823558A (en) A 3D point cloud quality prediction method based on graph convolutional neural network
CN119006326A (en) Image rain removing method and system based on improved diffusion model
Li et al. Interpretable unsupervised joint denoising and enhancement for real-world low-light scenarios
CN116823973B (en) Black-white video coloring method, black-white video coloring device and computer readable medium
CN113160081A (en) Depth face image restoration method based on perception deblurring
CN113706400A (en) Image correction method, image correction device, microscope image correction method, and electronic apparatus
CN110443754B (en) Method for improving resolution of digital image
Liu et al. Learning to generate realistic images for bit-depth enhancement via camera imaging processing
CN116029916A (en) Low-illumination image enhancement method based on dual-branch network combined with dense wavelet
CN119006339B (en) A Physical Prior-Based Image Dehazing Method
Wang et al. Dual degradation-inspired deep unfolding network for low-light image enhancement
Wang et al. Channel self-attention based low-light image enhancement network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant