A Low-Illumination Enhancement Method Based on Structural Layer and Detail Layer
Figure 1. Framework of the proposed method.
Figure 2. Brightness enhancement network and brightness adjustment network.
Figure 3. Structural layer images and detail layer images.
Figure 4. Comparison of low-illumination image enhancement effects of different algorithms. (a) Original images, (b) RetinexNet results, (c) URetinex-Net results, (d) LIME results, (e) Zero-DCE++ results, (f) KinD++ results, and (g) our results.
Figure 5. Comparison of low-illumination image enhancement effects of different algorithms. (a) Original images, (b) KinD results, (c) SCI results, (d) RUAS results, and (e) our results.
Figure 6. Comparison of locally enlarged details of low-illumination images using different algorithms. (a) Original images, (b) RetinexNet results, (c) URetinex-Net results, (d) LIME results, (e) Zero-DCE++ results, (f) KinD++ results, and (g) our results.
Figure 7. Comparison of locally enlarged details of low-illumination images using different algorithms. (a) Original images, (b) KinD results, (c) SCI results, (d) RUAS results, and (e) our results.
Figure 8. Ablation experiment for the loss function variables.
Figure 9. Ablation experiment on the enhancement module.
Abstract
1. Introduction
- (1)
- The proposed SRetinex-Net model is divided into two parts: a decomposition module and an enhancement module. The decomposition module adopts the SU-Net structure, which decomposes the input image into a structural layer image and a detail layer image. The enhancement module adopts the SDE-Net structure, which is split into two branches: the SDE-S branch and the SDE-D branch. The SDE-S branch enhances the brightness of the structural layer, while the SDE-D branch enhances the textural detail of the detail layer.
- (2)
- The SU-Net structure is an unsupervised network that extracts and merges the structural features of input images through sampling layers and skip connections. A brightness calibration module was added to the SDE-S branch: after the Ehnet module enhances the brightness of the structural layer image, the Adnet module performs feature extraction and reconstruction on the enhanced image to adjust its brightness, making the result more balanced and accurate. The SDE-D branch performs denoising and detail-texture enhancement through a denoising module. This network structure greatly improves computational efficiency.
- (3)
- The total variation optimization model was extended into a mixed loss function: structure and texture components were added as variables to the original loss function. This separates edges and textures more cleanly, so the edges of the structural layer image stay sharp and the details of the detail layer image are richer.
- (4)
- Compared with previous methods, the structural layer image obtained by decomposition preserves structure more completely, and the detail layer image contains richer details. During enhancement, our method does not require a normal-light reference image: it adaptively adjusts image brightness to better match human visual perception. Extensive experimental comparisons demonstrate its superiority. Compared with the other methods, ours self-calibrates image brightness, enhances contrast, and improves image detail and visibility.
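Contribution (3) describes a mixed loss built on total variation, with structure and texture components added as variables. The sketch below is a minimal NumPy illustration of such a structure-detail decomposition loss; the weight values (`lam_s`, `lam_d`) and the specific texture penalty (an L1 term on the detail layer) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def tv_norm(x):
    # Anisotropic total variation: sum of absolute forward differences
    # along both image axes. Flat regions contribute zero; edges are penalized.
    dx = np.abs(np.diff(x, axis=1)).sum()
    dy = np.abs(np.diff(x, axis=0)).sum()
    return dx + dy

def decomposition_loss(image, structure, detail, lam_s=0.1, lam_d=0.01):
    # Reconstruction term: structure + detail should reproduce the input.
    recon = np.mean((structure + detail - image) ** 2)
    # Structure term: total variation keeps the structural layer piecewise smooth.
    l_s = tv_norm(structure)
    # Texture term (assumed L1): keeps the detail layer small so it carries
    # only fine texture rather than large-scale structure.
    l_d = np.mean(np.abs(detail))
    return recon + lam_s * l_s + lam_d * l_d
```

With this formulation, a perfectly flat structural layer that already reconstructs the input incurs zero loss, while pushing structure into the detail layer is penalized by the texture term.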
2. Methods
2.1. Framework of the Proposed Method
2.2. Structure of the Network
2.2.1. Decomposition Module
2.2.2. Enhancement Module
2.3. Loss Function
2.3.1. Fully Variational Loss Function
2.3.2. Unsupervised Loss Function
3. Experimental Results and Analysis
3.1. Subjective Evaluation
3.2. Objective Evaluation Indicators
3.3. Ablation Experiment
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Bhutto, J.A.; Tian, L.; Du, Q.; Sun, Z.; Yu, L.; Tahir, M.F. CT and MRI Medical Image Fusion Using Noise-Removal and Contrast Enhancement Scheme with Convolutional Neural Network. Entropy 2022, 24, 393.
- Land, E.H.; McCann, J.J. Lightness and retinex theory. J. Opt. Soc. Am. 1971, 61, 1–11.
- Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
- Rahman, Z.U.; Jobson, D.J.; Woodell, G.A. Multi-scale retinex for color image enhancement. In Proceedings of the 3rd IEEE International Conference on Image Processing, Lausanne, Switzerland, 19 September 1996; Volume 3, pp. 1003–1006.
- Park, S.; Yu, S.; Moon, B.; Ko, S.; Paik, J. Low-light image enhancement using variational optimization-based retinex model. IEEE Trans. Consum. Electron. 2017, 63, 178–184.
- Guo, X.; Yu, L.; Ling, H. LIME: Low-light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2016, 26, 982–993.
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model. IEEE Trans. Image Process. 2018, 27, 2828–2841.
- Ren, X.; Li, M.; Cheng, W.H.; Liu, J. Joint Enhancement and Denoising Method via Sequential Decomposition. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018.
- Liang, J.; Zhang, X. Retinex by Higher Order Total Variation L1 Decomposition. J. Math. Imaging Vis. 2015, 52, 345–355.
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement. Pattern Recognit. 2017, 61, 650–662.
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. arXiv 2018, arXiv:1808.04560.
- Lv, F.; Lu, F.; Wu, J.; Lim, C. MBLLEN: Low-Light Image/Video Enhancement Using CNNs. In Proceedings of the British Machine Vision Conference, Newcastle, UK, 3–6 September 2018.
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep Light Enhancement Without Paired Supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349.
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the Darkness: A Practical Low-light Image Enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019.
- Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward Fast, Flexible, and Robust Low-Light Image Enhancement. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5627–5636.
- Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation; Springer: Cham, Switzerland, 2015.
- Zhang, Q.; Shen, X.; Xu, L.; Jia, J. Rolling guidance filter. In Proceedings of the Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Springer International Publishing: Cham, Switzerland, 2014; pp. 815–830.
- Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
- Xu, L.; Yan, Q.; Xia, Y.; Jia, J. Structure extraction from texture via relative total variation. ACM Trans. Graph. 2012, 31, 1–10.
- Zhou, F.; Chen, Q.; Liu, B.; Qiu, G. Structure and Texture-Aware Image Decomposition via Training a Neural Network. IEEE Trans. Image Process. 2020, 29, 3458–3473.
- Yin, W.; Goldfarb, D.; Osher, S. A comparison of three total variation based texture extraction models. J. Vis. Commun. Image Represent. 2007, 18, 240–252.
- Aujol, J.F.; Gilboa, G.; Chan, T.; Osher, S. Structure-Texture Image Decomposition—Modeling, Algorithms, and Parameter Selection. Int. J. Comput. Vis. 2006, 67, 111–136.
- Chen, Q.; Liu, B.; Zhou, F. Anisotropy-based image smoothing via deep neural network training. Electron. Lett. 2019, 55, 1279–1281.
- Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. URetinex-Net: Retinex-based Deep Unfolding Network for Low-light Image Enhancement. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5891–5900.
- Li, C.; Guo, C.; Loy, C.C. Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238.
- Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond Brightening Low-light Images. Int. J. Comput. Vis. 2021, 129, 1013–1037.
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 586–595.
- Xu, X.; Wang, R.; Fu, C.W.; Jia, J. SNR-Aware Low-light Image Enhancement. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022.
- Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired Unrolling with Cooperative Prior Architecture Search for Low-light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; IEEE: Piscataway, NJ, USA, 2021.
| Comparison Algorithm | NIQE ↓ | SSIM ↑ | PSNR ↑ | LPIPS ↓ | BIQI ↑ | EMEE ↑ | SDME ↑ | BRISQUE ↓ | AME ↑ | Visibility ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| Retinex-Net | 7.1888 | 0.6449 | 13.7448 | 2.3146 | 0.4075 | 9.1803 | 89.1120 | 93.0386 | 78.9014 | 1.4980 |
| URetinex-Net | 4.7599 | 0.8238 | 21.3282 | 1.3234 | 0.2692 | 8.8664 | 72.2450 | 94.4427 | 43.9180 | 1.3153 |
| SIRE | 6.2109 | 0.4937 | 10.9447 | 1.8563 | 0.3428 | 8.4146 | 52.3258 | 93.3717 | 37.7913 | 1.5000 |
| LIME | 6.4282 | 0.7410 | 16.2744 | 2.0601 | 0.3436 | 7.9899 | 114.8789 | 94.8650 | 83.0246 | 1.3913 |
| Zero-DCE++ | 4.3693 | 0.5479 | 14.3098 | 1.8905 | 0.3604 | 7.8689 | 69.8208 | 94.3531 | 52.4144 | 1.4879 |
| KinD++ | 4.8106 | 0.7962 | 15.2666 | 1.4899 | 0.3652 | 8.5482 | 97.2805 | 93.3560 | 73.7956 | 1.4440 |
| SNR-Aware [30] | 5.7982 | 0.7834 | 17.3118 | 1.6384 | 0.3073 | 8.6534 | 68.6665 | 96.2248 | 58.0088 | 1.2607 |
| RUAS [31] | 6.2769 | 0.6075 | 12.9109 | 1.9274 | 0.2815 | 9.8992 | 65.1625 | 95.7833 | 50.9025 | 1.4222 |
| Ours | 4.3195 | 0.8321 | 21.4243 | 1.3882 | 0.4394 | 10.9775 | 110.7982 | 92.1687 | 83.1254 | 1.5169 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Ge, W.; Zhang, L.; Zhan, W.; Wang, J.; Zhu, D.; Hong, Y. A Low-Illumination Enhancement Method Based on Structural Layer and Detail Layer. Entropy 2023, 25, 1201. https://doi.org/10.3390/e25081201