Final Report
Low Light Image Enhancement via Illumination Map Estimation
A PROJECT REPORT
JUNE-2025
R.V.R & J.C. COLLEGE OF ENGINEERING (AUTONOMOUS)
NAAC A+ Grade, NBA Accredited (Approved by A.I.C.T.E)
(AFFILIATED TO ACHARYA NAGARJUNA UNIVERSITY)
Chandramoulipuram::Chowdavaram, GUNTUR
DEPARTMENT OF INFORMATION TECHNOLOGY
RVR & JC COLLEGE OF ENGINEERING
(AUTONOMOUS)
BONAFIDE CERTIFICATE
This is to certify that this project work titled “Low Light Image Enhancement
via Illumination Map Estimation” is the bonafide work of Sathi Sai Vinay
Kashyap Reddy (Y21IT108), Musuluri Sri Vardhan (Y21IT081), and Marakala
Manjunadh (Y21IT075), who have carried out the work under my supervision and
submitted it in partial fulfillment of the requirements for the award of the degree,
BACHELOR OF TECHNOLOGY, during the year 2024-2025.
Dr. A. Srikrishna
Prof. & HOD, Dept. of IT
ACKNOWLEDGEMENT
We would like to express our gratitude to the Management of R.V.R & J.C
College of Engineering for providing us with a pleasant environment and
excellent lab facility.
We express our sincere thanks to our Principal, Dr. Kolla Srinivas, for providing
support and a stimulating environment.
We would like to express our special thanks of gratitude to our guide, Smt.
K. Chandhana, Asst. Professor, who helped us carry out this project work
successfully.
ABSTRACT
LIST OF CONTENTS
ACKNOWLEDGEMENT
ABSTRACT
LIST OF TABLES
LIST OF FIGURES
CHAPTER 1 INTRODUCTION
1.1 Applications
1.3 Datasets
2.1.1 Introduction
2.1.2 Methodology
2.2.1 Introduction
2.2.2 Methodology
2.2.3 Results and Discussion
2.3 Summary
3.2 Methodology
3.4 Summary
LIST OF FIGURES
Fig 1.1 Sample datasets
Fig 2.1 Procedure for Histogram Equalization Multi-peak Technique
Fig 2.2 Results of the Multi-peak GHE Algorithm
Fig 2.4 Enhanced images of the BPDHE and MSR Algorithms
Fig 3.1 Results of comparison between original image and enhanced image
CHAPTER 1
INTRODUCTION
1.1 Applications
Medical Imaging:
In medical imaging, low light image enhancement helps improve the clarity and
detail of images captured in low-light environments, such as endoscopy,
microscopy, and diagnostic imaging.
Environmental Monitoring:
Enhanced visibility in low-light conditions is crucial for environmental monitoring
applications, such as wildlife observation, forestry, and conservation efforts,
allowing researchers to study nocturnal animals and monitor ecosystems during
nighttime.
Underwater Imaging:
Deep-sea environments often lack sufficient natural light, making visibility a
challenge. Image enhancement boosts clarity and color in underwater photography,
helping marine researchers explore aquatic life with greater precision.
1.2 Literature Survey
Guo et al. [3] introduced the LIME (Low-light Image Enhancement via
Illumination Map Estimation) framework, which estimates the illumination of each
pixel using the maximum among its R, G, and B channel values. This initial map is
then refined using a structure-aware smoothing technique to preserve edges and
suppress noise. The final enhanced image is obtained by dividing the input by the
refined illumination map, with optional gamma correction and denoising using
BM3D. Their method demonstrates high efficiency and superior visual quality on
low-light images compared to HE, AHE, and other Retinex-based techniques.
Fu et al. [4] proposed a Multi-Deviation Fusion (MF) approach that generates
several illumination maps and fuses them to enhance image quality. Although
effective, MF lacks structural awareness and may blur textured regions. Li et al. [5]
addressed this by segmenting images and performing adaptive denoising within
segments, yielding improved visual results but at a higher computational cost.
Dong et al. [6] observed that inverted low-light images resemble hazy images
and thus applied dehazing techniques to enhance brightness. However, the physical
model underlying this approach is less intuitive and may produce unrealistic results
in certain cases.
Jobson et al. [7], through Retinex theory, decomposed images into reflectance
and illumination components. Single-Scale and Multi-Scale Retinex (SSR, MSR)
methods achieved good enhancement but often resulted in unnatural appearances
due to over-enhancement. SRIE [8] improved upon these by jointly estimating
reflectance and illumination via weighted variational models.
Huang et al. [9] proposed a local descriptor-based enhancement using
extended LBP on range images. Although designed for 3D face recognition, their
approach highlights the significance of structural features in low-light contexts.
Lore et al. [10] pioneered a deep learning approach called LLNet, which trains
stacked sparse denoising autoencoders for enhancing dark images. While this
method is robust, it requires large training data and lacks interpretability.
Wei et al. [11] introduced Retinex-Net, a deep neural network that separates
illumination and reflectance in an end-to-end trainable architecture. Though it
produces high-quality results, it is resource-intensive and demands labeled image
pairs.
Zhang et al. [12] developed Zero-DCE, a zero-reference deep curve
estimation model that adjusts image contrast through learned mapping curves. It
enhances real-world low-light images effectively without ground truth, but may
overfit certain illumination distributions.
Zhang and Guo [13] also proposed EnlightenGAN, a generative adversarial
network trained to learn illumination-aware mappings. This method generalizes well
but suffers from instability in training and potential hallucinations in complex
scenes.
Chen et al. [14] proposed a deep image prior-based method that models the
structure of low-light images without external data. Despite its novelty, it is slow
and unsuitable for real-time applications.
Zhao et al. [15] focused on histogram specification and tone mapping,
combining classical techniques with adaptive weighting. While efficient, this
method offers limited performance in severely underexposed areas.
Lv et al. [16] developed MBLLEN, a multi-branch low-light enhancement
network, to extract illumination and color details in a parallel manner. Though
visually impressive, it adds significant training complexity.
Gu et al. [17] proposed a self-regularized attention mechanism for low-light
enhancement by introducing spatial attention into Retinex decomposition. Their
approach effectively suppresses artifacts and noise under non-uniform lighting but
requires careful tuning of attention parameters for stable results.
Ren et al. [18] introduced a hybrid method combining traditional Retinex
theory with lightweight CNN modules. Their goal was to maintain low
computational overhead while leveraging learned features. Although faster than
many deep models, the enhancement quality is often subpar in extremely dark
regions.
Li et al. [19] developed a frequency-aware Retinex model that enhances
details using multi-band decomposition. By adjusting brightness in both spatial and
frequency domains, their method achieves balanced enhancement, but is prone to
edge ringing in sharp transitions.
Jiang et al. [20] explored dual-branch networks for joint brightness and detail
enhancement. One branch restores global illumination while the other sharpens local
contrast. Their method improves clarity in low-light scenes but is limited by high
training complexity.
Wang et al. [21] proposed an iterative enhancement model where illumination
is refined progressively through learned feedback. While this method adapts well to
various illumination levels, it may over-amplify bright regions if not constrained
effectively.
Kim et al. [22] focused on integrating human visual perception models into
deep networks. Their perceptual-based enhancement achieves natural brightness and
tone, but lacks adaptability in synthetic low-light datasets due to domain shift.
In summary, recent trends in low-light image enhancement show a transition
from hand-crafted illumination models to data-driven learning frameworks. Despite
improvements in visual quality, many learning-based methods face challenges such
as overfitting, lack of interpretability, or high computational cost. This motivates the
continued interest in physically-grounded models like LIME.
1.3 Datasets
Fig 1.1 Sample datasets: (c) Flower, (d) Road, (e) Street, (f) Star, (g) Robot, (h) Cube
1.4 Drawbacks of Existing Methods
1. Over-Enhancement and Loss of Naturalness
Many Retinex-based and histogram equalization methods tend to over-enhance
the image, resulting in unnatural colors and exaggerated contrast.
2. Lack of Structural Preservation
Classical methods like Gamma Correction or basic Retinex approaches often fail
to preserve image edges and fine details, causing blurring or artifacting in structured
regions.
3. Amplification of Noise in Dark Regions
Enhancing brightness without denoising can lead to significant noise
amplification, especially in very low-light inputs (e.g., Retinex, LIME without post-
processing).
4. Color Distortion
Several techniques (especially histogram-based ones) alter the original color
distribution, leading to hue shifts or unnatural tones, particularly under non-uniform
illumination.
5. High Computational Complexity
Deep learning methods (e.g., Retinex-Net, EnlightenGAN, MBLLEN) require
large models, powerful hardware, and long training times, making them unsuitable
for real-time or embedded applications.
1.5 Objectives of the Present Study
1. To improve the illumination estimation process for low-light images using a
modified LIME framework.
2. To preserve structural details and edges through refined, structure-aware
smoothing techniques
3. To enhance visibility in dark regions while preventing overexposure in already
bright areas.
4. To reduce noise amplification by integrating optional post-processing like
denoising.
5. To achieve a balance between enhancement quality and computational efficiency
for practical use.
CHAPTER 2
EXISTING METHODS
2.1 Histogram Equalization Multi-peak Technique
2.1.1 Introduction
Local methods focus on modifying image contrast based on localized features like
edges, local means, and standard deviations. Techniques such as local histogram
equalization, histogram stretching, and nonlinear mappings—including square,
exponential, and logarithmic functions—are employed to improve local texture and
details. However, these methods sometimes introduce distortions by altering the
original order of gray levels, leading to potential loss of visual consistency.
The multi-peak approach maintains better control over gray-level relationships,
reducing the risk of distortions while enhancing textural details. This approach
demonstrates effective performance across various images, particularly those with
narrow intensity distributions, and offers clear improvements in image clarity and
detail visibility.
2.1.2 Methodology
The next step involves generating a histogram based on these combined values.
Since the raw histogram may contain noise, smoothing techniques are applied to
reduce fluctuations. The smoothed histogram is then analyzed to identify multiple
peaks, and the histogram is segmented accordingly. Each segment is equalized
independently, allowing adaptive enhancement across different intensity ranges.
This process results in improved contrast and better preservation of fine details, even
in images with complex or narrow intensity distributions.
2.1.2.1 GHE Multipeak Algorithm Procedure
Step 1: Calculate the edge values V (x, y).
Step 2: Get the values of u(x, y) and v(x, y) by normalizing I(x, y) and V(x, y).
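The smoothing, peak-finding, and per-segment equalization described above can be sketched in Python. This is a simplified illustration of the idea only: uniform smoothing and SciPy's generic peak finder stand in for the paper's exact steps, and the edge-value combination of Steps 1–2 is omitted.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import uniform_filter1d

def multipeak_equalize(img, smooth_win=7, min_distance=16):
    """Histogram equalization applied independently between histogram peaks.

    Sketch: smooth the histogram to reduce fluctuations, locate multiple
    peaks, split the gray range at the valleys between peaks, and equalize
    each segment within its own intensity range.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    hist_s = uniform_filter1d(hist, size=smooth_win)       # reduce fluctuations
    peaks, _ = find_peaks(hist_s, distance=min_distance)   # multiple peaks
    # Segment boundaries: minima between consecutive peaks, plus range ends.
    bounds = [0]
    for a, b in zip(peaks[:-1], peaks[1:]):
        bounds.append(a + int(np.argmin(hist_s[a:b + 1])))
    bounds.append(256)

    out = np.zeros_like(img)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img < hi)
        seg = hist[lo:hi]
        if seg.sum() == 0:
            continue
        cdf = np.cumsum(seg) / seg.sum()
        # Map each segment onto its own intensity range [lo, hi).
        lut = (lo + cdf * (hi - 1 - lo)).astype(np.uint8)
        out[mask] = lut[img[mask] - lo]
    return out
```

Because each segment is equalized only within its own range, the global ordering of gray levels across segments is preserved, which is the property that distinguishes this scheme from plain HE.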
Fig 2.1 Procedure for Histogram Equalization Multi-peak Technique
2.1.3 Results
One significant advantage of this approach lies in its ability to create more
uniform intensity distributions. Unlike traditional HE methods, which may map
consecutive gray levels to the same value, multi-peak GHE ensures that pixels with
the same intensity may have different values based on their local information. This
leads to a more uniform distribution of gray levels, preserving image details and
textures more effectively.
Furthermore, this algorithm offers fast processing times, making it suitable
for real-time applications. With a processing time of only 0.15 seconds for enhancing
a 512 × 512 gray level image, the algorithm proves to be efficient even on relatively
older hardware configurations.
In summary, the experimental results demonstrate the effectiveness and efficiency
of the proposed multi-peak GHE method in enhancing images with narrow intensity
distributions. By leveraging local information and ensuring a more uniform intensity
distribution, the approach produces superior image enhancements with clearer
textures and details, overcoming the limitations of traditional HE methods.
Fig. 2.2 (c) Multi-peak GHE; (d) Multi-peak GHE (α = 5, β = 0.01)
2.1.4 Discussion
2.2 Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images
2.2.1 Introduction
Image enhancement is a critical technique in the field of image processing,
aimed at improving the visual quality of images for better interpretation or further
analysis. Various enhancement methods have been proposed, including histogram
equalization, unsharp masking, and Retinex-based approaches. Among these,
Retinex theory has gained widespread adoption due to its ability to enhance image
details by separating reflectance from illumination. However, a common drawback
of many Retinex-based algorithms is their tendency to over-enhance details at the
cost of naturalness, particularly in images with non-uniform illumination. This often
results in visual artifacts, incorrect lighting perceptions, and a loss of the original
scene’s ambience.
The proposed method introduces three key innovations. First, it defines a new
objective metric, the Lightness-Order-Error (LOE), to quantitatively evaluate
naturalness preservation based on the consistency of lightness order before and after
enhancement. Second, the algorithm employs a novel “bright-pass filter” to
accurately decompose an image into illumination and reflectance components while
ensuring the reflectance remains within a physically meaningful range. Third, a
bi-logarithmic transformation is applied to the illumination component, balancing
the enhancement of details with the preservation of overall scene lighting.
2.2.2 Methodology
To objectively assess naturalness preservation, the Lightness-Order-Error (LOE)
metric is introduced, quantifying changes in lightness relationships between the
original and enhanced images.
Step 3: Synthesis of Reflectance and Mapped Illumination.
To enhance details while preserving naturalness, the mapped illumination
Lm(x, y) is combined with the reflectance R(x, y) to obtain the final enhanced
image S(x, y) = R(x, y) · Lm(x, y).
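In Retinex terms this synthesis step is a pixel-wise product of reflectance and mapped illumination; a minimal sketch (array names are illustrative):

```python
import numpy as np

def synthesize(R, Lm):
    """Final enhanced image from reflectance R and mapped illumination Lm
    (both H x W, values in [0, 1]): element-wise product, per Retinex."""
    return np.clip(R * Lm, 0.0, 1.0)
```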
2.2.3 Results
Objective evaluations using metrics such as Discrete Entropy, Visibility Level
Descriptor (VLD), and Lightness-Order-Error (LOE) support these findings. This
method consistently showed high entropy and visibility scores, indicating strong
detail enhancement. Its LOE scores were among the lowest, confirming superior
preservation of natural lightness ordering.
Visual comparisons also demonstrated improved visibility in shadowed
regions and enhanced contrast in darker areas, without introducing visual artifacts
or unnatural color shifts. Images maintained their original atmosphere and structural
integrity across diverse scenes.
Fig. 2.4 (c) Enhanced image of BPDHE; (d) Enhanced image of MSR
2.2.4 Discussion
This enhancement algorithm demonstrates a strong capability to balance detail
improvement and naturalness retention, particularly in challenging lighting
conditions. Its image decomposition using a bright-pass filter ensures physically
consistent reflectance values, avoiding over-enhancement that is common in other
methods. The bi-logarithmic transformation applied to the illumination component
enables effective dynamic range compression without compromising the spatial light
distribution.
In comparison with techniques like SSR and MSR, this method preserves the
scene’s ambience and avoids distortions of light direction. While methods like
BPDHE perform well in preserving brightness, they often fail to enhance low-light
regions adequately. GUM and NECI either over-enhance or underrepresent certain
features, while RACE can introduce color shifts that reduce visual realism.
The algorithm’s performance across various metrics (entropy, VLD, and LOE) shows
consistent alignment with both human visual preferences and statistical measures.
The LOE metric in particular highlights the algorithm’s effectiveness in maintaining
lightness relationships, which are essential for natural visual perception.
A noted limitation is its current focus on still images. Since illumination consistency
across frames is not addressed, applying this algorithm to video sequences may
introduce flickering. Future developments could aim to incorporate temporal
coherence for improved video performance.
In practical terms, the algorithm is well-suited for applications where both
visual clarity and naturalness are critical, such as digital photography, surveillance,
and remote sensing.
2.3 SUMMARY
Low light image enhancement via illumination map estimation offers several
advantages over alternative methods such as histogram equalization (HE), multi-
peak algorithms, and naturalness-preserved enhancement algorithms. Firstly,
illumination map estimation specifically addresses the challenges posed by low light
conditions, which often result in poor visibility and loss of detail. By accurately
estimating and enhancing the illumination map, this approach effectively improves
image quality and visibility in low light scenarios.
Unlike HE and multi-peak algorithms, which may lead to over-enhancement or
unnatural-looking results, illumination map estimation focuses on enhancing local
illumination while preserving naturalness. This targeted approach ensures that
enhancements are applied judiciously, resulting in visually pleasing images that
maintain the integrity of the scene.
Moreover, illumination map estimation offers greater flexibility and
adaptability compared to traditional enhancement methods. By dynamically
adjusting illumination levels based on local image characteristics, this approach can
effectively handle varying lighting conditions within the same scene. This ensures
consistent and optimal enhancement across different regions of the image, regardless
of lighting variations.
Additionally, low light image enhancement via illumination map estimation
aligns well with the goal of improving image quality while maintaining realism.
Unlike some naturalness-preserved enhancement algorithms, which may struggle
with non-uniform illumination and introduce artifacts, illumination map estimation
prioritizes the preservation of natural lighting effects. This results in enhanced
images that closely resemble the original scene, with improved visibility and detail
in low light conditions.
CHAPTER 3
LOW LIGHT IMAGE ENHANCEMENT VIA ILLUMINATION
MAP ESTIMATION
3.1 INTRODUCTION
In the field of digital image processing, image quality plays a pivotal role in
determining the effectiveness of higher-level vision tasks such as object detection,
classification, tracking, and scene understanding. However, in real-world scenarios,
images are often captured under suboptimal lighting conditions due to environmental
constraints, inadequate exposure settings, or low-cost imaging hardware. These low-
light images typically suffer from poor visibility, low contrast, high noise, and color
distortion, which collectively degrade both the visual appeal and the performance of
computer vision algorithms.
To address this challenge, a variety of low-light image enhancement
techniques have been proposed. Traditional methods such as Histogram Equalization
(HE), Adaptive Histogram Equalization (AHE), and Gamma Correction aim to
stretch or shift the dynamic range of pixel intensities to improve brightness and
contrast. Although computationally simple, these techniques often lead to over-
enhancement, color distortion, and a lack of structure preservation. Retinex-based
models, inspired by human visual perception, attempt to separate the image into
illumination and reflectance components. While these models improve
interpretability, many suffer from issues like noise amplification and unnatural color
rendering, especially in the presence of non-uniform illumination.
Recent advancements in low-light enhancement include learning-based
approaches, which leverage deep neural networks to learn complex mappings from
dark to bright images. Although effective, these methods are often computationally
intensive, require large-scale labeled datasets, and offer limited interpretability due
to their black-box nature.
Among the physically interpretable and computationally efficient methods,
Low-light Image Enhancement via Illumination Map Estimation (LIME) has gained
attention. LIME estimates an illumination map directly from the input image and
uses it to enhance brightness in a non-uniform yet controlled manner. However, the
original LIME method can still be improved in terms of structure preservation, noise
handling, and adaptability.
This study proposes a modified implementation of the LIME algorithm with
refined illumination estimation and structure-aware enhancement. The objective is
to enhance the visual quality of low-light images while maintaining computational
efficiency and preserving natural details. The proposed approach is evaluated against
existing methods to demonstrate its effectiveness and applicability in practical
scenarios.
3.2 Methodology
The illumination map is then refined to preserve the prominent structure of the
image while suppressing minor textural details. The refinement is achieved by
minimizing an energy function that balances fidelity to the initial illumination map
and spatial smoothness. The optimization incorporates gradient-based weighting,
where stronger gradients are preserved to maintain edges, and weaker gradients are
smoothed to reduce noise. This technique ensures that the enhanced image maintains
its natural structure and avoids the artifacts commonly introduced by basic
smoothing techniques.
After obtaining the refined illumination map, the input image is enhanced by
performing pixel-wise division of the original intensity values by the corresponding
illumination values. To further control the brightness and contrast, a gamma
correction step is applied to the illumination map before enhancement. This allows
the user to fine-tune the visual appearance of the output based on desired brightness
levels. Since division by small illumination values can also amplify noise, a
post-processing denoising step using BM3D is optionally
applied, particularly in darker regions of the image. This hybrid methodology
balances efficiency, interpretability, and enhancement quality, making it suitable for
practical low-light image enhancement tasks.
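The division-and-gamma enhancement step described above can be written compactly. A small sketch, with illustrative function and parameter names:

```python
import numpy as np

def enhance(L, T, gamma=0.8, eps=1e-3):
    """Enhance a low-light image L (H x W x 3, floats in [0, 1]) using the
    refined illumination map T (H x W): R = L / T^gamma, per channel."""
    Tg = np.clip(T, eps, 1.0) ** gamma       # gamma-correct; avoid divide-by-zero
    R = L / Tg[..., None]                    # pixel-wise division per channel
    return np.clip(R, 0.0, 1.0)
```

Lowering `gamma` brightens dark regions more aggressively, while the `eps` floor prevents extreme amplification where the estimated illumination is near zero.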
The following are the steps required for this method:
1) Illumination map estimation
2) Exact solver to the optimization problem
3) Post-processing
4) Denoising the image
Step 1: One of the first color constancy methods, Max-RGB [8], tries to estimate the
illumination by seeking the maximum value of the three color channels, R, G and
B. But this estimation can only boost the global illumination. To handle
non-uniform illuminations, we alternatively adopt the following initial estimation:
Tˆ(x) = max over c ∈ {R, G, B} of L_c(x).
The obtained Tˆ(x) guarantees that the recovery will not be saturated, because
Tˆ(x) ≥ L_c(x) for every channel c, so L_c(x)/Tˆ(x) ≤ 1.
where Ω(x) is a region centered at pixel x, and y is the location index within the
region. These schemes can somewhat enhance the local consistency, but they are
structure-blind.
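The initial estimation above is a single per-pixel maximum over channels; a minimal sketch, assuming L is an RGB array with values in [0, 1]:

```python
import numpy as np

def initial_illumination(L):
    """Initial illumination map T_hat: per-pixel maximum over R, G, B.
    Guarantees L_c(x) / T_hat(x) <= 1 for every channel c, so the
    recovered reflectance is never saturated."""
    return L.max(axis=2)
```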
Step 3: We provide a more powerful, structure-aware scheme to better achieve this
goal. Based on the initial illumination map Tˆ, we propose to solve the following
optimization problem:
min over T of ||Tˆ − T||_F^2 + α ||W ◦ ∇T||_1,
where α is the coefficient balancing the two terms, ||·||_F and ||·||_1 designate the
Frobenius and L1 norms respectively, ◦ denotes element-wise multiplication, W is
the weight matrix, and ∇T is the first-order derivative, which in this work contains
only ∇hT (horizontal) and ∇vT (vertical) components.
The first term enforces fidelity between the initial map Tˆ and the refined
one T, while the second term enforces (structure-aware) smoothness.
2) Exact Solver to the Problem (Augmented Lagrangian Method)
where I is the identity matrix of appropriate size, and D contains Dh and Dv,
the Toeplitz matrices formed from the discrete gradient operators with forward
difference. We note that, for convenience, the operations DX and D^T X represent
reshape(Dx) and reshape(D^T x), respectively.
Step 3: Dropping the terms unrelated to G leads to an optimization sub-problem
whose closed-form solution can be easily obtained by performing the shrinkage
(soft-thresholding) operation:
S_ε[x] = sign(x) · max(|x| − ε, 0).
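The shrinkage operation is the standard soft-thresholding used in L1 solvers; a one-line sketch:

```python
import numpy as np

def shrink(x, eps):
    """Soft-thresholding S_eps[x] = sign(x) * max(|x| - eps, 0),
    the closed-form minimizer of eps*|g| + 0.5*(g - x)^2, element-wise."""
    return np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)
```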
3) Post-Processing
Algorithm for the LIME method
INPUT: Low-light image L; positive coefficients α, γ, ρ, and µ(0)
INITIALIZE: k = 0; G(0) = Z(0) = 0 ∈ R^(2M×N); estimate the initial
illumination map Tˆ on L
while k < k0 do
    Update T(k+1);
    Update G(k+1);
    Update Z(k+1) and µ(k+1);
    k = k + 1;
end
Apply Gamma correction on T(k0);
Obtain R using the Gamma-corrected T(k0);
Denoise the result using BM3D;
OUTPUT: Final enhanced result R
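Putting the stages together, a minimal end-to-end sketch follows. Note that the exact Augmented Lagrangian solver is replaced here by a simple Gaussian-smoothing proxy for the refinement step and the BM3D denoising stage is omitted, so this illustrates the flow of the algorithm rather than its full implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lime_enhance(L, gamma=0.8, sigma=3.0, eps=1e-3):
    """Simplified LIME-style enhancement.

    L: H x W x 3 float image in [0, 1].
    1) initial illumination map = per-pixel max over channels
    2) refinement -- a Gaussian-smoothing stand-in for the
       structure-aware optimization solved above
    3) gamma correction of the map, then pixel-wise division
    """
    T0 = L.max(axis=2)                         # step 1: initial map
    T = gaussian_filter(T0, sigma=sigma)       # step 2: smoothing proxy
    T = np.clip(T, eps, 1.0) ** gamma          # step 3: gamma correction
    R = np.clip(L / T[..., None], 0.0, 1.0)    # enhancement by division
    return R
```

Swapping the `gaussian_filter` line for the ALM solver described above recovers the structure-aware behaviour of the full method.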
Fig 3.1 Results of comparison between the original and enhanced images
3.4 Summary
CHAPTER 4
EXPERIMENTAL RESULTS
For our experiment, input images were resized and normalized to a consistent
resolution to ensure uniform processing. The dataset consists of low-light images in
BMP format, processed using a custom Python implementation of the modified
LIME algorithm. All experiments were conducted on a system with Intel Core i5
processor @2.40 GHz and 8 GB RAM, using Python 3.10 along with libraries such
as NumPy, SciPy, Matplotlib, and scikit-image. The enhancement process includes
estimation of the initial illumination map using the maximum RGB values, followed
by structure-aware refinement through iterative optimization involving Laplacian
vectors and directional derivatives. Gamma correction is applied to adjust
brightness, and the final enhanced image is obtained by normalizing the input with
the refined illumination map. A total of 10 iterations were performed per image. For
qualitative analysis, enhanced images were visually compared with their original
versions using side-by-side visualization. Quantitative evaluation was done using
the Lightness Order Error (LOE) metric, where a lower LOE value indicates better
enhancement. The entire pipeline automates loading, processing, saving, and
comparing results, and outputs were stored for reproducibility and comparison.
Lightness Order Error (LOE)
The LOE is defined as
LOE = (1/m) · Σ over x of RD(x), with RD(x) = Σ over y of U(Q(x), Q(y)) ⊕ U(Qr(x), Qr(y)),
where:
m is the number of pixels;
Q(x) and Qr(x) are the maximum intensity values among the R, G, and B channels
at location x in the enhanced and reference images, respectively;
⊕ denotes the exclusive-or operation; and
U(p, q) is a step function: U(p, q) = 1 if p ≥ q, and 0 otherwise.
A lower LOE value indicates better preservation of the natural lightness ordering,
meaning the enhancement method more effectively maintains the natural appearance
of the image.
To further substantiate the qualitative visual improvements, LOE values are
computed for each enhanced image. These values are then used as a comparative
measure between different enhancement methods. The images that yield lower LOE
scores tend to maintain the relative brightness structure of the original scenes,
preserving critical perceptual cues such as shading, contrast, and lighting direction.
This quantitative metric helps in objectively ranking enhancement results, especially
when visual judgment is subjective or inconsistent. For instance, when multiple
algorithms produce visually similar results, LOE serves as a decisive indicator of
which method better preserves the inherent illumination relationships.
In practice, enhanced images with LOE scores below a threshold (e.g., < 2.0) are
typically perceived as visually natural and consistent with human perception.
Conversely, higher LOE values signify that the enhancement process has altered the
original lightness relationships, leading to artifacts or unnatural appearance.
The Lightness Order Error (LOE) values displayed below each image represent the
degree to which the enhanced image preserves the natural lightness ordering
compared to the original. A lower LOE value indicates better preservation of visual
realism and natural lighting. As seen, the images with smaller LOE scores (e.g.,
1.2543) demonstrate more natural enhancement results, while higher values reflect
greater deviation. These quantitative scores support visual evaluation by objectively
measuring enhancement performance across different methods or algorithms.
CHAPTER 5
CONCLUSION
In this study, a modified implementation of the LIME algorithm was developed to enhance
low-light images effectively. By incorporating structure-aware illumination
refinement and gamma correction, the method successfully improves visibility in
dark regions while preserving important image details. Experimental results
demonstrate that the proposed approach achieves visually appealing enhancement
and lower error levels compared to traditional methods. The use of a physically
interpretable model ensures computational efficiency and practical applicability.
Overall, the method provides a balanced solution for real-world low-light image
enhancement tasks and serves as a reliable foundation for further improvements and
integration into vision-based applications.
REFERENCES
[10] D. Jobson, Z. Rahman, and G. Woodell, “A multi-scale retinex for
bridging the gap between color images and the human observation of scenes,”
TIP, vol. 6, no. 7, pp. 965–976, 1997.
[11] S. Wang, J. Zheng, H. Hu, and B. Li, “Naturalness preserved
enhancement algorithm for non-uniform illumination images,” TIP, vol. 22,
no. 9, pp. 3538–3548, 2013.
[12] X. Fu, D. Zeng, Y. Huang, Y. Liao, X. Ding, and J. Paisley, “A fusion-
based enhancing method for weakly illuminated images,” Signal Processing,
vol. 129, pp. 82–96, 2016.
[13] X. Fu, D. Zeng, Y. Huang, X. Zhang, and X. Ding, “A weighted
variational model for simultaneous reflectance and illumination estimation,”
in CVPR, pp. 2782–2790, 2016.
[14] X. Dong, G. Wang, Y. Pang, W. Li, J. Wen, W. Meng, and Y. Lu, “Fast
efficient algorithm for enhancement of low lighting video,” in ICME, pp. 1–
6, 2011.
[15] L. Li, R. Wang, W. Wang, and W. Gao, “A low-light image enhancement
method for both denoising and contrast enlarging,” in ICIP, pp. 3730–3734,
2015.
[16] R. Grosse, M. Johnson, E. Adelson, and W. Freeman, “Ground-truth
dataset and baseline evaluations for intrinsic image algorithms,” in ICCV, pp.
2335–2342, 2009.
[17] P. Gehler, C. Rother, M. Kiefel, L. Zhang, and B. Schölkopf, “Recovering
intrinsic images with a prior on reflectance,” in NIPS, pp. 765–773, 2011.