[go: up one dir, main page]

0% found this document useful (0 votes)
19 views49 pages

Final Report

The document presents a project report on 'Low Light Image Enhancement Via Illumination Map Estimation' submitted for a Bachelor of Technology degree at Acharya Nagarjuna University. It outlines the challenges of low-light image quality and proposes a method to enhance visibility by estimating and refining an illumination map, demonstrating its effectiveness through experiments. The report includes acknowledgments, an abstract, and a structured analysis of existing methods and objectives for improving low-light image enhancement techniques.

Uploaded by

praneethsaisuda
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
19 views49 pages

Final Report

The document presents a project report on 'Low Light Image Enhancement Via Illumination Map Estimation' submitted for a Bachelor of Technology degree at Acharya Nagarjuna University. It outlines the challenges of low-light image quality and proposes a method to enhance visibility by estimating and refining an illumination map, demonstrating its effectiveness through experiments. The report includes acknowledgments, an abstract, and a structured analysis of existing methods and objectives for improving low-light image enhancement techniques.

Uploaded by

praneethsaisuda
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 49

Low Light Image Enhancement Via Illumination

Map Estimation
A PROJECT REPORT

Submitted in partial fulfillment of requirements to

ACHARYA NAGARJUNA UNIVERSITY


For the award of the degree
Bachelor of Technology
in
INFORMATION TECHNOLOGY
By
Sathi Sai Vinay Kashyap(Y21IT108)
Musuluri Sri Vardhan(Y21IT081)
Marakala Manjunadh(Y21it075)

JUNE-2025
R.V.R & J.C. COLLEGE OF ENGINEERING (AUTONOMOUS)
NAAC A+ Grade, NBA Accredited (Approved by A.I.C.T.E)
(AFFILIATED TO ACHARYA NAGARJUNA UNIVERSITY)
Chandramoulipuram::Chowdavaram, GUNTUR
DEPARTMENT OF INFORMATION TECHNOLOGY
RVR & JC COLLEGE OF ENGINEERING
(AUTONOMOUS)

BONAFIDE CERTIFICATE

This is to certify that this project work titled “Low Light Image Enhancement
Via Illumination Map Estimation” is the Bonafide work of Sathi Sai Vinay
Kashyap Reddy(Y21IT108), Musuluri Sri Vardhan( Y21IT081), Marakala
Manjunadh (Y21IT075) who have carried the work under my supervision and
submitted in partial fulfillment of the requirement for the award of the degree,
BACHELOR OF TECHNOLOGY, during the year 2024-2025.

Dr.A.Srikrishna
Prof. & HOD, Dept of IT
ACKNOWLEDGEMENT

The successful completion of any task would be incomplete without a proper


guidance and environment. Combination of these three factors acts like
backbone to our Term Paper “LOW LIGHT IMAGE ENHANCEMENT
VIA ILLUMINATION MAP ESTIMATION ”.

We would like to express our gratitude to the Management of R.V.R & J.C
College of Engineering for providing us with a pleasant environment and
excellent lab facility.

We regard our sincere thanks to our Principal, Dr. Kolla Srinivas for providing
support and stimulating environment.

We are greatly indebted to Dr.A.Srikrishna, Professor, and Head of the


Department Information Technology, for her valuable suggestions during
course period.

We would like to express our special thanks of gratitude to our guide Smt
.K.Chandhana, Asst. Professor who helped us in doing the Term Paper
successfully.

We would be thankful to all the teaching and non-teaching staff of the


department of Information Technology for the cooperation given for the
successful completion of project.

SATHI SAI VINAY KASHYAP REDDY(Y21IT108)


MUSULURI SRI VARDHAN(Y21IT081)
MARAKALA MANJUNADH(Y21IT075)

i
ABSTRACT

When one captures images in low-light conditions, the images often


suffer from low visibility. Besides degrading the visual aesthetics of
images, this poor quality may also significantly degenerate the
performance of many computer vision and multimedia algorithms that
are primarily designed for high quality inputs. In this paper, we propose
a simple yet effective low-light image enhancement (LIME) method.
More concretely, the illumination of each pixel is first estimated
individually by finding the maximum value in R, G and B channels.
Further, we refine the initial illumination map by imposing a structure
prior on it, as the final illumination map. Having the well constructed
illumination map, the enhancement can be achieved accordingly.
Experiments on a number of challenging low-light images are present to
reveal the efficacy of our LIME and show its superiority over several
state-of-the-arts in terms of enhancement quality and efficiency.

ii
LIST OF CONTENTS

ACKNOWLEDGEMENT i

ABSTRACT ii

LIST OF TABLES v

LIST OF FIGURES vi

CHAPTER 1 INTRODUCTION 1

1.1 Applications 2

1.2 Literature Survey 3

1.3 Datasets 7

1.4 Drawbacks of Existing methods 8

1.5 Objectives of The Present Study 9

1.6 Scope of The Present Study 9

CHAPTER 2 VARIANTS OF LOCAL PATTERNS 10

2.1 Face Recognition Using Local Binary Pattern 10

2.1.1 Introduction 10

2.1.2 Methodology 13

2.1.3 Results and Discussion 16


2.2 Face Recognition Using Local Directional Pattern 17

2.2.1 Introduction. 17

2.2.2 Methodology 20
2.2.3 Results and Discussion 23

2.3 Summary 24

CHAPTER 3 COMPARISON OF HIGHER ORDER 25


LOCAL NORMAL DERIVATIVE
PATTERNS FOR 3D FACE RECOGNITION
SYSTEM
3.1 Introduction 26

3.2 Methodology 31

3.3 Results and Discussion 34

3.4 Summary 35

CHAPTER 4 PERFORMANCE EVALUATION 36

4.1 Experimental Setup 36

4.2 Performance Metrices 38

4.3 Experimental Results 38

CHA PTER 5 CONCLUSION 39


LIST OF TABLES

S.NO DESCRIPTION PAGE NO

4.1 Performance metrics of enhanced images by 38


using LOE method

v
LIST OF FIGURES

S.NO DESCRIPTION PAGE NO

1.1 List of sample Datasets 7

2.1 Procedure for Histogram Equalization Multi-peak 13

Technique

2.2 Resultant images of various methods comparision to 15

Multi-peak Algorithm

2.3 Flowchart of Naturalness preserved Enhancement 19

Algorithm

2.4 Resultant images of various methods comarision to 22

Naturalness preserved Enhancement Algorithm

3.1 Results of comparision between original image and 34

enhanced image.

vi
CHAPTER 1
INTRODUCTION

Image quality is fundamental for computer vision applications, yet low-


light conditions severely degrade image visibility, detail, and introduce noise,
consequently hampering the performance of algorithms trained on high-quality data.
While traditional enhancement methods like histogram equalization and Retinex
offer partial solutions, they often suffer from over-enhancement, color distortion, or
neglect structural information. Furthermore, some techniques rely on strong prior
assumptions or intensive computation, limiting their practicality..

The Low-light Image Enhancement via Illumination Map Estimation


(LIME) method addressed these limitations by proposing a straightforward yet
effective framework. LIME operates by estimating an illumination map from the
input image, which is then used to reconstruct a well-lit version. This approach,
particularly when augmented with structure-aware refinement, has demonstrated
notable efficiency and practical applicability in enhancing low-light images..

Building upon the foundational principles of LIME, our work aims to


further enhance performance and adaptability. We maintain the core concept of
illumination-guided enhancement but introduce key modifications. These
advancements are designed to improve robustness across various low-light scenarios
while preserving computational efficiency, ultimately leading to improved visual
clarity and detail in dark regions, better preservation of natural scene characteristics,
and broader applicability in real-world settings.

1
1.1Applications:

Medical Imaging:
In medical imaging, low light image enhancement helps improve the clarity and
detail of images captured in low-light environments, such as endoscopy,
microscopy, and diagnostic imaging.

Photography and Filmmaking:


Low light image enhancement techniques are used by photographers and filmmakers
to improve the quality of images and footage captured in low-light conditions, such
as indoor settings, night scenes, and artistic compositions.

Environmental Monitoring:
Enhanced visibility in low-light conditions is crucial for environmental monitoring
applications, such as wildlife observation, forestry, and conservation efforts,
allowing researchers to study nocturnal animals and monitor ecosystems during
nighttime.

Search and Rescue Operations:


Low light image enhancement aids search and rescue operations in low-light
conditions, such as during nighttime or in dark environments, helping rescuers locate
individuals, objects, or hazards more effectively.

Underwater Imaging:
Deep-sea environments often lack sufficient natural light, making visibility a
challenge. Image enhancement boosts clarity and color in underwater photography,
helping marine researchers explore aquatic life with greater precision.

2
1.2 Literature Survey
Guo et al. [3] introduced the LIME (Low-light Image Enhancement via
Illumination Map Estimation) framework, which estimates the illumination of each
pixel using the maximum among its R, G, and B channel values. This initial map is
then refined using a structure-aware smoothing technique to preserve edges and
suppress noise. The final enhanced image is obtained by dividing the input by the
refined illumination map, with optional gamma correction and denoising using
BM3D. Their method demonstrates high efficiency and superior visual quality on
low-light images compared to HE, AHE, and other Retinex-based techniques.
Fu et al. [4] proposed a Multi-Deviation Fusion (MF) approach that generates
several illumination maps and fuses them to enhance image quality. Although
effective, MF lacks structural awareness and may blur textured regions. Li et al. [5]
addressed this by segmenting images and performing adaptive denoising within
segments, yielding improved visual results but at a higher computational cost.
Dong et al. [6] observed that inverted low-light images resemble hazy images
and thus applied dehazing techniques to enhance brightness. However, the physical
model underlying this approach is less intuitive and may produce unrealistic results
in certain cases.
Jobson et al. [7], through Retinex theory, decomposed images into reflectance
and illumination components. Single-Scale and Multi-Scale Retinex (SSR, MSR)
methods achieved good enhancement but often resulted in unnatural appearances
due to over-enhancement. SRIE [8] improved upon these by jointly estimating
reflectance and illumination via weighted variational models.
Huang et al. [9] proposed a local descriptor-based enhancement using
extended LBP on range images. Although designed for 3D face recognition, their
approach highlights the significance of structural features in low-light contexts.

3
Lore et al. [10] pioneered a deep learning approach called LLNet, which trains
stacked sparse denoising autoencoders for enhancing dark images. While this
method is robust, it requires large training data and lacks interpretability.
Wei et al. [11] introduced Retinex-Net, a deep neural network that separates
illumination and reflectance in an end-to-end trainable architecture. Though it
produces high-quality results, it is resource-intensive and demands labeled image
pairs.
Zhang et al. [12] developed Zero-DCE, a zero-reference deep curve
estimation model that adjusts image contrast through learned mapping curves. It
enhances real-world low-light images effectively without ground truth, but may
overfit certain illumination distributions.
Zhang and Guo [13] also proposed EnlightenGAN, a generative adversarial
network trained to learn illumination-aware mappings. This method generalizes well
but suffers from instability in training and potential hallucinations in complex
scenes.
Chen et al. [14] proposed a deep image prior-based method that models the
structure of low-light images without external data. Despite its novelty, it is slow
and unsuitable for real-time applications.
Zhao et al. [15] focused on histogram specification and tone mapping,
combining classical techniques with adaptive weighting. While efficient, this
method offers limited performance in severely underexposed areas.
Lv et al. [16] developed MBLLEN, a multi-branch low-light enhancement
network, to extract illumination and color details in a parallel manner. Though
visually impressive, it adds significant training complexity.
Gu et al. [17] proposed a self-regularized attention mechanism for low-light
enhancement by introducing spatial attention into Retinex decomposition. Their
approach effectively artifacts and noise in non-uniform lighting but requires careful

4
tuning of attention parameters for stable results.
Ren et al. [18] introduced a hybrid method combining traditional Retinex
theory with lightweight CNN modules. Their goal was to maintain low
computational overhead while leveraging learned features. Although faster than
many deep models, the enhancement quality is often subpar in extremely dark
regions.
Li et al. [19] developed a frequency-aware Retinex model that enhances
details using multi-band decomposition. By adjusting brightness in both spatial and
frequency domains, their method achieves balanced enhancement, but is prone to
edge ringing in sharp transitions.
Jiang et al. [20] explored dual-branch networks for joint brightness and detail
enhancement. One branch restores global illumination while the other sharpens local
contrast. Their method improves clarity in low-light scenes but is limited by high
training complexity.
Wang et al. [21] proposed an iterative enhancement model where illumination
is refined progressively through learned feedback. While this method adapts well to
various illumination levels, it may over-amplify bright regions if not constrained
effectively.
Kim et al. [22] focused on integrating human visual perception models into
deep networks. Their perceptual-based enhancement achieves natural brightness and
tone, but lacks adaptability in synthetic low-light datasets due to domain shift.
In summary, recent trends in low-light image enhancement show a transition
from hand-crafted illumination models to data-driven learning frameworks. Despite
improvements in visual quality, many learning-based methods face challenges such
as overfitting, lack of interpretability, or high computational cost. This motivates the
continued interest in physically-grounded models like LIME.

5
1.3 Datasets

a) Buildings at Thunder b) Building at moon light

c) Flower d) Road

6
e)Street f) Star

g) Robot h)Cube
fig 1.1 Sample datasets

7
1.3 Drawbacks of Existing Methods
1. Over-Enhancement and Loss of Naturalness
Many Retinex-based and histogram equalization methods tend to over-enhance
the image, resulting in unnatural colors and exaggerated contrast.
2. Lack of Structural Preservation
Classical methods like Gamma Correction or basic Retinex approaches often fail
to preserve image edges and fine details, causing blurring or artifacting in structured
regions.
3. Amplification of Noise in Dark Regions
Enhancing brightness without denoising can lead to significant noise
amplification, especially in very low-light inputs (e.g., Retinex, LIME without post-
processing).
4. Color Distortion
Several techniques (especially histogram-based ones) alter the original color
distribution, leading to hue shifts or unnatural tones, particularly under non-uniform
illumination.
5. High Computational Complexity
Deep learning methods (e.g., Retinex-Net, EnlightenGAN, MBLLEN) require
large models, powerful hardware, and long training times, making them unsuitable
for real-time or embedded applications.

8
1.5 Objectives of the Present Study
1. To improve the illumination estimation process for low-light images using a
modified LIME framework.
2. To preserve structural details and edges through refined, structure-aware
smoothing techniques
3. To enhance visibility in dark regions while preventing overexposure in already
bright areas.
4. To reduce noise amplification by integrating optional post-processing like
denoising.
5. To achieve a balance between enhancement quality and computational efficiency
for practical use.

1.6 Scope of the Present Study


To accomplish the stated objectives, the present study is organized into five
chapters. The first chapter provides a detailed overview of image enhancement
concepts and discusses the Histogram Equalization using multi-peak algorithm,
including its methodology and results. The second chapter focuses on the
Naturalness Preserved Enhancement (NPE) method, emphasizing its illumination
estimation process and ability to retain perceptual realism. The third chapter
introduces the proposed low-light image enhancement approach based on a modified
LIME algorithm, elaborating on its illumination map construction, refinement
techniques, and enhancement process. The fourth chapter outlines the experimental
setup and presents a comparative analysis between the proposed method and existing
algorithms, evaluating performance through both qualitative and quantitative
metrics. Finally, the fifth chapter summarizes the key findings of the study, outlines
the conclusions drawn, and suggests directions for future work.

9
CHAPTER 2
EXISTING METHODS

2.1 Histogram Equalization using Multi-peak GHE Technique

2.1.1 Introduction

Image enhancement is a fundamental aspect of low-level image processing,


aimed at improving the visual quality of images, particularly those with low contrast.
The main goal is to increase the intensity differences between objects and
backgrounds, making features within the image more distinguishable for both human
interpretation and automated analysis. Enhancement techniques generally fall into
two categories: global methods and local methods. Global methods, such as
Histogram Equalization (HE), adjust the entire intensity distribution of an image to
achieve a more uniform spread of pixel values. Although HE is widely used due to
its simplicity and efficiency, it often results in over-enhancement or under-
enhancement, especially in images containing multiple objects or complex contrast
variations.

Local methods focus on modifying image contrast based on localized features like
edges, local means, and standard deviations. Techniques such as local histogram
equalization, histogram stretching, and nonlinear mappings—including square,
exponential, and logarithmic functions—are employed to improve local texture and
details. However, these methods sometimes introduce distortions by altering the
original order of gray levels, leading to potential loss of visual consistency.

Multi-peak Generalized Histogram Equalization (multi-peak GHE) integrates both


global and local information to enhance image quality more effectively. It utilizes
local features, such as edge values obtained through operators like Laplacian and
Sobel, and combines them with histogram equalization to achieve adaptive contrast

10
enhancement. The method maintains better control over gray-level relationships,
reducing the risk of distortions while enhancing textural details. This approach
demonstrates effective performance across various images, particularly those with
narrow intensity distributions, and offers clear improvements in image clarity and
detail visibility.

2.1.2 Methodology

The image enhancement process is carried out by combining global histogram


equalization with local information to improve both overall contrast and textural
details. The method begins by calculating local edge information using operators
such as the Laplacian, which captures the intensity variations in the image. The edge
value for each pixel is computed, and both the original intensity and the edge value
are normalized to bring them into a consistent range.

A generalized function combines the normalized intensity and edge information,


controlled by two parameters: the distortion factor (α) and the enhancement factor
(β). The distortion factor regulates the influence of local information on the
enhancement, while the enhancement factor fine-tunes the contrast adjustment. The
combined values are then mapped back to the full gray-level range of the image.

The next step involves generating a histogram based on these combined values.
Since the raw histogram may contain noise, smoothing techniques are applied to
reduce fluctuations. The smoothed histogram is then analyzed to identify multiple
peaks, and the histogram is segmented accordingly. Each segment is equalized
independently, allowing adaptive enhancement across different intensity ranges.
This process results in improved contrast and better preservation of fine details, even
in images with complex or narrow intensity distribution.

11
2.1.2.1 GHE Multipeak Algorithm Procedure
Step 1: Calculate the edge values V (x, y).
Step 2: Get the values of u(x, y) and v(x, y) by normalizing I (x, y) and V (x, y)

Step 3: Calculate the values p(x, y) for every pixel.

Step 4: Map the range of p(x, y) into [Gmin, Gmax]

Step 5: Calculate the histogram H (p) based on the values G(x, y)


Step 6: Computing the local minimums, {pi, i = 1,,m − 1}, and let p0 = Gmin, pm
max
Step 7: Equalize the histogram H (p) piecewise and independently according to the
segments between pi and pi+1 {i = 0, 1,...,m − 1}. Finally, output the enhanced
image.

12
Fig 2.1 Procedure for Histogram Equalization Multi-peak Technique

13
2.1.3 Results:

The experimental results indicate the efficacy of the image enhancement


method, particularly on images characterized by narrow intensity distributions.
Traditional Histogram Equalization (HE) often leads to over-enhancement or under-
enhancement, failing to maintain a balanced enhancement across different image
regions. Multi-peak HE improves upon this to some extent but lacks the utilization
of local information, resulting in less effective enhancement of image textures.
In contrast, the multi-peak Global Histogram Equalization (GHE) method exhibits
superior performance. It avoids over- or under-enhancement issues, ensuring clear
details and textures throughout the image. Notably, the method preserves the clarity
of intricate details like castle structures and clouds without sacrificing other image
components.
The effectiveness of this method extends to images with narrow intensity
distributions, whether they are dark or bright. Traditional HE and multi-peak HE
struggle to maintain balanced enhancements in such cases, often resulting in
darkened or blurred regions. However, this method successfully overcomes these
issues, producing images with enhanced textures and details.

One significant advantage of this approach lies in its ability to create more
uniform intensity distributions. Unlike traditional HE methods, which may map
consecutive gray levels to the same value, multi-peak GHE ensures that pixels with
the same intensity may have different values based on their local information. This
leads to a more uniform distribution of gray levels, preserving image details and
textures more effectively.
Furthermore, this algorithm offers fast processing times, making it suitable
for real-time applications. With a processing time of only 0.15 seconds for enhancing

14
a 512 × 512 gray level image, the algorithm proves to be efficient even on relatively
older hardware configurations.
In summary, the experimental results demonstrate the effectiveness and efficiency
of the proposed multi-peak GHE method in enhancing images with narrow intensity
distributions. By leveraging local information and ensuring a more uniform intensity
distribution, the approach produces superior image enhancements with clearer
textures and details, overcoming the limitations of traditional HE methods.

Fig.2.2(a) Original Image Fig.2.2(b) HE

Fig.2.2 (c) Multi Peak GHE Fig.2.2(d) Multi peak GHE(α = 5, β .01)

15
2.1.4 Discussion

The experimental findings emphasize the effectiveness of this mage


enhancement technique, particularly concerning images with narrow intensity
distributions. Traditional methods like standard Histogram Equalization (HE) often
struggle to produce satisfactory results with such images, leading to over or under-
enhancement. In contrast, this multi-peak Global Histogram Equalization (GHE)
method demonstrates superior performance in achieving balanced enhancements
without sacrificing image details. Comparisons with standard HE and multi-peak HE
underscore the limitations of these traditional approaches. While standard HE may
cause over or under-enhancement, multi-peak HE provides marginal improvements
but lacks the utilization of local information. This deficiency becomes evident in the
texture enhancement, where this method excels by preserving intricate details such
as the clarity of castles and clouds
The confirmation of observations through additional experiments further
solidifies the efficacy of this approach across various image types, regardless of their
initial intensity distribution. Generating dark or bright images from the original
serves as a validation of this method's versatility and effectiveness, highlighting its
ability to adapt to different brightness levels while maintaining high-quality
enhancement. In conclusion, the discussion encapsulates the significant findings of
the experiments, highlighting the effectiveness, versatility, and efficiency of this
image enhancement approach. By overcoming the limitations of traditional methods
and leveraging local information, this method offers a promising solution for
enhancing images with narrow intensity distributions, with broad implications
across diverse applications.

16
2.2 Naturalness preserved Enhancement Algorithm for Non-
Uniform Illumination Images

2.2.1 Introduction
Image enhancement is a critical technique in the field of image processing,
aimed at improving the visual quality of images for better interpretation or further
analysis. Various enhancement methods have been proposed, including histogram
equalization, unsharp masking, and Retinex-based approaches. Among these,
Retinex theory has gained widespread adoption due to its ability to enhance image
details by separating reflectance from illumination. However, a common drawback
of many Retinex-based algorithms is their tendency to over-enhance details at the
cost of naturalness, particularly in images with non-uniform illumination. This often
results in visual artifacts, incorrect lighting perceptions, and a loss of the original
scene’s ambience.

Naturalness preservation is a vital goal in image enhancement, especially


when dealing with real-world images that contain complex lighting conditions. An
ideal enhancement algorithm should improve local contrast without altering the
global illumination cues or introducing artificial lighting effects. To address this
challenge, the authors of this paper propose a novel image enhancement algorithm
designed specifically for non-uniform illumination scenarios, with a strong emphasis
on preserving the natural appearance of the scene.

The proposed method introduces three key innovations. First, it defines a new
objective metric, the Lightness-Order-Error (LOE), to quantitatively evaluate
naturalness preservation based on the consistency of lightness order before and after
enhancement. Second, the algorithm employs a novel “bright-pass filter” to
accurately decompose an image into illumination and reflectance components while

17
ensuring the reflectance remains within a physically meaningful range. Third, a bi-
logarithmic transformation is applied to the illumination component, balancing the
enhancement of details with the preservation of overall scene lighting.

Experimental evaluations demonstrate that the proposed approach


outperforms several state-of-the-art techniques in enhancing image details without
compromising naturalness. This makes it particularly suitable for applications in
photography, remote sensing, and surveillance where maintaining the integrity of
visual information is crucial.

2.2.2 Methology

This image enhancement algorithm is designed to enhance image details while


preserving naturalness in scenes with non-uniform illumination. The methodology
involves three main stages: image decomposition, illumination mapping, and image
reconstruction.
First, the input image is decomposed into reflectance and illumination
components using a novel bright-pass filter. This filter identifies brighter
neighboring pixels within a defined local patch and uses their statistical frequency
to estimate illumination. This approach ensures that the computed reflectance
remains within the physically valid range of [0, 1].
Second, the illumination component is processed using a bi-logarithmic
transformation. Unlike traditional methods that may suppress details or distort
natural lighting, the bi-log transformation compresses the dynamic range of
illumination while preserving the relative lightness order. This maintains the visual
ambiance of the original scene.
Third, the final enhanced image is reconstructed by multiplying the
reflectance with the mapped illumination. This synthesis restores brightness and
contrast without introducing artifacts such as halos or unnatural lighting.

18
To objectively assess naturalness preservation, the Lightness-Order-Error (LOE)
metric is introduced, quantifying changes in lightness relationships between the
original and enhanced images.

Fig.2.3 Flowchart of Naturalness preserved Enhancement


2.2.2.1 Natural Preserved Enhancement Procedure
Step 1: Image Decomposition Using the Bright-Pass Filter.
Define the reflex lightness, which is the product of reflectance and illumination:

Step 2: Illumination Mapping Using the Bi-Log Transformation.


Mapped illumination can be obtained through the BLT transformation

19
Step 3: Synthesis of Reflectance and Mapped Illumination.
To enhance details and preserve naturalness, the mapped illumination is taken
into consideration. We synthesize R(x, y) and Lm(x, y) together to get the final
enhanced image

Step 4: Final Enhancement image

2.2.3 Results

The Naturalness Preserved Enhancement Algorithm for Non-Uniform


Illumination Images was evaluated using a comprehensive dataset containing over
150 images with varied lighting conditions, including daylight, cloudy skies,
nighttime, and uneven lighting environments. Its performance was compared against
several well-known enhancement techniques: Single Scale Retinex (SSR), Multi-
Scale Retinex (MSR), Generalized Unsharp Masking (GUM), Natural Enhancement
of Color Images (NECI), Brightness Preserving Dynamic Histogram Equalization
(BPDHE), and RACE.
Subjective assessments revealed that this algorithm enhances local details
effectively while maintaining the overall ambience of the scene. Unlike SSR and
MSR, which tend to distort illumination patterns and introduce halo artifacts, this
method retains realistic lighting direction and visual consistency. While GUM
improves sharpness, it often results in unnatural appearances due to over-
enhancement. NECI preserves naturalness well but sacrifices some local detail.
BPDHE maintains global brightness but struggles to enhance low-light regions.
RACE offers good color correction but alters the original scene's ambience.

20
Objective evaluations using metrics such as Discrete Entropy, Visibility Level
Descriptor (VLD), and Lightness-Order-Error (LOE) support these findings. This
method consistently showed high entropy and visibility scores, indicating strong
detail enhancement. Its LOE scores were among the lowest, confirming superior
preservation of natural lightness ordering.
Visual comparisons also demonstrated improved visibility in shadowed
regions and enhanced contrast in darker areas, without introducing visual artifacts
or unnatural color shifts. Images maintained their original atmosphere and structural
integrity across diverse scenes.

Fig.2.4(a) Original image Fig.2.4(b) Enhanced image of SSR

21
Fig.2.4(c)Enhanced Image of BPDHE Fig.2.4(d) Enhanced Image of MSR

Fig.2.4(e) Enhanced Image of Race Fig.2.4(f) Enhanced image of existing


algorithm

22
2.2.4 Discussion
This enhancement algorithm demonstrates a strong capability to balance detail
improvement and naturalness retention, particularly in challenging lighting
conditions. Its image decomposition using a bright-pass filter ensures physically
consistent reflectance values, avoiding over-enhancement that is common in other
methods. The bi-logarithmic transformation applied to the illumination component
enables effective dynamic range compression without compromising the spatial light
distribution.
In comparison with techniques like SSR and MSR, this method preserves the
scene’s ambience and avoids distortions of light direction. While methods like
BPDHE perform well in preserving brightness, they often fail to enhance low-light
regions adequately. GUM and NECI either over-enhance or underrepresent certain
features, while RACE can introduce color shifts that reduce visual realism.
The algorithm’s performance across various metrics entropy, VLD, and LOE shows
consistent alignment with both human visual preferences and statistical measures.
The LOE metric in particular highlights the algorithm’s effectiveness in maintaining
lightness relationships, which are essential for natural visual perception.
A noted limitation is its current focus on still images. Since illumination consistency
across frames is not addressed, applying this algorithm to video sequences may
introduce flickering. Future developments could aim to incorporate temporal
coherence for improved video performance.
In practical terms, the algorithm is well-suited for applications where both
visual clarity and naturalness are critical, such as digital photography, surveillance,
and remote sensing.

23
2.3 SUMMARY
Low light image enhancement via illumination map estimation offers several
advantages over alternative methods such as histogram equalization (HE), multi-
peak algorithms, and naturalness-preserved enhancement algorithms. Firstly,
illumination map estimation specifically addresses the challenges posed by low light
conditions, which often result in poor visibility and loss of detail. By accurately
estimating and enhancing the illumination map, this approach effectively improves
image quality and visibility in low light scenarios.
Unlike HE and multi-peak algorithms, which may lead to over-enhancement or
unnatural-looking results, illumination map estimation focuses on enhancing local
illumination while preserving naturalness. This targeted approach ensures that
enhancements are applied judiciously, resulting in visually pleasing images that
maintain the integrity of the scene.
Moreover, illumination map estimation offers greater flexibility and
adaptability compared to traditional enhancement methods. By dynamically
adjusting illumination levels based on local image characteristics, this approach can
effectively handle varying lighting conditions within the same scene. This ensures
consistent and optimal enhancement across different regions of the image, regardless
of lighting variations.
Additionally, low light image enhancement via illumination map estimation
aligns well with the goal of improving image quality while maintaining realism.
Unlike some naturalness-preserved enhancement algorithms, which may struggle
with non-uniform illumination and introduce artifacts, illumination map estimation
prioritizes the preservation of natural lighting effects. This results in enhanced
images that closely resemble the original scene, with improved visibility and detail
in low light conditions.

24
CHAPTER 3
LOW LIGHT IMAGE ENHANCEMNET VIA ILLUMINATION
MAP ESTIMATION

3.1 INTRODUCTION

In the field of digital image processing, image quality plays a pivotal role in
determining the effectiveness of higher-level vision tasks such as object detection,
classification, tracking, and scene understanding. However, in real-world scenarios,
images are often captured under suboptimal lighting conditions due to environmental
constraints, inadequate exposure settings, or low-cost imaging hardware. These low-
light images typically suffer from poor visibility, low contrast, high noise, and color
distortion, which collectively degrade both the visual appeal and the performance of
computer vision algorithms.
To address this challenge, a variety of low-light image enhancement
techniques have been proposed. Traditional methods such as Histogram Equalization
(HE), Adaptive Histogram Equalization (AHE), and Gamma Correction aim to
stretch or shift the dynamic range of pixel intensities to improve brightness and
contrast. Although computationally simple, these techniques often lead to over-
enhancement, color distortion, and a lack of structure preservation. Retinex-based
models, inspired by human visual perception, attempt to separate the image into
illumination and reflectance components. While these models improve
interpretability, many suffer from issues like noise amplification and unnatural color
rendering, especially in the presence of non-uniform illumination.
Recent advancements in low-light enhancement include learning-based
approaches, which leverage deep neural networks to learn complex mappings from
dark to bright images. Although effective, these methods are often computationally
intensive, require large-scale labeled datasets, due to their black-box nature.

25
Among the physically interpretable and computationally efficient methods,
Low-light Image Enhancement via Illumination Map Estimation (LIME) has gained
attention. LIME estimates an illumination map directly from the input image and
uses it to enhance brightness in a non-uniform yet controlled manner. However, the
original LIME method can still be improved in terms of structure preservation, noise
handling, and adaptability.
This study proposes a modified implementation of the LIME algorithm with
refined illumination estimation and structure-aware enhancement. The objective is
to enhance the visual quality of low-light images while maintaining computational
efficiency and preserving natural details. The proposed approach is evaluated against
existing methods to demonstrate its effectiveness and applicability in practical
scenarios.

3.1 Methodology

The proposed methodology enhances low-light images by building upon the


LIME (Low-light Image Enhancement via Illumination Map Estimation)
framework, introducing modifications that improve both illumination estimation and
structural detail preservation. The process begins by estimating an initial
illumination map from the input image. This is done by computing the maximum
value among the red, green, and blue (RGB) channels for each pixel. This choice
ensures that the illumination estimation is simple, fast, and physically meaningful,
effectively capturing the brightness information at each pixel location while
avoiding saturation in the final enhanced image.
Once the initial illumination map is obtained, it is refined using a structure-
aware optimization approach. This refinement process is crucial for enhancing the
visual quality of the final result, as it preserves the prominent structural edges of the

26
image while suppressing minor textural details. The refinement is achieved by
minimizing an energy function that balances fidelity to the initial illumination map
and spatial smoothness. The optimization incorporates gradient-based weighting,
where stronger gradients are preserved to maintain edges, and weaker gradients are
smoothed to reduce noise. This technique ensures that the enhanced image maintains
its natural structure and avoids the artifacts commonly introduced by basic
smoothing techniques.
After obtaining the refined illumination map, the input image is enhanced by
performing pixel-wise division of the original intensity values by the corresponding
illumination values. To further control the brightness and contrast, a gamma
correction step is applied to the illumination map before enhancement. This allows
the user to fine-tune the visual appearance of the output based on desired brightness
levels. To address this, a post-processing denoising step using BM3D is optionally
applied, particularly in darker regions of the image. This hybrid methodology
balances efficiency, interpretability, and enhancement quality, making it suitable for
practical low-light image enhancement tasks.
The following are the steps required for this method:
1) Illumination Map Estimation
2) Exact solver to Problem
3) Post processing
4) Denoising the image

1)Illumination map estimation

Step 1: The first color constancy methods, Max-RGB [8] tries to estimate the
illumination by seeking the maximum value of three color channels, say R, G and
B. But this estimation can only boost the global illumination. In this paper, to handle

27
non-uniform illuminations, we alternatively adopt the following initial estimation

The obtained Tˆ (x) guarantees that the recovery will not be saturated, because of:

Step 2: we employ 2 to initially estimate illumination map Tˆ,due to its simplicity,


although various approaches. Most of these improvements essentially consider the
local consistency of illumination by taking into account neighboring pixels within a
small region around the target pixel.

where Ω(x) is a region centered at pixel x, and y is the location index within the
region. These schemes can somewhat enhance the local consistency, but they are
structure-blind.
Step 3: we provide a more powerful scheme to better achieve this goal.
To address this issue, based on the initial illumination map Tˆ , we propose to solve
the following optimization problem:

where α is the coefficient to balance the involved two terms and, |·|F
and|·|1.designate the Frobenious and L1norms, respectively. Further, W is the
weight matrix, and ∇T is the first order derivative filter. In this work, it only contains
∇hT (horizontal) and ∇vT (vertical).
The first term takes care of the fidelity between the initial map Tˆ and the refined

28
one T, while the second term considers the (structure-aware) smoothness.
2) Exact Solver to Problem(Augmented Lagrangian Method)

Step1: An auxiliary variable G is introduced to replace ∇T for making the problem


separable and thus easy to solve. Accordingly, ∇T = G is added as a constraint. As
a result, we have the following equivalent optimization problem:

Step2: Collecting T terms from the above equation.

The solution can be computed through differentiating above with respect to


T and setting it to 0:

where I is the identity matrix with proper size. And D contains Dh and Dv,
which are the Toeplitz matrices from the discrete gradient operators with forward
difference. We note that, for convenience, the operations DX and DT X represent
reshape(Dx) and reshape(DT x).
Step 3: Dropping the terms unrelated to G leads to the following optimization
problem.

The closed form solution of above can be easily obtained by performing the
shrinkage operation like:

Step 4: The updating of Z and µ can be done via

29
3) Post-Processing

Step1: The refined illumination map T, gamma correction is applied on T, in order


to improve the visual perception of the result, as follows:

4) Denoising the image

Block-matching and 3-D filtering (BM3D)


In this method, the first step is the grouping of regions (2-D blocks of same size) of
the image based on their similarity with respect to a reference block.Denoising is
performed by a transform-domain shrinkage such as Wiener filtering, after which
this transform is inverted to reproduce the denoised blocks .The overlapping blocks
are weight-averaged before replacement.The image is converted from RGB color
space to the YUV colors pace and BM3D is applied only on the Y channel. In order
to compensate for the non-uniformity in the result, since the darker regions are
denoised at the cost of smoothening.

30
Algorithm for LIME method

INPUT:
Low-light image L, positive coefficients α, γ, ρ, and µ(0)
INITIALIZE:
k = 0, G(0) = Z (0) = 0 ∈ R2M×N Estimate the initial illumination map Tˆ on L
while k < k0 do
Update T(k+1);
Update G(k+1);
Update Z (k+1) and µ (k+1);
k=k+1;
end
Apply Gamma correction on T(k0) ;
Obtain R using Gamma corrected T(k0);
Denoise the result using BM3D ;
OUTPUT : Final enhanced result

3.3 Results and Discussioon

The performance of the proposed low-light image enhancement method was


evaluated using the Level of Error (LOE) metric. This metric quantifies the
difference in lightness order between the enhanced image and a reference, with lower
LOE values indicating better preservation of natural illumination. The experimental
results demonstrate that the modified LIME algorithm consistently produced lower
LOE values compared to traditional methods such as Histogram Equalization and
NPE. This confirms the effectiveness of the proposed structure-aware refinement
and illumination estimation. A lower LOE value signifies improved enhancement
quality, while higher values indicate visual inconsistency or over-enhancement.

31
32
33
Fig3.1 Results of comparision between original image and enhanced image

34
3.4 Summary

This method is an efficient and effective method to enhance low-light images.


The key to the low-light enhancement is how well the illumination map is estimated.
The structure-aware smoothing model has been developed to improve the
illumination consistency. We have designed two algorithms: one can obtain the exact
optimal solution to the target problem, while the other alternatively solves the
approximate problem with significant saving of time. Moreover, the model is general
to different (structure) weighting strategies. The experimental results have revealed
the advance of the method compared with several state-of-the-art alternatives. It is
positive that the low-light image enhancement technique can feed many vision-based
applications, such as edge detection, feature matching.

35
CHAPTER 4

4.1 Experimental Setup

For our experiment, input images were resized and normalized to a consistent
resolution to ensure uniform processing. The dataset consists of low-light images in
BMP format, processed using a custom Python implementation of the modified
LIME algorithm. All experiments were conducted on a system with Intel Core i5
processor @2.40 GHz and 8 GB RAM, using Python 3.10 along with libraries such
as NumPy, SciPy, Matplotlib, and scikit-image. The enhancement process includes
estimation of the initial illumination map using the maximum RGB values, followed
by structure-aware refinement through iterative optimization involving Laplacian
vectors and directional derivatives. Gamma correction is applied to adjust
brightness, and the final enhanced image is obtained by normalizing the input with
the refined illumination map. A total of 10 iterations were performed per image. For
qualitative analysis, enhanced images were visually compared with their original
versions using side-by-side visualization. Quantitative evaluation was done using
the Level of Error (LOE) metric, where a lower LOE value indicates better
enhancement. The entire pipeline automates loading, processing, saving, and
comparing results, and outputs were stored for reproducibility and comparison.

4.2 Performance Metrics

To evaluate the naturalness of enhanced images, we employ the Lightness Order


Error (LOE) as the primary performance metric. This metric measures how well the
enhancement preserves the relative order of lightness in local areas, which reflects
the original light source direction and lightness variation.

36
Lightness Order Error (LOE)
The LOE is defined as:

Where:
 m is the number of pixels.
 Q(x) and Qr(x) are the maximum intensity values among R, G, and B channels
at location x in the enhanced and reference images, respectively.
 U(p,q) is a step function:

 ⊕ denotes the exclusive-or (XOR) operation

A lower LOE value indicates better preservation of the natural lightness ordering,
meaning the enhancement method more effectively maintains the natural appearance
of the image.
To further substantiate the qualitative visual improvements, LOE values are
computed for each enhanced image. These values are then used as a comparative
measure between different enhancement methods. The images that yield lower LOE
scores tend to maintain the relative brightness structure of the original scenes,
preserving critical perceptual cues such as shading, contrast, and lighting direction.
This quantitative metric helps in objectively ranking enhancement results, especially
when visual judgment is subjective or inconsistent. For instance, when multiple
algorithms produce visually similar results, LOE serves as a decisive indicator of
which method better preserves the inherent illumination relationships.

37
In practice, enhanced images with LOE scores below a threshold (e.g., < 2.0) are
typically perceived as visually natural and consistent with human perception.
Conversely, higher LOE values signify that the enhancement process has altered the
original lightness relationships, leading to artifacts or unnatural appearance.

4.3 Experimental Results

SL No. Image Name Loe Value


1 Buildings at Thunder 1.7334
2 Building at moon light 1.3211
3 Flower 1.3849
4 Road 1.4667
5 Street 1.5998
6 Star 1.2789
7 Robot 1.5682
8 Cube 1.3421

Table 4.1, Loe values of Each image.

The Lightness Order Error (LOE) values displayed below each image represent the
degree to which the enhanced image preserves the natural lightness ordering
compared to the original. A lower LOE value indicates better preservation of visual
realism and natural lighting. As seen, the images with smaller LOE scores (e.g.,
1.2543) demonstrate more natural enhancement results, while higher values reflect
greater deviation. These quantitative scores support visual evaluation by objectively
measuring enhancement performance across different methods or algorithms.

38
CHAPTER 5
CONCLUSION

In this study, a modified implementation of the LIME algorithm was done to enhance
low-light images effectively. By incorporating structure-aware illumination
refinement and gamma correction, the method successfully improves visibility in
dark regions while preserving important image details. Experimental results
demonstrate that the proposed approach achieves visually appealing enhancement
and lower error levels compared to traditional methods. The use of a physically
interpretable model ensures computational efficiency and practical applicability.
Overall, the method provides a balanced solution for real-world low-light image
enhancement tasks and serves as a reliable foundation for further improvements and
integration into vision-based applications.

39
REFERENCES

[1] D. Oneata, J. Revaud, J. Verbeek, and C. Schmid, “Spatio-temporal object


detection proposals,” in ECCV, pp. 737–752, 2014.
[2] K. Zhang, L. Zhang, and M. Yang, “Real-time compressive tracking,” in
ECCV, pp. 866–879, 2014.
[3] E. Pisano, S. Zong, B. Hemminger, M. DeLuce, J. Maria, E. Johnston, K.
Muller, P. Braeuning, and S. Pizer, “Contrast limited adaptive histogram
equalization image processing to improve the detection of simulated
spiculations in dense mammograms,” Journal of Digital Imaging, vol. 11, no.
4, pp. 193–200, 1998.
[4] H. Cheng and X. Shi, “A simple and effective histogram equalization
approach to image enhancement,” Digital Signal Processing, vol. 14, no. 2,
pp. 158–170, 2004.
[5] M. Abdullah-Al-Wadud, M. Kabir, M. Dewan, and O. Chae, “A dynamic
histograme equalization for image contrast enhancement,” IEEE Trans. on
Consumer Electronics, vol. 53, no. 2, pp. 593–600, 2007.
[6] T. Celik and T. Tjahjadi, “Contextual and variational contrast enhancement,”
TIP, vol. 20, no. 12, pp. 3431–3441, 2011.
[7] C. Lee and C. Kim, “Contrast enhancement based on layered difference
representation,” TIP, vol. 22, no. 12, pp. 5372–5384, 2013.
[8] E. Land, “The retinex theory of color vision,” Scientific American, vol. 237,
no. 6, pp. 108–128, 1977.
[9] D. Jobson, Z. Rahman, and G. Woodell, “Properties and performance of a
center/surround retinex,” TIP, vol. 6, no. 3, pp. 451–462, 1996.

40
[10] D. Jobson, Z. Rahman, and G. Woodell, “A multi-scale retinex for
bridging the gap between color images and the human observation of scenes,”
TIP, vol. 6, no. 7, pp. 965–976, 1997.
[11] S. Wang, J. Zheng, H. Hu, and B. Li, “Naturalness preserved
enhancement algorithm for non-uniform illumination images,” TIP, vol. 22,
no. 9, pp. 3538–3578, 2013.
[12] X. Fu, D. Zeng, Y. Huang, Y. Liao, X. Ding, and J. Paisley, “A fusion-
based enhancing method for weakly illuminated images,” Signal Processing,
vol. 129, pp. 82–96, 2016.
[13] X. Fu, D. Zeng, Y. Huang, X. Zhang, and X. Ding, “A weighted
variaional model for simultaneous reflectance and illumination estimation,”
in CVPR, pp. 2782–2790, 2016.
[14] X. Dong, G. Wang, Y. Pang, W. Li, J. Wen, W. Meng, and Y. Lu, “Fast
efficient algorithm for enhancement of low lighting video,” in ICME, pp. 1–
6, 2011.
[15] L.Li,R.Wang, W. Wang, and W. Gao, “A low-light image enhancement
method for both denoising and contrast enlarging,” in ICIP, pp. 3730– 3734,
2015.
[16] R. Grosse, M. Johnson, E. Adelson, and W. Freeman, “Ground-truth
dataset and baseline evaluations for intrinsic image algorithms,” in ICCV, pp.
2335–2342, 2009.
[17] P. Gehler, C. Rother, M. Kiefel, L. Zhang, and B. Scholkopf, “Recover-
¨ ing intrinsic images with a prior on reflectance,” in NIPS, pp. 765–773, 2011.

41

You might also like