CN119295359A - Image enhancement method, image enhancement device, electronic device, and storage medium - Google Patents

Image enhancement method, image enhancement device, electronic device, and storage medium

Info

Publication number
CN119295359A
CN119295359A
Authority
CN
China
Prior art keywords
sample
enhancement
image
map
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411530817.XA
Other languages
Chinese (zh)
Other versions
CN119295359B (en)
Inventor
时勇杰 (Shi Yongjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority application: CN202411530817.XA
Publication of CN119295359A
Application granted; publication of CN119295359B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/90 - Dynamic range modification of images or parts thereof
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 - Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 - Insurance
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/20 - Image enhancement or restoration using local operators
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 - Indexing scheme for image data processing or generation, in general involving image mosaicing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Technology Law (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application provides an image enhancement method, an image enhancement device, an electronic device, and a storage medium, belonging to the technical field of image processing and applicable to the field of financial technology. The method comprises: performing feature extraction on a sample original illumination map based on a feature extraction layer of a preset enhancement model to obtain sample low-illumination image features; enhancing the sample original illumination map to obtain a sample brightness enhancement map and a sample contrast enhancement map; encoding the sample brightness enhancement map and the sample contrast enhancement map based on an image encoding layer of the preset enhancement model to obtain sample enhancement features; decoding the sample enhancement features and the sample low-illumination image features based on a decoding layer of the preset enhancement model to obtain a sample enhancement map; and determining a model loss value based on the sample original illumination map, a sample normal illumination map, and the sample enhancement map, so as to adjust the parameters of the preset enhancement model and obtain an image enhancement model for enhancing a target image. The embodiment of the application can improve the enhancement accuracy of low-illumination images.

Description

Image enhancement method, image enhancement device, electronic device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image enhancement method, an image enhancement apparatus, an electronic device, and a storage medium.
Background
Image enhancement refers to techniques that improve the visual effect of an image so that it is better suited to human observation or to machine analysis and processing. For example, in the insurance field of financial technology, when a claimant initiates an automated vehicle-damage claims process, the claimant can upload vehicle-damage images so that the automated claims system can judge the damage condition from those images and perform a claims assessment. However, under weak light or backlit conditions at night, low-illumination images captured by a mobile phone commonly suffer quality degradation such as underexposure, loss of detail, color distortion, or blurring, so the captured vehicle-damage images are of insufficient quality. Enhancing the uploaded damage images therefore helps ensure the accuracy of the damage assessment.
However, current image enhancement methods can enhance only certain noise-free low-illumination images, and do so merely by increasing image contrast or brightness; severe noise and color distortion remain, so the enhancement effect is poor. How to improve the enhancement accuracy of low-illumination images while avoiding severe noise and color distortion has therefore become an urgent technical problem.
Disclosure of Invention
The embodiment of the application mainly aims to provide an image enhancement method, an image enhancement device, electronic equipment and a storage medium, aiming at improving the enhancement accuracy of a low-illumination image, avoiding serious noise and color distortion and improving the recognition performance of the enhanced image.
To achieve the above object, a first aspect of an embodiment of the present application provides an image enhancement method, including:
acquiring sample data, wherein the sample data comprises a sample original illumination map and a sample normal illumination map, the sample original illumination map being an image containing a sample object acquired under low illumination, and the sample normal illumination map being an image containing the sample object acquired under normal illumination;
performing feature extraction on the sample original illumination map based on a feature extraction layer of a preset enhancement model to obtain sample low-illumination image features;
performing image gray scale attribute enhancement on the sample original illumination map to obtain a sample brightness enhancement map and a sample contrast enhancement map;
performing image encoding on the sample brightness enhancement map and the sample contrast enhancement map based on an image encoding layer of the preset enhancement model to obtain sample enhancement features;
performing image decoding on the sample enhancement features and the sample low-illumination image features based on a decoding layer of the preset enhancement model to obtain a sample enhancement map;
determining a model loss value based on the sample original illumination map, the sample normal illumination map, and the sample enhancement map, and performing parameter adjustment on the preset enhancement model based on the model loss value to obtain an image enhancement model;
and performing image enhancement on a target image based on the image enhancement model to obtain a target enhanced image.
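For illustration only, the image gray scale attribute enhancement step above could be realized with simple per-pixel operations such as gamma correction (for brightness) and a linear stretch about a pivot (for contrast); the specific operators and parameter values below are assumptions, not taken from the patent:

```python
def brightness_enhance(pixels, gamma=0.6):
    """Gamma correction: gamma < 1 brightens dark pixels (values in [0, 1])."""
    return [p ** gamma for p in pixels]

def contrast_enhance(pixels, factor=1.5, pivot=0.5):
    """Linear contrast stretch about a pivot, clipped back to [0, 1]."""
    return [min(1.0, max(0.0, pivot + factor * (p - pivot))) for p in pixels]

# A row of low-illumination pixel intensities (illustrative values).
row = [0.04, 0.09, 0.16, 0.25]
bright_map = brightness_enhance(row)    # stands in for the sample brightness enhancement map
contrast_map = contrast_enhance(row)    # stands in for the sample contrast enhancement map
```

In a full pipeline these two maps would be computed per channel over the whole image before being passed to the encoding layer.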
In some embodiments, the performing feature extraction on the sample original illumination map based on the feature extraction layer of the preset enhancement model to obtain sample low-illumination image features includes:
performing image decomposition on the sample original illumination map to obtain a sample reflection map and a sample illumination map;
and performing feature extraction on the sample reflection map and the sample original illumination map based on the feature extraction layer to obtain the sample low-illumination image features.
In some embodiments, the determining a model loss value based on the sample original illumination map, the sample normal illumination map, and the sample enhancement map includes:
performing visual characteristic loss calculation based on the sample normal illumination map and the sample enhancement map to obtain a visual characteristic loss value;
performing image decomposition loss calculation based on the sample original illumination map, the sample reflection map, and the sample illumination map to obtain an image decomposition loss value;
performing image decomposition on the sample enhancement map to obtain a sample enhanced reflection map;
performing image gray scale attribute loss calculation based on the sample enhanced reflection map, the sample enhancement map, and the sample normal illumination map to obtain an image gray scale attribute loss value;
and performing a weighted calculation on the visual characteristic loss value, the image decomposition loss value, and the image gray scale attribute loss value to obtain the model loss value.
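For illustration only, the final weighting step described above might combine the three partial loss values as a weighted sum; the weight values here are assumptions, not taken from the patent:

```python
def model_loss(visual_loss, decomposition_loss, gray_attribute_loss,
               weights=(1.0, 0.5, 0.5)):
    """Weighted combination of the three partial losses.

    The relative weights are illustrative; in practice they would be
    tuned as hyperparameters of the training procedure.
    """
    w_visual, w_decomp, w_gray = weights
    return (w_visual * visual_loss
            + w_decomp * decomposition_loss
            + w_gray * gray_attribute_loss)
```

During training, this scalar would be backpropagated to adjust the parameters of the preset enhancement model.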
In some embodiments, the performing visual characteristic loss calculation based on the sample normal illumination map and the sample enhancement map to obtain a visual characteristic loss value includes:
performing color loss calculation on the sample normal illumination map and the sample enhancement map based on a preset color loss function to obtain an image color loss value;
performing texture loss calculation on the sample normal illumination map and the sample enhancement map based on a preset texture loss function to obtain an image texture loss value;
performing content loss calculation on the sample normal illumination map and the sample enhancement map based on a preset content loss function to obtain an image content loss value;
and performing a weighted sum calculation on the image color loss value, the image texture loss value, and the image content loss value to obtain the visual characteristic loss value.
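As a hedged sketch of the weighted sum of color, texture, and content losses described above: the patent does not specify the individual loss functions, so the `l1_loss` stand-in (a mean absolute difference that could serve as a simple color loss) and the unit weights below are illustrative assumptions:

```python
def l1_loss(pred, target):
    """Mean absolute difference between two flattened pixel sequences;
    a simple stand-in for a preset color loss function."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def visual_characteristic_loss(color_loss, texture_loss, content_loss,
                               weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the color, texture, and content loss values."""
    return (weights[0] * color_loss
            + weights[1] * texture_loss
            + weights[2] * content_loss)
```

In practice the texture and content losses would typically compare gradient maps and deep-network feature maps respectively, but any three scalars combine the same way.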
In some embodiments, the performing image gray scale attribute loss calculation based on the sample enhanced reflection map, the sample enhancement map, and the sample normal illumination map to obtain an image gray scale attribute loss value includes:
performing color enhancement loss calculation on the sample enhanced reflection map and the sample normal illumination map based on preset three-channel weights and a preset brightness value to obtain a color enhancement loss value, wherein the preset brightness value indicates the degree of enhancement of the brightness attribute;
performing image contrast loss calculation based on the sample enhancement map and the sample normal illumination map to obtain an image contrast loss value;
and performing a weighted sum calculation on the color enhancement loss value and the image contrast loss value to obtain the image gray scale attribute loss value.
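The patent does not give the exact formula for the color enhancement loss. As one plausible, purely illustrative reading, the preset three-channel weights could combine R, G, and B into a luminance value whose deviation from the preset brightness value is penalized; the luminance weights (0.299, 0.587, 0.114) and the target of 0.6 below are assumptions:

```python
def color_enhancement_loss(rgb_pixels, target_brightness=0.6,
                           channel_weights=(0.299, 0.587, 0.114)):
    """Mean squared deviation of weighted luminance from a target brightness.

    Both the luminance weights and the target brightness are illustrative
    stand-ins for the patent's preset three-channel weights and preset
    brightness value.
    """
    total = 0.0
    for r, g, b in rgb_pixels:
        lum = (channel_weights[0] * r
               + channel_weights[1] * g
               + channel_weights[2] * b)
        total += (lum - target_brightness) ** 2
    return total / len(rgb_pixels)
```

A mid-gray image at the target brightness incurs zero loss, while an all-black image is penalized strongly, pushing the model toward the desired brightness level.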
In some embodiments, the performing image encoding on the sample brightness enhancement map and the sample contrast enhancement map based on the image encoding layer of the preset enhancement model to obtain sample enhancement features includes:
performing image encoding on the sample brightness enhancement map to obtain brightness enhancement features;
performing image encoding on the sample contrast enhancement map to obtain contrast enhancement features;
performing feature stitching on the brightness enhancement features and the contrast enhancement features to obtain sample stitching features;
and performing feature extraction on the sample stitching features to obtain the sample enhancement features.
In some embodiments, the performing feature stitching on the brightness enhancement features and the contrast enhancement features to obtain sample stitching features includes:
acquiring the detection precision of the sample target based on the target type of the sample target;
determining enhancement parameters based on the target type and the detection precision, the enhancement parameters including a brightness enhancement weight for the brightness enhancement features and a contrast enhancement weight for the contrast enhancement features;
and performing weighted feature stitching based on the brightness enhancement features, the brightness enhancement weight, the contrast enhancement features, and the contrast enhancement weight to obtain the sample stitching features.
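A minimal sketch of the weighted feature stitching described above; the feature vectors are plain Python lists here, whereas a real model would operate on tensors:

```python
def weighted_feature_stitch(brightness_feat, contrast_feat,
                            brightness_weight, contrast_weight):
    """Scale each feature vector by its enhancement weight, then concatenate."""
    return ([brightness_weight * v for v in brightness_feat]
            + [contrast_weight * v for v in contrast_feat])
```

The result has the combined length of the two inputs, so the downstream feature extraction layer sees both weighted views at once.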
To achieve the above object, a second aspect of an embodiment of the present application proposes an image enhancement device, including:
the acquisition module, which is used for acquiring sample data, wherein the sample data comprises a sample original illumination map and a sample normal illumination map, the sample original illumination map being an image containing a sample object acquired under low illumination, and the sample normal illumination map being an image containing the sample object acquired under normal illumination;
the extraction module, which is used for performing feature extraction on the sample original illumination map based on a feature extraction layer of a preset enhancement model to obtain sample low-illumination image features;
the attribute enhancement module, which is used for performing image gray scale attribute enhancement on the sample original illumination map to obtain a sample brightness enhancement map and a sample contrast enhancement map;
the encoding module, which is used for performing image encoding on the sample brightness enhancement map and the sample contrast enhancement map based on an image encoding layer of the preset enhancement model to obtain sample enhancement features;
the decoding module, which is used for performing image decoding on the sample enhancement features and the sample low-illumination image features based on a decoding layer of the preset enhancement model to obtain a sample enhancement map;
the training module, which is used for determining a model loss value based on the sample original illumination map, the sample normal illumination map, and the sample enhancement map, and performing parameter adjustment on the preset enhancement model based on the model loss value to obtain an image enhancement model;
and the image enhancement module, which is used for performing image enhancement on a target image based on the image enhancement model to obtain a target enhanced image.
To achieve the above object, a third aspect of the embodiments of the present application proposes an electronic device, including a memory storing a computer program and a processor implementing the method according to the first aspect when the processor executes the computer program.
To achieve the above object, a fourth aspect of the embodiments of the present application proposes a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of the first aspect.
According to the image enhancement method, the image enhancement device, the electronic device, and the storage medium of the embodiments of the application, sample data are first acquired, the sample data comprising a sample original illumination map and a sample normal illumination map, where the sample original illumination map is an image containing a sample object acquired under low illumination and the sample normal illumination map is an image containing the same sample object acquired under normal illumination. Feature extraction is then performed on the sample original illumination map based on a feature extraction layer of a preset enhancement model to obtain sample low-illumination image features, and image gray scale attribute enhancement is performed on the sample original illumination map to obtain a sample brightness enhancement map and a sample contrast enhancement map. Next, image encoding is performed on the sample brightness enhancement map and the sample contrast enhancement map based on an image encoding layer of the preset enhancement model to obtain sample enhancement features, and image decoding is performed on the sample enhancement features and the sample low-illumination image features based on a decoding layer of the preset enhancement model to obtain a sample enhancement map. A model loss value is then determined based on the sample original illumination map, the sample normal illumination map, and the sample enhancement map, and the parameters of the preset enhancement model are adjusted based on the model loss value to obtain an image enhancement model. Finally, image enhancement is performed on a target image based on the image enhancement model to obtain a target enhanced image.
In contrast to the related art, which can enhance only some noise-free low-illumination images and does so merely by increasing image contrast or brightness, the embodiments of the application consider both the brightness enhancement map and the contrast enhancement map when enhancing an image, so the detailed brightness and contrast characteristics of the low-illumination image can be enhanced and learned, and the generated enhanced image retains both the essential attributes of the image and its light-dark contrast. The embodiments of the application can therefore improve the enhancement accuracy of low-illumination images, avoid severe noise and color distortion, and improve the recognition performance of enhanced images.
Drawings
FIG. 1 is a flowchart of an image enhancement method provided by an embodiment of the present application;
Fig. 2 is a flowchart of step S120 in fig. 1;
Fig. 3 is a flowchart of step S140 in fig. 1;
fig. 4 is a flowchart of step S330 in fig. 3;
Fig. 5 is a flowchart of step S160 in fig. 1;
Fig. 6 is a flowchart of step S510 in fig. 5;
fig. 7 is a flowchart of step S540 in fig. 5;
FIG. 8 is a schematic flow chart of image enhancement based on an image enhancement model according to an embodiment of the present application;
Fig. 9 is a schematic structural view of an image enhancement device according to an embodiment of the present application;
fig. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
First, several terms used in the present application are explained:
Artificial intelligence (AI) is a branch of computer science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. It attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence, covering fields such as robotics, speech recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is also the theory, method, technique, and application system of using a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Image decomposition refers to decomposing an image into several components or parts that describe different characteristics or properties of the image. Its purpose is to represent the information of the original image in a form that is easier to process or analyze. Types of image decomposition include: color space decomposition (decomposing the pixel values of the image into different color channels according to a color space such as RGB, HSV, or Lab); frequency domain decomposition (decomposing the image into different frequency components, e.g., using the Fourier transform); spatial domain decomposition (decomposing the image into components at different spatial scales, such as sub-band images of different resolutions via the wavelet transform); and illumination and reflectance decomposition (decomposing the image into an illumination component and a reflectance component, as done by the Retinex algorithm, where the illumination component describes the illumination distribution in the image and the reflectance component describes the reflectance characteristics of object surfaces).
Retinex image decomposition decomposes an image into two main components: an illumination component and a reflection (reflectance) component. This decomposition simulates how the human eye perceives changes in illumination, with the aim of improving the brightness and contrast of the image, particularly under low-light conditions.
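A minimal, illustrative Retinex-style decomposition (not the patent's implementation): estimate the illumination component by smoothing the image, then obtain the reflectance by dividing the image by that estimate. The box-filter illumination estimate and the tiny example values are assumptions made for brevity:

```python
def box_blur(img, radius=1):
    """Crude illumination estimate: average each pixel's neighborhood.
    (A Gaussian blur is more typical; a box filter keeps the sketch short.)"""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def retinex_decompose(img, eps=1e-6):
    """Split img into an illumination map L and a reflectance map R,
    so that img is approximately R * L element-wise."""
    illumination = box_blur(img)
    reflectance = [[p / (l + eps) for p, l in zip(img_row, ill_row)]
                   for img_row, ill_row in zip(img, illumination)]
    return illumination, reflectance
```

Multiplying the two outputs back together recovers the input (up to the small `eps` used to avoid division by zero), which is the defining property of the decomposition.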
The Gaussian blur operator is a common blurring technique in image processing and computer vision. It uses a Gaussian function to reduce image noise and detail, thereby blurring the image: the image is smoothed by taking a weighted average of the pixel values in each pixel's neighborhood, with the weights determined by a Gaussian distribution.
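The one-dimensional case of Gaussian blurring can be sketched as follows; because the Gaussian kernel is separable, a 2D blur can apply this row pass followed by a column pass. The radius and sigma values are illustrative:

```python
import math

def gaussian_kernel(radius=1, sigma=1.0):
    """1D Gaussian weights, normalized to sum to 1."""
    weights = [math.exp(-(i * i) / (2.0 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def gaussian_blur_row(row, radius=1, sigma=1.0):
    """Blur a 1D signal by weighted-averaging each neighborhood,
    clamping indices at the borders."""
    kernel = gaussian_kernel(radius, sigma)
    n = len(row)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at borders
            acc += w * row[idx]
        out.append(acc)
    return out
```

Note that a constant signal passes through unchanged (the weights sum to 1), while sharp transitions are smoothed, which is exactly the noise-and-detail reduction described above.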
Image enhancement refers to techniques that improve the visual effect of an image so that it is better suited to human observation or to machine analysis and processing. For example, in the insurance field of financial technology, when a claimant initiates an automated vehicle-damage claims process, the claimant can upload vehicle-damage images so that the automated claims system can judge the damage condition from those images and perform a claims assessment. However, under weak light or backlit conditions at night, low-illumination images captured by a mobile phone commonly suffer quality degradation such as underexposure, loss of detail, color distortion, or blurring, so the captured vehicle-damage images are of insufficient quality. Enhancing the uploaded damage images therefore helps ensure the accuracy of the damage assessment.
However, current image enhancement methods can enhance only certain noise-free low-illumination images, and do so merely by increasing image contrast or brightness; severe noise and color distortion remain, so the enhancement effect is poor. How to improve the enhancement accuracy of low-illumination images while avoiding severe noise and color distortion has therefore become an urgent technical problem.
Based on the above, the embodiment of the application provides an image enhancement method, an image enhancement device, electronic equipment and a storage medium, which aim to improve the accuracy of image enhancement of a low-illumination image, avoid serious noise and color distortion, and further improve the recognition performance of the enhanced image.
The embodiments of the present application can acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique, and application system of using a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The embodiment of the application provides an image enhancement method, which relates to the technical field of artificial intelligence. The image enhancement method provided by the embodiment of the application may be applied to a terminal or a server, or may be software running on a terminal or server. In some embodiments, the terminal may be a smartphone, a tablet computer, a notebook computer, a desktop computer, or the like; the server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms; and the software may be an application implementing the image enhancement method, but is not limited to the above forms.
The application is operational with numerous general purpose or special purpose computer system environments or configurations. Such as a personal computer, a server computer, a hand-held or portable device, a tablet device, a multiprocessor system, a microprocessor-based system, a set top box, a programmable consumer electronics, a network PC, a minicomputer, a mainframe computer, a distributed computing environment that includes any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In the embodiments of the present application, when related processing is performed according to data related to the identity or characteristics of the object, such as object feedback data, object article data, etc., permission or agreement of the object is obtained first, and the collection, use, processing, etc. of the data complies with related laws and regulations and standards. In addition, when the embodiment of the application needs to acquire the sensitive personal information of the object, the independent permission or independent consent of the object is acquired through a popup window or a jump to a confirmation page and the like, and after the independent permission or independent consent of the object is explicitly acquired, the necessary user related data for enabling the embodiment of the application to normally operate is acquired.
Referring to fig. 1, fig. 1 is an optional flowchart of an image enhancement method according to an embodiment of the present application. In some embodiments of the present application, the method in fig. 1 may specifically include, but is not limited to, steps S110 to S170.
Step S110, acquiring sample data;
Step S120, performing feature extraction on the sample original illumination map based on a feature extraction layer of a preset enhancement model to obtain sample low-illumination image features;
Step S130, performing image gray scale attribute enhancement on the sample original illumination map to obtain a sample brightness enhancement map and a sample contrast enhancement map;
Step S140, performing image encoding on the sample brightness enhancement map and the sample contrast enhancement map based on an image encoding layer of the preset enhancement model to obtain sample enhancement features;
Step S150, performing image decoding on the sample enhancement features and the sample low-illumination image features based on a decoding layer of the preset enhancement model to obtain a sample enhancement map;
Step S160, determining a model loss value based on the sample original illumination map, the sample normal illumination map, and the sample enhancement map, and performing parameter adjustment on the preset enhancement model based on the model loss value to obtain an image enhancement model;
Step S170, performing image enhancement on a target image based on the image enhancement model to obtain a target enhanced image.
Compared with the prior art, which can only enhance noise-free low-illumination images by simply increasing the contrast or brightness of the image, steps S110 to S170 of the present application consider the brightness enhancement map and the contrast enhancement map of the image at the same time, so that the detail brightness and contrast characteristics of the low-illumination map can be enhanced, and the image is enhanced in combination with the backbone network. Thus, the generated enhanced image can not only keep the intrinsic properties of the image, but also keep the contrast of light and dark. Therefore, the embodiment of the application can improve the enhancement accuracy of the low-illumination image, avoid serious noise and color distortion, and improve the recognition performance of the enhanced image.
In step S110 of some embodiments, a training sample set is acquired, and the training sample set is used for training to obtain the image enhancement model. The training sample set comprises a plurality of sample data, and each sample data comprises a paired sample original illumination map and sample normal illumination map, wherein the sample original illumination map is an image containing a sample object collected under low illumination, and the sample normal illumination map is an image containing the same sample object collected under normal illumination. The sample normal illumination map and the sample original illumination map have the same image content and represent images photographed under different illumination conditions, and the sample normal illumination map can be used as a reference label of the sample original illumination map.
It should be noted that the sample original illumination map corresponds to an image collected under a low-light environment with insufficient light, such as at night, indoors, or under backlight conditions. Under low illumination, image acquisition may suffer from defects such as: increased image noise (the photosensitive element of the camera may raise its sensitivity, i.e., the ISO value, which leads to more image noise), underexposure, color distortion, reduced dynamic range (with insufficient light, the dynamic range of the image, i.e., the span between the brightest and darkest details, is reduced, resulting in overexposed highlights or lost shadow details), and difficulty in focusing (in a low-illumination environment, the autofocus system may have difficulty focusing accurately, resulting in image blur). Although a low-light image may be improved by using a larger aperture, a slower shutter speed, a higher ISO value, and the like, an image enhancement algorithm can significantly improve the visibility and quality of the image without increasing hardware costs.
In step S120 of some embodiments, in the model training process, the present application may input the original illumination map of the sample into a preset enhancement model, where the preset enhancement model may be a model constructed based on a convolutional neural network model, a machine learning model, and other structures, and is not limited specifically. The preset enhancement model may include a feature extraction layer, an image encoding layer, and a decoding layer. Further, a feature extraction layer in a preset enhancement model can be used for extracting features of the original illumination map of the sample taken under the low-illumination condition, and the obtained features of the low-illumination image of the sample can represent key information in the original illumination map of the sample, such as textures, outlines and the like, so that the subsequent image enhancement process is facilitated.
Referring to fig. 2, fig. 2 is a specific flowchart of step S120 according to an embodiment of the present application. In some embodiments of the present application, step S120 may specifically include, but is not limited to, steps S210 to S220.
Step S210, performing image decomposition on a sample original illumination map to obtain a sample reflection map and a sample illumination map;
Step S220, carrying out feature extraction on the sample reflection map and the sample original illumination map based on the feature extraction layer to obtain sample low-illumination image features.
In step S210 of some embodiments, in order to better learn the detail features of the low-illumination image, the present application may perform image decomposition on the sample original illumination map to obtain a sample reflection map and a sample illumination map. The sample reflection map may reflect the color and material information of an image and remains relatively unchanged under different illumination conditions, so the sample reflection map is also called the native color map. The sample illumination map (also referred to as a luminance map, highlight map, etc.) may reflect the illumination of the image, i.e., the luminance distribution of the image.
It should be noted that, the image decomposition of the present application may use the Retinex image decomposition algorithm, that is, the image may be decomposed into two independent components, namely, a reflection component (reflection) and an illumination component (Illumination), so that the image in any scene may be represented as the product of the reflection component and the illumination component of the object. That is, the final color and brightness of the image may be determined by both the color (reflected component) of the object surface and the light (irradiated component) irradiated on the object. In addition, the present application may also employ other image decomposition algorithms, such as fourier transform (Fourier Transform), wavelet transform (Wavelet Transform), etc., without limitation.
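As an illustrative sketch (not the exact decomposition network of the present application), a minimal single-scale Retinex-style decomposition can be written as follows, assuming the illumination component is estimated from the per-pixel channel maximum:

```python
import numpy as np

def retinex_decompose(image, eps=1e-6):
    """Minimal Retinex-style decomposition sketch: estimate the
    illumination component I as the per-pixel maximum over the RGB
    channels, and the reflectance component R as the image divided
    by that illumination, so that S = R * I (up to eps)."""
    illumination = image.max(axis=2, keepdims=True)   # H x W x 1
    reflectance = image / (illumination + eps)        # H x W x 3
    return reflectance, illumination
```

Multiplying the two components back together recovers the original image, which is the same product constraint that the decomposition loss of step S520 enforces.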
In step S220 of some embodiments, further, feature extraction may be performed on the sample reflection map and the sample original illumination map based on the feature extraction layer, so as to obtain sample low-illumination image features. The sample low-illumination image features are extracted from a sample reflection diagram and a sample original illumination diagram, and comprise key information capable of representing textures, outlines and the like and detailed information such as color distribution and the like. Wherein the feature extraction layer is typically comprised of a series of convolution layers that enable the extraction of useful features from the image from which the system can better understand the content of the image and use this information to enhance the image in subsequent steps.
The specific process for obtaining the sample low-illumination image features comprises the steps of carrying out image coding on a sample reflection image to obtain reflection image coding features, carrying out image coding on the sample original illumination image to obtain original illumination image coding features, and carrying out feature fusion on the reflection image coding features and the original illumination image coding features to obtain the sample low-illumination image features. The feature fusion refers to adding feature graphs (feature maps) extracted from different images to retain image information of different layers and enhance semantic information and spatial details.
It should be noted that, in other embodiments, the present application may further perform feature extraction on the sample reflection map, the sample illumination map, and the sample original illumination map based on the feature extraction layer to obtain sample low-illumination image features, that is, the sample low-illumination image features at this time refer to image features fused with features of the sample reflection map, the sample illumination map, and the sample original illumination map, and the specific extraction process may refer to the above steps and will not be described in detail.
In step S130 of some embodiments, since it is difficult to accurately estimate scene brightness and contrast only for feature extraction of the sample original illumination map, the embodiments of the present application also perform image gray attribute enhancement on the sample original illumination map at the same time, so as to obtain a sample brightness enhancement map and a sample contrast enhancement map, and use the sample brightness enhancement map, the sample contrast enhancement map, the sample reflection map, the sample illumination map and the sample original illumination map together for image enhancement of the sample original illumination map.
It should be noted that the image gray scale attributes of the present application include brightness and contrast. Brightness describes how light or dark the image is, while contrast describes the difference between different colors or regions; they are perceived differently by humans and call for different enhancement methods. The image gray attribute enhancement of the present application includes adjustment of both brightness and contrast: the brightness enhancement map aims to improve the overall brightness of the image, and the contrast enhancement map aims to enhance the contrast between different areas in the image so that the details of the image are clearer. For example, image brightness enhancement may be performed on the sample original illumination map by histogram equalization to obtain the sample brightness enhancement map, and image contrast enhancement may be performed on the sample original illumination map by a linear transformation to obtain the sample contrast enhancement map. In addition, the present application may also enhance the image brightness of the illumination map in other ways, such as gamma correction or linear transformation, and may also enhance the image contrast of the illumination map in other ways, such as adaptive histogram equalization or multi-scale contrast enhancement, without specific limitation.
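For illustration, the histogram-equalization brightness enhancement and the linear-transform contrast enhancement mentioned above can be sketched in NumPy as follows (a minimal version that assumes a uint8 grayscale input with more than one intensity level; the model is not limited to these exact implementations):

```python
import numpy as np

def equalize_brightness(gray):
    """Brightness enhancement by histogram equalization on a uint8
    grayscale image: remap intensities through the normalized CDF."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

def stretch_contrast(gray, alpha=1.5, beta=0.0):
    """Contrast enhancement by the linear transform g = alpha*f + beta,
    clipped back to the valid [0, 255] range."""
    return np.clip(alpha * gray.astype(np.float32) + beta, 0, 255).astype(np.uint8)
```

The equalized output spreads the intensity distribution over the full range, while the linear transform scales differences between regions; these correspond to the sample brightness enhancement map and sample contrast enhancement map respectively.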
In step S140 of some embodiments, further, the present application may use an encoding layer of a preset enhancement model to respectively perform image encoding on the sample brightness enhancement map and the sample contrast enhancement map, so as to extract more significant sample enhancement features for further processing and enhancement of the image.
Referring to fig. 3, fig. 3 is a specific flowchart of step S140 according to an embodiment of the present application. In some embodiments of the present application, step S140 may specifically include, but is not limited to, steps S310 to S340.
Step S310, carrying out image coding on the sample brightness enhancement graph to obtain brightness enhancement characteristics;
step S320, image coding is carried out on the sample contrast enhancement graph, and contrast enhancement characteristics are obtained;
Step S330, performing feature stitching on the brightness enhancement features and the contrast enhancement features to obtain sample stitching features;
And step S340, extracting the characteristics of the sample splicing characteristics to obtain sample enhancement characteristics.
In steps S310-S340 of some embodiments, the purpose of image encoding is to convert image data into a set of features that can represent brightness, contrast information, etc. of an image. According to the application, the sample brightness enhancement map and the sample contrast enhancement map are respectively encoded, so that the visual differences between different areas in the image can be improved, the details are more obvious, and the encoding process converts the visual differences into digital characteristics, so that a basis is provided for subsequent analysis. Further, the brightness enhancement features and the contrast enhancement features are combined or stitched together. The concatenation may be a simple data level merge or an algorithm-based fusion, with the aim of integrating the information contained in both features into a unified feature representation. Finally, the spliced features are further extracted and processed, namely, more abstract and representative features can be learned from the sample spliced features through a machine learning algorithm (such as a deep learning model), so that sample enhancement features are obtained, and important information of the image after brightness and contrast enhancement can be better represented.
In the above embodiment, the images obtained after image gray attribute enhancement of the sample original illumination map are subjected to image coding, features capable of representing the image enhancement effect are extracted, and these features are used for subsequent image analysis or machine learning tasks, so that the accuracy of image enhancement of the low-illumination image can be improved, serious noise and color distortion are avoided, and the recognition performance of the enhanced image is improved.
Referring to fig. 4, fig. 4 is a specific flowchart of step S330 according to an embodiment of the present application. In some embodiments of the present application, step S330 may specifically include, but is not limited to, steps S410 to S430.
Step S410, acquiring the detection precision of a sample target based on the target type of the sample target;
Step S420, determining enhancement parameters based on the target type and the detection accuracy;
and step S430, performing feature weighted stitching based on the brightness enhancement features, the brightness enhancement weights, the contrast enhancement features and the contrast enhancement weights to obtain sample stitching features.
In step S410 of some embodiments, the sample original illumination map may include a sample target to be detected. For example, in the insurance field of financial technology, the sample original illumination map may be a vehicle damage image, and the sample target in the sample original illumination map may be the whole of the claim vehicle or local details of the claim vehicle (such as a vehicle door, a trunk, etc.). Therefore, the present application can acquire the detection accuracy of the target in the target detection task according to the target type of the sample target; for example, compared with the whole claim vehicle, a higher detection accuracy is required for the rearview mirror of the claim vehicle.
In step S420 of some embodiments, further, the present application may determine enhancement parameters including a brightness enhancement weight of the brightness enhancement feature and a contrast enhancement weight of the contrast enhancement feature based on the target type and the detection accuracy. These weights are used to adjust the degree of brightness and contrast enhancement to optimize image quality and improve accuracy of target detection. For example, if the accuracy of detection of a certain object type is low, the system may increase the weight of brightness or contrast enhancement in an effort to improve the detectability of the object.
In step S430 of some embodiments, further, the present application may combine the brightness enhancement features and the contrast enhancement features, and the weights corresponding to the brightness enhancement features and the contrast enhancement features, to perform feature weighted stitching. This means that the brightness enhancement features and the contrast enhancement features are not simply combined, but rather are integrated according to the corresponding weights to generate a more comprehensive sample stitching feature. In this way it can be ensured that in the final feature representation the contributions of the different enhancement features are balanced and optimized.
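Assuming the enhancement weights have already been determined in step S420, the weighted stitching of step S430 can be sketched as a per-feature scaling followed by channel-wise concatenation (a hypothetical minimal form; the model's actual fusion may be learned rather than fixed):

```python
import numpy as np

def weighted_stitch(brightness_feat, contrast_feat, w_brightness, w_contrast):
    """Scale each enhancement feature by its enhancement weight, then
    concatenate along the channel (last) axis to form the sample
    stitching feature."""
    return np.concatenate(
        [w_brightness * brightness_feat, w_contrast * contrast_feat], axis=-1)
```

The channel dimension of the output is the sum of the two inputs' channel dimensions, so no information is discarded; the weights only rebalance the contribution of each enhancement branch.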
In the above embodiment, the present application can dynamically adjust the image enhancement strategy according to the target type and the detection accuracy of the sample target in the sample original illumination map, so as to expect to improve the performance of target detection.
In step S150 of some embodiments, further, the present application may use the decoding layer of the preset enhancement model to combine the extracted sample enhancement features with the sample low-illumination image features for image decoding, so as to generate an enhanced sample enhancement map. This sample enhancement map simulates the image effect under normal illumination conditions.
Specifically, feature fusion may be performed on the sample enhancement features and the sample low-illumination image features to obtain sample target enhancement features, and image decoding may be performed on the sample target enhancement features based on the decoding layer to obtain the sample enhancement map.
It should be noted that, the decoding layer of the present application may be constructed based on the UNet network model, or may be constructed based on other CNN models, which is not limited.
In step S160 of some embodiments, further, a model loss value is determined based on the sample original illumination map, the sample normal illumination map and the sample enhancement map, and a parameter adjustment is performed on the preset enhancement model based on the model loss value, so as to obtain an image enhancement model. The model loss value can reflect the performance of a preset enhancement model, namely the difference between the current performance of the model and the ideal state, so as to optimize the performance of the model and obtain an image enhancement model capable of better improving the accuracy of image enhancement of a low-illumination image.
The sample reflection map and the sample illumination map of the application are obtained by normalizing an RGB pixel map obtained by decomposing an image of a sample original illumination map.
Referring to fig. 5, fig. 5 is a specific flowchart of step S160 according to an embodiment of the present application. In some embodiments of the present application, step S160 may specifically include, but is not limited to, steps S510 to S550.
Step S510, performing visual characteristic loss calculation based on the sample normal illumination map and the sample enhancement map to obtain a visual characteristic loss value;
Step S520, performing image decomposition loss calculation based on the sample original illumination map, the sample reflection map and the sample illumination map to obtain an image decomposition loss value;
Step S530, performing image decomposition on the sample enhancement map to obtain a sample enhancement reflection map;
step S540, performing image gray attribute loss calculation based on the sample enhanced reflection map, the sample enhanced map and the sample normal illumination map to obtain an image gray attribute loss value;
step S550, weighting and calculating the vision characteristic loss value, the image decomposition loss value and the image gray attribute loss value to obtain a model loss value.
In step S510 of some embodiments, to enhance the underexposed image, the present application may calculate a visual feature loss value using a composite loss function that includes three parts of content, texture, and color to preserve detailed parts in the image. And the composite loss function can be used as the loss function of the backbone network of the image enhancement model of the application.
Referring to fig. 6, fig. 6 is a specific flowchart of step S510 according to an embodiment of the present application. In some embodiments of the present application, step S510 may specifically include, but is not limited to, steps S610 to S640.
Step S610, performing color loss calculation on the sample normal illumination map and the sample enhancement map based on a preset color loss function to obtain an image color loss value;
Step S620, performing texture loss calculation on the sample normal illumination map and the sample enhancement map based on a preset texture loss function to obtain an image texture loss value;
Step S630, carrying out content loss calculation on the sample normal illumination map and the sample enhancement map based on a preset content loss function to obtain an image content loss value;
in step S640, the image color loss value, the image texture loss value, and the image content loss value are weighted and calculated to obtain the visual characteristic loss value.
In step S610 of some embodiments, the present application may perform color loss calculation on the sample normal illumination map and the sample enhancement map based on the preset color loss function to obtain an image color loss value. The specific process of calculating the image color loss value may be as shown in the following formula 1:

$$L_{color} = \left\| G(\hat{Y}) - G(Y) \right\|_2^2 \quad \text{(formula 1)}$$

In formula 1, $L_{color}$ represents the image color loss value, $\hat{Y}$ represents the sample enhancement map, $Y$ represents the sample normal illumination map, and $G(\cdot)$ represents a Gaussian blur operator.
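A minimal NumPy sketch of a Gaussian-blur-based color loss of this kind follows (the blur sigma and radius are illustrative assumptions; blurring first means only low-frequency color/brightness differences are penalized, not fine texture):

```python
import numpy as np

def gaussian_blur(x, sigma=1.0, radius=2):
    """Separable Gaussian blur of a 2-D array with a small 1-D kernel."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    blur_rows = np.apply_along_axis(np.convolve, 1, x, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, blur_rows, k, mode="same")

def color_loss(pred, target):
    """L2 distance between the Gaussian-blurred prediction and target."""
    return float(np.linalg.norm(gaussian_blur(pred) - gaussian_blur(target)))
```
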
In step S620 of some embodiments, at the same time, the present application may perform texture loss calculation on the sample normal illumination map and the sample enhancement map based on the preset texture loss function to obtain an image texture loss value. The specific process of calculating the image texture loss value may be as shown in the following formula 2:

$$L_{texture} = -\log D(Y) - \log\left(1 - D(\hat{Y})\right) \quad \text{(formula 2)}$$

In formula 2, $L_{texture}$ represents the image texture loss value, $D(\cdot)$ represents the discrimination convolutional neural network (Convolutional Neural Network, CNN), which may be the last layer of the decoding layer of the present application or a separately provided network structure, without limitation; $\hat{Y}$ represents the sample enhancement map, and $Y$ represents the sample normal illumination map.
In step S630 of some embodiments, at the same time, the present application may perform content loss calculation on the sample normal illumination map and the sample enhancement map based on the preset content loss function to obtain an image content loss value. The specific process of calculating the image content loss value may be as shown in the following formula 3:

$$L_{content} = \left\| \hat{Y} - Y \right\|_2 \quad \text{(formula 3)}$$

In formula 3, $L_{content}$ represents the image content loss value, and $\|\cdot\|_2$ represents the L2 norm of a vector (in mathematics, a norm is a way to measure the size of a vector; different norms give different "lengths" or "sizes" of a vector; the L2 norm, also called the Euclidean norm, is defined as the square root of the sum of the squares of the vector elements).
In step S640 of some embodiments, further, the image color loss value, the image texture loss value, and the image content loss value may be weighted and summed to obtain the visual feature loss value, so the specific process of calculating the visual feature loss value may be as shown in the following formula 4:

$$L_1 = \alpha_1 L_{content} + \alpha_2 L_{texture} + \alpha_3 L_{color} \quad \text{(formula 4)}$$

In formula 4, $\alpha_1$ represents the weight of the image content loss value $L_{content}$, $\alpha_2$ represents the weight of the image texture loss value $L_{texture}$, $\alpha_3$ represents the weight of the image color loss value $L_{color}$, and $L_1$ represents the visual feature loss value. For example, $\alpha_1$ is 1, $\alpha_2$ is 0.4, and $\alpha_3$ is 0.1; these weights represent the extent to which the corresponding loss value contributes to the visual feature loss value.
In step S520 of some embodiments, the product of the sample reflection map and the sample illumination map obtained by image decomposition (for example, when Retinex decomposition is implemented as a deep-learning-based process) is not identical to the sample original illumination map. Therefore, the present application may also calculate the image decomposition loss based on the sample original illumination map, the sample reflection map, and the sample illumination map to obtain the image decomposition loss value. The specific process of calculating the image decomposition loss value may be as shown in the following formula 5:

$$L_2 = \lambda_1 \left\| R \circ I - S \right\|_1 + \lambda_2 \left\| \nabla I \circ \exp\!\left(-\lambda_3 \nabla R\right) \right\|_1 + \lambda_4 \left\| \nabla R \right\|_1 \quad \text{(formula 5)}$$

In formula 5, $L_2$ represents the image decomposition loss value, $S$ represents the sample original illumination map, $R$ represents the sample reflection map, $I$ represents the sample illumination map, $\nabla$ represents the gradient (including the horizontal and vertical directions), $\exp(\cdot)$ represents a natural exponential function, $\|\cdot\|_1$ represents the L1 norm of a vector (also referred to as the Manhattan distance, i.e., the sum of the absolute values of the components of the vector), and $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are weight parameters representing the degree of contribution of the corresponding term to the image decomposition loss value; they can be flexibly adjusted according to actual needs and are not specifically limited.
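A sketch of such a decomposition loss, under an assumed RetinexNet-style form (single-channel maps for brevity; the exact term grouping and weight placement are assumptions), might be:

```python
import numpy as np

def decomposition_loss(s, r, i, lambdas=(1.0, 0.1, 10.0, 0.01)):
    """Assumed RetinexNet-style decomposition loss over 2-D maps: an L1
    reconstruction term (R*I should reproduce S), an illumination-
    smoothness term damped where the reflectance has strong gradients,
    and a small reflectance-gradient term."""
    l1, l2, l3, l4 = lambdas
    recon = np.abs(r * i - s).mean()
    gi_y, gi_x = np.gradient(i)
    gr_y, gr_x = np.gradient(r)
    smooth = (np.abs(gi_y) * np.exp(-l3 * np.abs(gr_y)) +
              np.abs(gi_x) * np.exp(-l3 * np.abs(gr_x))).mean()
    refl_grad = (np.abs(gr_y) + np.abs(gr_x)).mean()
    return l1 * recon + l2 * smooth + l4 * refl_grad
```

When the product of the reflection and illumination maps exactly reproduces the original map and both components are smooth, the loss is zero; any reconstruction error or spurious gradient increases it.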
In step S530 of some embodiments, further, the present application may decompose the enhanced image to obtain a sample enhanced reflection map. The image decomposition method adopted for the decomposition is already described in the above embodiments, and will not be described again. This step helps to further analyze and evaluate the quality of the enhanced image, as well as the effect of the enhancement process on the reflective component of the image.
In step S540 of some embodiments, the present application may further perform image gray attribute loss calculation based on the sample enhanced reflection map, the sample enhanced map, and the sample normal illumination map, to obtain an image gray attribute loss value, so as to normalize brightness and contrast of the enhanced image.
Referring to fig. 7, fig. 7 is a specific flowchart of step S540 according to an embodiment of the present application. In some embodiments of the present application, step S540 may specifically include, but is not limited to, steps S710 to S730.
Step S710, performing color enhancement loss calculation on the sample enhancement reflection map and the sample normal illumination map based on the preset three-channel weight and the preset brightness value to obtain a color enhancement loss value;
step S720, performing image contrast loss calculation based on the sample enhancement map and the sample normal illumination map to obtain an image contrast loss value;
in step S730, the color enhancement loss value and the image contrast loss value are weighted and calculated to obtain the image gray attribute loss value.
In step S710 of some embodiments, the preset three-channel weight refers to a preset enhancement weight corresponding to the RGB-based brightness attribute, and the preset brightness value is used to indicate the enhancement degree of the brightness attribute. Based on this, the specific process of calculating the color enhancement loss value may be as shown in the following formulas 6 and 7:

$$L_{ce} = \left| G(\hat{R}) - G(Y_R) \right| \quad \text{(formula 6)}$$

$$G(X1) = \frac{1}{n} \sum_{i=1}^{n} \frac{\beta_1 \, X1_{iR}^{\,t} + \beta_2 \, X1_{iG}^{\,t} + \beta_3 \, X1_{iB}^{\,t}}{K} \quad \text{(formula 7)}$$

In formula 6, $L_{ce}$ represents the color enhancement loss value, $\hat{R}$ represents the sample enhanced reflection map, and $Y_R$ represents the RGB channel image of the sample normal illumination map. In formula 7, $X1$ represents the input parameter of the function $G$, $i$ represents the pixel number in the input parameter, $n$ represents the total number of pixels of the input parameter, $t$ represents the preset brightness value, $X1_{iR}$, $X1_{iG}$, and $X1_{iB}$ represent the R, G, and B channel parameters of pixel $i$, and $\beta_1$, $\beta_2$, $\beta_3$ represent the weight corresponding to each channel parameter, which can be flexibly adjusted as required; for example, $\beta_1$ is 1, $\beta_2$ is 1.5, and $\beta_3$ is 0.6. $K$ is a normalization constant determined from the three weights $\beta_1$, $\beta_2$, $\beta_3$; with these settings and $t$ of 2.2, $G(X1)$ represents a gamma-adjusted weighted average of the three RGB channels.
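Under an assumed reading of formula 7 (the gamma placement and the normalization $K = \beta_1 + \beta_2 + \beta_3$ are illustrative assumptions), the weighted-channel brightness measure and the resulting color enhancement loss can be sketched as:

```python
import numpy as np

def weighted_brightness(img, betas=(1.0, 1.5, 0.6), t=2.2):
    """G(X): gamma-adjusted, channel-weighted mean brightness of an
    H x W x 3 RGB image in [0, 1]; K normalizes the channel weights."""
    b1, b2, b3 = betas
    k = b1 + b2 + b3
    weighted = (b1 * img[..., 0] + b2 * img[..., 1] + b3 * img[..., 2]) / k
    return float((weighted ** t).mean())

def color_enhancement_loss(enhanced_reflection, normal_rgb):
    """L_ce: absolute difference between the two brightness measures."""
    return abs(weighted_brightness(enhanced_reflection)
               - weighted_brightness(normal_rgb))
```
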
In step S720 of some embodiments, at the same time, the present application may perform image contrast loss calculation based on the sample enhancement map and the sample normal illumination map to obtain an image contrast loss value. The specific process of calculating the image contrast loss value may be as shown in the following formula 8:

$$L_{ct} = \left| F(\hat{Y}) - F(Y) \right|, \qquad F(X2) = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{k} \left| X2_i - X2_j \right| \quad \text{(formula 8)}$$

In formula 8, $L_{ct}$ represents the image contrast loss value, $\hat{Y}$ represents the sample enhancement map, $Y$ represents the sample normal illumination map, and $F(X2)$ is the sum of the color differences between each pixel and its neighbors, where $X2$ represents the input parameter of the function $F$, $k$ represents the number of neighbors of pixel $i$ in feature $X2$, $n$ represents the total number of pixels of the input parameter, $X2_i$ represents the $i$-th pixel in feature $X2$, and $X2_j$ represents the $j$-th neighbor of pixel $i$ in feature $X2$.
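The neighbor-difference contrast measure $F$ and the resulting contrast loss can be sketched with 4-connected neighbors (a minimal assumed form operating on a 2-D map):

```python
import numpy as np

def neighbor_contrast(x):
    """F(X): mean over pixels of the summed absolute differences to the
    4-connected neighbors; each adjacent pair is counted once for each
    of its two pixels, hence the factor of 2."""
    dy = np.abs(np.diff(x, axis=0)).sum()
    dx = np.abs(np.diff(x, axis=1)).sum()
    return float(2 * (dy + dx) / x.size)

def contrast_loss(pred, target):
    """L_ct: absolute difference between the two contrast measures."""
    return abs(neighbor_contrast(pred) - neighbor_contrast(target))
```

A perfectly flat image has zero neighbor contrast, while a checkerboard maximizes it; matching this measure between the enhancement map and the normal illumination map keeps the enhanced image's local contrast realistic.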
In step S730 of some embodiments, after the color enhancement loss value and the image contrast loss value are obtained, the two loss values may be weighted and summed, so the specific process of obtaining the image gray attribute loss value may be as shown in the following formula 9:

$$L_3 = \gamma_1 L_{ce} + \gamma_2 L_{ct} \quad \text{(formula 9)}$$

In formula 9, $L_3$ represents the image gray attribute loss value, $\gamma_1$ represents the weight corresponding to the color enhancement loss value $L_{ce}$, and $\gamma_2$ represents the weight corresponding to the image contrast loss value $L_{ct}$; the weights can be flexibly adjusted according to actual needs and are not specifically limited.
In the embodiment, the gray attribute difference between the enhancement map and the image under the normal illumination condition is calculated based on the sample enhancement reflection map, the sample enhancement map and the sample normal illumination map, which reflects the accuracy and naturalness of the enhancement image on the gray level, that is, whether the enhanced image is close to the image under the normal illumination condition in brightness and contrast, so that the enhancement accuracy of the low-illumination image can be improved, and serious noise and color distortion are avoided.
In step S550 of some embodiments, further, the present application may perform a weighted sum calculation on the visual feature loss value $L_1$, the image decomposition loss value $L_2$, and the image gray attribute loss value $L_3$, so the specific process of calculating the model loss value may be as shown in the following formula 10:

$$L = a \cdot L_1 + b \cdot L_2 + c \cdot L_3 \quad \text{(formula 10)}$$

In formula 10, $L$ represents the model loss value, $a$ represents the weight corresponding to the visual feature loss value $L_1$, $b$ represents the weight corresponding to the image decomposition loss value $L_2$, and $c$ represents the weight corresponding to the image gray attribute loss value $L_3$; the three weights can be flexibly adjusted according to actual needs and are not specifically limited.
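The loss combinations of formulas 4, 9, and 10 all reduce to weighted sums; with the illustrative weights mentioned in the text (the defaults below are example values, not fixed by the method), they can be sketched as:

```python
def visual_feature_loss(l_content, l_texture, l_color,
                        a1=1.0, a2=0.4, a3=0.1):
    """Formula 4: L1 = a1*L_content + a2*L_texture + a3*L_color."""
    return a1 * l_content + a2 * l_texture + a3 * l_color

def gray_attribute_loss(l_color_enh, l_contrast, g1=1.0, g2=1.0):
    """Formula 9: L3 = g1*L_ce + g2*L_ct (weights are adjustable)."""
    return g1 * l_color_enh + g2 * l_contrast

def model_loss(l1, l2, l3, a=1.0, b=1.0, c=1.0):
    """Formula 10: L = a*L1 + b*L2 + c*L3 (weights are adjustable)."""
    return a * l1 + b * l2 + c * l3
```
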
Furthermore, the application can perform iterative training according to the constructed model loss value L until reaching the training ending condition preset by the model, such as reaching the preset iteration times or reaching the preset recognition accuracy, without limitation.
In the embodiment, the application can guide the model to learn how to improve the enhancement effect of the low-illumination image by comprehensively evaluating different loss values, so that the model is more similar to the image under the normal illumination condition in vision, the model can be helped to optimize the performance of the model on a plurality of layers, the enhancement accuracy of the low-illumination image is improved, serious noise and color distortion are avoided, and the recognition performance of the enhanced image is improved.
In step S170 of some embodiments, after the image enhancement model is obtained through training, the present application may perform image enhancement on a target image input to the model based on the image enhancement model to obtain a target enhanced image, where the obtained target enhanced image retains both the essential attributes of the image and the light-dark contrast.
In some embodiments, referring to fig. 8, fig. 8 is a schematic flowchart of image enhancement based on an image enhancement model according to an embodiment of the present application. The image enhancement model includes a feature extraction layer 810, an image coding layer 820, and a decoding layer 830. The application may enhance the gray-scale attributes of the input target image to obtain a brightness enhancement map and a contrast enhancement map, and decompose the target image to obtain a target reflection map. Further, the target image, the target reflection map, the brightness enhancement map, and the contrast enhancement map may be input together into the image enhancement model for image enhancement. Specifically, the feature extraction layer 810 may perform feature extraction on the target image and the target reflection map respectively to obtain an original image feature and a target reflection map feature, and perform feature fusion on the original image feature and the target reflection map feature to obtain a target image feature. The brightness enhancement map and the contrast enhancement map may be respectively image-coded based on the image coding layer 820 to obtain a target brightness enhancement feature and a target contrast enhancement feature, and feature extraction may then be performed on the feature obtained by stitching the target brightness enhancement feature and the target contrast enhancement feature to obtain a target enhancement feature. Further, the present application may perform feature fusion on the target image feature and the target enhancement feature based on the decoding layer 830, and perform image decoding on the fused feature to obtain the target enhanced image.
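The dataflow through the three layers of fig. 8 can be sketched as follows. This is a structural sketch only: the `extract`, `encode`, `fuse`, `concat`, and `decode` callables stand in for the real network layers, whose internals the embodiment does not fix here, and all names are assumptions introduced for illustration.

```python
def enhance(target_image, reflection_map, brightness_map, contrast_map,
            extract, encode, fuse, concat, decode):
    # Feature extraction layer 810: extract features from the target image
    # and the target reflection map, then fuse them into the target image feature.
    img_feat = extract(target_image)
    refl_feat = extract(reflection_map)
    target_image_feature = fuse(img_feat, refl_feat)

    # Image coding layer 820: encode the two gray-attribute enhancement maps,
    # stitch the results, then extract the combined target enhancement feature.
    bright_feat = encode(brightness_map)
    contrast_feat = encode(contrast_map)
    target_enhancement_feature = extract(concat(bright_feat, contrast_feat))

    # Decoding layer 830: fuse both feature streams and decode to an image.
    fused = fuse(target_image_feature, target_enhancement_feature)
    return decode(fused)
```

With toy stand-ins (e.g. `extract=lambda x: f"ex({x})"`), calling `enhance` simply composes the callables in the order of fig. 8, which makes the layer ordering easy to verify before wiring in real network modules.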
It should be noted that any third-party software tools or components mentioned in the embodiments of the present application are presented by way of example only and do not represent actual use.
Compared with image enhancement methods in the related art, which can only enhance certain noise-free low-illumination images and only adjust the contrast or brightness of the image, the embodiments of the present application consider both the brightness enhancement map and the contrast enhancement map when enhancing an image, so that the detailed brightness and contrast characteristics of the low-illumination image can be enhanced and learned. The model is trained with a more comprehensive loss function obtained through comparison across multiple dimensions, so that the generated enhanced image retains both the essential attributes of the image and the light-dark contrast. Therefore, the embodiments of the present application can improve the enhancement accuracy of the low-illumination image, avoid severe noise and color distortion, and improve the recognition performance of the enhanced image.
Referring to fig. 9, an embodiment of the present application further provides an image enhancement apparatus, which may implement the image enhancement method, where the apparatus includes:
The obtaining module 910 is configured to obtain sample data, where the sample data includes a sample original illumination map and a sample normal illumination map, the sample original illumination map is an image including a sample object collected under low illumination, and the sample normal illumination map is an image including a sample object collected under normal illumination;
The extracting module 920 is configured to perform feature extraction on the original illumination map of the sample based on a feature extracting layer of a preset enhancement model, so as to obtain a feature of the low-illumination image of the sample;
the attribute enhancement module 930 is configured to perform image gray attribute enhancement on the sample original illumination map to obtain a sample brightness enhancement map and a sample contrast enhancement map;
The encoding module 940 is configured to perform image encoding on the sample brightness enhancement map and the sample contrast enhancement map based on an image encoding layer of a preset enhancement model, so as to obtain sample enhancement features;
The decoding module 950 is configured to perform image decoding on the sample enhancement feature and the sample low-illumination image feature based on a decoding layer of a preset enhancement model, so as to obtain a sample enhancement map;
the training module 960 is configured to determine a model loss value based on the sample original illumination map, the sample normal illumination map and the sample enhancement map, and perform parameter adjustment on a preset enhancement model based on the model loss value to obtain an image enhancement model;
The image enhancement module 970 is configured to perform image enhancement on the target image based on the image enhancement model, so as to obtain a target enhanced image.
The specific implementation of the image enhancement device in the embodiment of the present application is substantially the same as the specific embodiment of the image enhancement method described above, and will not be described herein.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the image enhancement method when executing the computer program. The electronic equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 10, fig. 10 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
The processor 1010 may be implemented by a general-purpose Central Processing Unit (CPU), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided by the embodiments of the present application;
The memory 1020 may be implemented in the form of a Read-Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The memory 1020 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present disclosure are implemented by software or firmware, the relevant program code is stored in the memory 1020 and invoked by the processor 1010 to execute the image enhancement method of the embodiments of the present disclosure;
An input/output interface 1030 for implementing information input and output;
The communication interface 1040 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (such as USB, network cable, etc.), or may implement communication in a wireless manner (such as mobile network, WIFI, bluetooth, etc.);
A bus 1050 that transfers information between the various components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040);
wherein processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 implement communication connections therebetween within the device via a bus 1050.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the image enhancement method when being executed by a processor.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by persons skilled in the art that the embodiments of the application are not limited by the illustrations, and that more or fewer steps than those shown may be included, or certain steps may be combined, or different steps may be included.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" is used to describe an association relationship of an associated object, and indicates that three relationships may exist, for example, "a and/or B" may indicate that only a exists, only B exists, and three cases of a and B exist simultaneously, where a and B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one of a, b or c may represent a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including multiple instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The storage medium includes various media capable of storing programs, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and are not thereby limiting the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A method of image enhancement, the method comprising:
acquiring sample data, wherein the sample data comprises a sample original illumination map and a sample normal illumination map, the sample original illumination map is an image which is acquired under low illumination and contains a sample object, and the sample normal illumination map is an image which is acquired under normal illumination and contains the sample object;
Performing feature extraction on the original sample illumination map based on a feature extraction layer of a preset enhancement model to obtain sample low-illumination image features;
Performing image gray scale attribute enhancement on the sample original illumination map to obtain a sample brightness enhancement map and a sample contrast enhancement map;
image coding is carried out on the sample brightness enhancement map and the sample contrast enhancement map based on the image coding layer of the preset enhancement model, so that sample enhancement characteristics are obtained;
Performing image decoding on the sample enhancement features and the sample low-illumination image features based on a decoding layer of the preset enhancement model to obtain a sample enhancement map;
Determining a model loss value based on the sample original illumination map, the sample normal illumination map and the sample enhancement map, and performing parameter adjustment on the preset enhancement model based on the model loss value to obtain an image enhancement model;
and carrying out image enhancement on the target image based on the image enhancement model to obtain a target enhanced image.
2. The method of claim 1, wherein the feature extraction layer based on the preset enhancement model performs feature extraction on the sample original illumination map to obtain sample low-illumination image features, and the method comprises:
Performing image decomposition on the original sample illumination map to obtain a sample reflection map and a sample illumination map;
and carrying out feature extraction on the sample reflection map and the sample original illumination map based on the feature extraction layer to obtain the sample low-illumination image features.
3. The method of claim 2, wherein the determining a model loss value based on the sample original illumination map, the sample normal illumination map, and the sample enhancement map comprises:
Performing visual characteristic loss calculation based on the sample normal illumination map and the sample enhancement map to obtain a visual characteristic loss value;
performing image decomposition loss calculation based on the sample original illumination map, the sample reflection map and the sample illumination map to obtain an image decomposition loss value;
performing image decomposition on the sample enhancement map to obtain a sample enhancement reflection map;
Performing image gray scale attribute loss calculation based on the sample enhanced reflection map, the sample enhanced map and the sample normal illumination map to obtain an image gray scale attribute loss value;
and weighting and calculating the vision characteristic loss value, the image decomposition loss value and the image gray attribute loss value to obtain the model loss value.
4. The method of claim 3, wherein the performing a visual feature loss calculation based on the sample normal illumination map and the sample enhancement map to obtain a visual feature loss value comprises:
Performing color loss calculation on the sample normal illumination map and the sample enhancement map based on a preset color loss function to obtain an image color loss value;
Performing texture loss calculation on the sample normal illumination map and the sample enhancement map based on a preset texture loss function to obtain an image texture loss value;
Performing content loss calculation on the sample normal illumination map and the sample enhancement map based on a preset content loss function to obtain an image content loss value;
and carrying out weighted sum calculation on the image color loss value, the image texture loss value and the image content loss value to obtain a visual characteristic loss value.
5. The method of claim 3, wherein the performing image gray scale attribute loss calculation based on the sample enhanced reflectance map, the sample enhanced map, and the sample normal illumination map to obtain an image gray scale attribute loss value comprises:
Performing color enhancement loss calculation on the sample enhancement reflection map and the sample normal illumination map based on a preset three-channel weight and a preset brightness value to obtain a color enhancement loss value, wherein the preset brightness value is used for indicating the enhancement degree of brightness attribute;
performing image contrast loss calculation based on the sample enhancement map and the sample normal illumination map to obtain an image contrast loss value;
and carrying out weighted sum calculation on the color enhancement loss value and the image contrast loss value to obtain the image gray attribute loss value.
6. The method according to any one of claims 1 to 5, wherein the image coding layer based on the preset enhancement model performs image coding on the sample brightness enhancement map and the sample contrast enhancement map to obtain sample enhancement features, including:
Image coding is carried out on the sample brightness enhancement graph, so that brightness enhancement characteristics are obtained;
image coding is carried out on the sample contrast enhancement graph, so that contrast enhancement characteristics are obtained;
Performing feature stitching on the brightness enhancement features and the contrast enhancement features to obtain sample stitching features;
and extracting the characteristics of the sample splicing characteristics to obtain the sample enhancement characteristics.
7. The method of claim 6, wherein the feature stitching the brightness enhancement feature and the contrast enhancement feature to obtain a sample stitched feature comprises:
Acquiring the detection precision of the sample target based on the target type of the sample target;
Determining enhancement parameters based on the target type and the detection accuracy, the enhancement parameters including a brightness enhancement weight of the brightness enhancement feature and a contrast enhancement weight of the contrast enhancement feature;
and performing feature weighted stitching based on the brightness enhancement features, the brightness enhancement weights, the contrast enhancement features and the contrast enhancement weights to obtain the sample stitching features.
8. An image enhancement device, the device comprising:
the acquisition module is used for acquiring sample data, wherein the sample data comprises a sample original illumination map and a sample normal illumination map, the sample original illumination map is an image which is acquired under low illumination and contains a sample object, and the sample normal illumination map is an image which is acquired under normal illumination and contains the sample object;
the extraction module is used for carrying out feature extraction on the original sample illumination map based on a feature extraction layer of a preset enhancement model to obtain sample low-illumination image features;
The attribute enhancement module is used for carrying out image gray attribute enhancement on the sample original illumination map to obtain a sample brightness enhancement map and a sample contrast enhancement map;
The coding module is used for carrying out image coding on the sample brightness enhancement map and the sample contrast enhancement map based on an image coding layer of the preset enhancement model to obtain sample enhancement characteristics;
The decoding module is used for carrying out image decoding on the sample enhancement features and the sample low-illumination image features based on a decoding layer of the preset enhancement model to obtain a sample enhancement map;
the training module is used for determining a model loss value based on the sample original illumination map, the sample normal illumination map and the sample enhancement map, and carrying out parameter adjustment on the preset enhancement model based on the model loss value to obtain an image enhancement model;
And the image enhancement module is used for carrying out image enhancement on the target image based on the image enhancement model to obtain a target enhanced image.
9. An electronic device comprising a memory storing a computer program and a processor implementing the method of any of claims 1 to 7 when the computer program is executed by the processor.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202411530817.XA 2024-10-29 2024-10-29 Image enhancement method, image enhancement device, electronic device, and storage medium Active CN119295359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411530817.XA CN119295359B (en) 2024-10-29 2024-10-29 Image enhancement method, image enhancement device, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN119295359A true CN119295359A (en) 2025-01-10
CN119295359B CN119295359B (en) 2025-09-30

Family

ID=94153098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411530817.XA Active CN119295359B (en) 2024-10-29 2024-10-29 Image enhancement method, image enhancement device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN119295359B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819702A (en) * 2019-11-15 2021-05-18 北京金山云网络技术有限公司 Image enhancement method and device, electronic equipment and computer readable storage medium
CN114549362A (en) * 2022-02-28 2022-05-27 讯飞智元信息科技有限公司 Low-illumination image enhancement method, related device and readable storage medium
CN114638749A (en) * 2022-02-14 2022-06-17 深圳大学 Low-illumination image enhancement model, method, electronic device and storage medium
WO2023236445A1 (en) * 2022-06-09 2023-12-14 北京大学 Low-illumination image enhancement method using long-exposure compensation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王晓红;翟焱修;麻祥才;: "基于NSST多尺度自适应的Retinex低照度图像增强算法", 包装工程, no. 03, 10 February 2020 (2020-02-10), pages 211 - 217 *

Also Published As

Publication number Publication date
CN119295359B (en) 2025-09-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant