CN109345487B - Image enhancement method and computing device - Google Patents
- Publication number
- CN109345487B (application CN201811252617.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- output
- processed
- enhancement
- loss
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image enhancement method comprising the following steps: inputting an image to be processed into a preset image enhancement model and obtaining an output image after multiple rounds of convolution processing; respectively converting the image to be processed and the output image into a predetermined color space; and fusing the image to be processed and the output image in the predetermined color space to generate an enhanced image. The invention also discloses a computing device for executing the method.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image enhancement method and a computing device.
Background
With the development of internet technology, people increasingly rely on rapidly acquiring information such as pictures and videos through a network. However, the visual quality of many pictures spread over the internet is mediocre, and internet users often find it difficult to locate pictures with both good content and good color. On the other hand, mobile terminals (such as mobile phones and tablet computers) have become common photographing devices, but photos taken by them rarely meet higher visual requirements. For both reasons, methods that enhance the visual effect of images have wide application scenarios.
Conventional image enhancement algorithms typically adjust the pixel values of each channel of the image by fixed parameter values to improve the sharpness, saturation and contrast of the image. However, such methods produce a single, fixed effect and are prone to problems such as unnatural adjustments and color blocking. The development of the Convolutional Neural Network (CNN) has brought new approaches to image processing, and in some respects the enhancement effect of a CNN is superior to that of conventional algorithms, but CNN-based algorithms are prone to problems such as unnatural transitions and color cast.
Therefore, there is a need for an image enhancement scheme that overcomes the above-mentioned disadvantages.
Disclosure of Invention
To this end, the present invention provides an image enhancement method and computing device in an attempt to solve or at least alleviate at least one of the problems identified above.
According to an aspect of the present invention, there is provided an image enhancement method, performed in a computing device, comprising: inputting an image to be processed into a preset image enhancement model and obtaining an output image after multiple rounds of convolution processing; respectively converting the image to be processed and the output image into a predetermined color space; and fusing the image to be processed and the output image in the predetermined color space to generate an enhanced image.
Optionally, in the method according to the present invention, the step of respectively converting the image to be processed and the output image into a predetermined color space further comprises: converting the image to be processed into the predetermined color space to obtain a first to-be-processed map, a second to-be-processed map and a third to-be-processed map on three channels; and converting the output image into the predetermined color space to obtain a first output map, a second output map and a third output map on three channels.
Optionally, in the method according to the present invention, the step of fusing the image to be processed and the output image in the predetermined color space to generate an enhanced image includes: judging the pixel values of pixel points in the first to-be-processed map, and generating a first enhancement map by combining the judgment result with the first output map; combining the second to-be-processed map and the second output map to generate a second enhancement map; combining the third to-be-processed map and the third output map to generate a third enhancement map; and fusing the first enhancement map, the second enhancement map and the third enhancement map to generate the enhanced image.
Optionally, in the method according to the present invention, the step of generating the first enhancement map by combining the judgment result with the first output map includes: if the pixel value of a pixel point in the first to-be-processed map is smaller than a first threshold value, taking the pixel value of the corresponding pixel point in the first output map as the pixel value of the corresponding pixel point in the first enhancement map; if the pixel value of a pixel point in the first to-be-processed map is larger than a second threshold value, combining the pixel values of the corresponding pixel points in the first to-be-processed map and the first output map in a first mode to generate the pixel value of the corresponding pixel point in the first enhancement map; and if the pixel value of a pixel point in the first to-be-processed map is neither smaller than the first threshold value nor larger than the second threshold value, combining the pixel values of the corresponding pixel points in the first to-be-processed map and the first output map in a second mode to generate the pixel value of the corresponding pixel point in the first enhancement map.
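The three-branch rule for the first (lightness) channel can be sketched as follows in NumPy, assuming Lab L-channel arrays in the 0-100 range. The concrete threshold values and the exact "first mode"/"second mode" combination formulas are not given in this passage, so placeholder thresholds and simple weighted averages stand in:

```python
import numpy as np

def fuse_l_channel(l_in, l_out, t1=20.0, t2=80.0, alpha=0.5):
    """Fuse the L channels of the to-be-processed image (l_in) and the
    model output (l_out) with the three-branch rule.

    t1, t2 : first and second thresholds (assumed values).
    alpha  : blend weight standing in for the unspecified "first mode"
             and "second mode" combination formulas.
    """
    l_in = np.asarray(l_in, dtype=np.float64)
    l_out = np.asarray(l_out, dtype=np.float64)
    fused = np.empty_like(l_in)

    dark = l_in < t1        # branch 1: take the output-map value directly
    bright = l_in > t2      # branch 2: combine in the "first mode"
    mid = ~dark & ~bright   # branch 3: combine in the "second mode"

    fused[dark] = l_out[dark]
    fused[bright] = alpha * l_in[bright] + (1.0 - alpha) * l_out[bright]
    fused[mid] = (1.0 - alpha) * l_in[mid] + alpha * l_out[mid]
    return fused
```

With alpha = 0.5 the two combination branches coincide; distinct formulas can be substituted once the two modes are fixed.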
Optionally, in the method according to the present invention, the step of generating the second enhancement map by combining the second to-be-processed map and the second output map includes performing a weighted calculation on the second to-be-processed map and the second output map to generate the second enhancement map; and the step of generating the third enhancement map by combining the third to-be-processed map and the third output map includes performing a weighted calculation on the third to-be-processed map and the third output map to generate the third enhancement map.
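The weighted calculation for the second and third channels reduces to a per-pixel weighted average; the weight value below is an assumption, since the text specifies only "weighted calculation":

```python
import numpy as np

def fuse_ab_channel(c_in, c_out, w=0.4):
    """Weighted fusion for the second/third (chrominance) channels.

    w is the weight given to the to-be-processed map (assumed value);
    the remainder (1 - w) goes to the output map.
    """
    c_in = np.asarray(c_in, dtype=np.float64)
    c_out = np.asarray(c_out, dtype=np.float64)
    return w * c_in + (1.0 - w) * c_out
```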
Optionally, in the method according to the invention, the predetermined color space is a Lab color space.
Optionally, in the method according to the present invention, the image enhancement model comprises a plurality of intermediate processing blocks and a result processing block connected in sequence, wherein each intermediate processing block comprises at least two convolution activation layers and a jump connection layer connected in sequence, the jump connection layer being adapted to add the input of the first convolution activation layer and the output of the last convolution activation layer of the intermediate processing block to which it belongs; the result processing block comprises a plurality of convolution activation layers; and a further convolution activation layer precedes the first intermediate processing block of the image enhancement model.
Optionally, in the method according to the present invention, the activation function of the convolution activation layer in each intermediate processing block is a ReLU function, and the number of intermediate processing blocks is 4.
Optionally, in the method according to the present invention, the result processing block comprises three convolution activation layers, wherein the activation functions of the first two convolution activation layers are ReLU functions and the activation function of the third convolution activation layer is a Tanh function.
Optionally, in the method according to the present invention, before the step of inputting the image to be processed into the preset image enhancement model, the method further includes a step of generating the preset image enhancement model by training: acquiring a plurality of training image pairs, wherein each training image pair comprises an input image and a target image, the input image being an image captured by a single-lens reflex (SLR) camera and the target image being an image obtained by adjusting the input image; and inputting the input image into a pre-trained image enhancement model, obtaining an output image after multiple rounds of convolution processing, calculating the loss value of the output image relative to the target image according to a preset loss function, and updating the parameters of the image enhancement model until the loss value meets a predetermined condition, whereupon training ends and the trained image enhancement model is obtained as the preset image enhancement model.
Optionally, in the method according to the present invention, the step of inputting the input image into the pre-trained image enhancement model further comprises: for each training image pair, cutting out at least one sub-image of a predetermined size from the input image as an input sub-image; cutting out each target sub-image from the target image according to the coordinate position of each input sub-image in the input image; and inputting the input sub-images into the pre-trained image enhancement model, obtaining output sub-images after multiple rounds of convolution processing, calculating the loss values of the output sub-images relative to the target sub-images according to the preset loss function, and updating the parameters of the image enhancement model until the loss value meets a predetermined condition, whereupon training ends and the trained image enhancement model is obtained as the preset image enhancement model.
Optionally, in the method according to the invention, the preset loss function is represented by the following formula: loss = λ1*color_loss + λ2*vgg_loss, where loss represents the loss value, color_loss is the first loss, vgg_loss is the second loss, and λ1 and λ2 are the corresponding weighting coefficients.
Optionally, in the method according to the present invention, the step of calculating the first loss comprises: respectively applying mean blurring to the output image and the corresponding target image to obtain a blurred output image and a blurred target image; and calculating the pixel distance values of corresponding pixel points in the blurred output image and the blurred target image as the first loss.
Optionally, in the method according to the present invention, the step of calculating the second loss comprises: respectively inputting the output image and the target image into a preset convolution network to generate respective feature maps; and calculating the pixel distance values of corresponding pixel points in the feature map of the output image and the feature map of the target image as the second loss.
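Assuming the preset convolution network (e.g., a VGG-style feature extractor) has already produced the two feature maps, the second loss reduces to a distance between them; a mean squared pixel distance is assumed here, since the exact metric is not specified:

```python
import numpy as np

def second_loss(feat_out, feat_tgt):
    """vgg_loss: pixel distance between the feature map of the output
    image and the feature map of the target image. An L2 (mean squared)
    distance is assumed; the feature maps are taken as given arrays."""
    f1 = np.asarray(feat_out, dtype=np.float64)
    f2 = np.asarray(feat_tgt, dtype=np.float64)
    return float(np.mean((f1 - f2) ** 2))
```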
According to an aspect of the invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing any of the methods described above.
According to an aspect of the present invention, there is provided a readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform any of the methods described above.
According to the image enhancement scheme of the invention, the image to be processed is input into a preset image enhancement model and an output image is obtained after model processing. Because an image processed by a convolutional network may suffer from problems such as color cast, the image to be processed and the output image are converted into a color space that accords with human visual perception and fused there to generate the final enhanced image. The quality of the resulting enhanced image is significantly better than that of the original image to be processed, and problems of overexposure and color overflow are well remedied.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic diagram of a computing device 100, according to one embodiment of the invention;
FIG. 2 shows a flow diagram of an image enhancement method 200 according to one embodiment of the invention;
FIG. 3 shows a block diagram of an image enhancement model 300 according to one embodiment of the invention;
FIG. 4 shows a block diagram of an intermediate processing block according to one embodiment of the invention;
FIG. 5 shows a block diagram of a result processing block according to one embodiment of the invention;
FIGS. 6A and 6B illustrate a training image pair, where FIG. 6A is an input image and FIG. 6B is a target image, according to one embodiment of the present invention; and
fig. 7A and 7B are diagrams illustrating contrast of enhancement effects according to an embodiment of the present invention, where fig. 7A is an image to be processed, and fig. 7B is an output image processed by a preset image enhancement model.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The image enhancement method of the invention is suitable for execution in a single computing device or a group of computing devices; that is, the enhancement of the input image to be processed is completed there. The computing device may be, for example, a server (e.g., a Web server or application server), a personal computer such as a desktop or notebook computer, or a portable mobile device such as a mobile phone, tablet computer, or smart wearable device, but is not limited thereto. According to a preferred embodiment, the image enhancement method of the present invention is performed in a computing device, which may be implemented, for example, as a distributed system with the Parameter Server architecture.
FIG. 1 shows a schematic diagram of a computing device 100, according to one embodiment of the invention. As shown in FIG. 1, in a basic configuration 102, a computing device 100 typically includes a system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a Digital Signal Processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a level one cache 110 and a level two cache 112, a processor core 114, and registers 116. The example processor core 114 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 118 may be used with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, system memory 106 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 106 may include an operating system 120, one or more applications 122, and program data 124. In some implementations, the application 122 may be arranged to be executed by the one or more processors 104 on the operating system using the program data 124.
Computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to the basic configuration 102 via the bus/interface controller 130. The example output device 142 includes a graphics processing unit 148 and an audio processing unit 150. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 158. An example communication device 146 may include a network controller 160, which may be arranged to facilitate communications with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, or program modules, and may include any information delivery media, such as carrier waves or other transport mechanisms, in a modulated data signal. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or dedicated wired network, and various wireless media such as acoustic, Radio Frequency (RF), microwave, Infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In the computing device 100 according to the present invention, the application 122 includes a plurality of program instructions for performing the image enhancement method 200, and the program data 124 may include data such as a training image pair, and parameters of a preset image enhancement model generated by training the training image pair.
FIG. 2 illustrates a flow diagram of a method 200 of image enhancement, the method 200 being suitable for execution in a computing device (e.g., the computing device 100 described above), according to one embodiment of the invention. As shown in fig. 2, the method 200 begins at step S210.
In step S210, the image to be processed is input into a preset image enhancement model, and an output image is obtained after multiple rounds of convolution processing. The image to be processed may be an image photographed by a mobile terminal or an image downloaded over a network. In an embodiment of the present invention, the preset image enhancement model is a fully convolutional neural network, so the image input to the model may have any size, and after processing by the preset image enhancement model, the output image has the same size as the input image to be processed.
It should be noted that the structure of the preset image enhancement model can be set by those skilled in the art according to actual needs, and the present invention is not limited in this respect. According to one embodiment, the image enhancement model comprises a plurality of intermediate processing blocks (B) and a result processing block (C) connected in sequence; the number of intermediate processing blocks is not limited by the embodiments of the present invention. Each intermediate processing block (B) comprises at least two convolution activation layers (A) and a jump connection layer (SKIP) connected in sequence, where each convolution activation layer comprises a convolution layer (CONV) and an activation layer (ACTI). The activation function of the activation layer can be chosen freely by those skilled in the art, for example a ReLU, Tanh, or Sigmoid function. The jump connection layer adds the input of the first convolution activation layer and the output of the last convolution activation layer of the intermediate processing block to which it belongs, and outputs the sum. The jump connection effectively preserves image details and helps improve the training efficiency and accuracy of the model. The result processing block (C) includes one or more convolution activation layers; the present invention limits neither the number of convolution activation layers in the result processing block nor the activation functions they employ.
In particular, according to one embodiment, a convolution activation layer A0 precedes the first intermediate processing block B1; this layer A0 likewise comprises a convolution layer CONV0 and an activation layer ACTI0.
FIG. 3 shows a schematic structural diagram of an image enhancement model 300 according to an embodiment of the invention. As shown in FIG. 3, the model 300 includes a convolution activation layer A0, four intermediate processing blocks B1 to B4, and a result processing block C, connected in sequence. The intermediate processing blocks B1 to B4 are similar in structure, and FIG. 4 illustrates this structure taking block B1 as an example. As shown in FIG. 4, the intermediate processing block B1 includes a convolution activation layer A1, a convolution activation layer A2, and a jump connection layer SKIP1 connected in sequence; layer A1 comprises a convolution layer CONV1 and an activation layer ACTI1 using a ReLU function, layer A2 comprises a convolution layer CONV2 and an activation layer ACTI2 using a ReLU function, and SKIP1 adds the input of A1 (i.e., the input of CONV1) and the output of A2 (i.e., the output of ACTI2) and outputs the sum.
FIG. 5 shows one configuration of the result processing block C in FIG. 3. As shown in FIG. 5, the result processing block C includes three convolution activation layers A9 to A11: layer A9 comprises a convolution layer CONV9 and an activation layer ACTI9 using a ReLU function, layer A10 comprises a convolution layer CONV10 and an activation layer ACTI10 using a ReLU function, and layer A11 comprises a convolution layer CONV11 and an activation layer ACTI11 using a Tanh function.
It should be noted that after the structure of the image enhancement model 300 is constructed, some parameters still need to be set in advance, such as the number and size of the convolution kernels used by each convolution layer (CONV), the stride of the convolution kernels, the amount of surrounding padding, and the like. The following table gives example parameters for the model 300 shown in FIGS. 3 to 5. (Each convolution activation layer includes a convolution layer and an activation layer; for the activation layer only the activation function needs to be chosen, and no other parameters need to be set in advance, so in the table the parameters listed for each convolution activation layer A are those of its convolution layer CONV.)
the image enhancement model 300 is an End-to-End (End to End) convolutional network, so that the output image is the same size as the input image.
The structure of the image enhancement model 300 and the basic parameters of each convolution layer are preset by those skilled in the art to obtain a pre-trained image enhancement model, which is then trained so that its output achieves the expected effect. Training the image enhancement model means determining the model parameters, including the weights at each position of each convolution kernel, the bias parameters, and the like. According to an embodiment of the present invention, the method 200 includes a step of generating the preset image enhancement model through training, which specifically comprises the following two steps.
In a first step, a plurality of training image pairs are obtained, each comprising an input image and a target image. In a preferred embodiment of the present invention, considering that images captured by a single-lens reflex (SLR) camera have little noise and rich detail, the input images are selected from images captured by an SLR camera (for example, 1000 images). A professional then adjusts all of the captured input images (yielding 1000 adjusted images) so that they have better contrast and saturation, and each adjusted image serves as the target image corresponding to its input image. FIGS. 6A and 6B illustrate a training image pair according to one embodiment of the present invention, where FIG. 6A is an input image and FIG. 6B is a target image.
In a second step, the input images of all training image pairs are fed into the pre-trained image enhancement model, and output images are obtained after multiple rounds of convolution processing. The loss value of each output image relative to its target image is calculated according to the preset loss function, and the parameters of the image enhancement model are updated until the loss value meets a predetermined condition. (During training, the loss value generally decreases as the number of training rounds increases; when the loss value converges, i.e., when the absolute value of the difference between the loss values of two consecutive training rounds is smaller than a preset threshold, model training is considered complete.) Training then ends, and the trained image enhancement model is obtained as the preset image enhancement model.
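The convergence rule just described — stop when the absolute difference between two consecutive loss values drops below a preset threshold — can be sketched as follows (the threshold value eps is an assumption):

```python
def train_until_converged(losses, eps=1e-3):
    """Walk a sequence of per-round loss values and return the round
    index at which the convergence rule fires, i.e. the first round
    where |loss_t - loss_{t-1}| < eps, or None if it never does.

    eps is the preset threshold from the description (value assumed).
    """
    prev = None
    for round_idx, loss in enumerate(losses):
        if prev is not None and abs(loss - prev) < eps:
            return round_idx  # training is considered complete here
        prev = loss
    return None
```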
According to another embodiment of the invention, in order to balance training effect and training speed, the training image pairs obtained in the first step are further processed, and the processed pairs are used to train the image enhancement model. In this embodiment, the further processing may consist of cropping small-sized images from the training image pairs: for each training image pair, at least one sub-image of a predetermined size (e.g., 100×100) is first cropped from the input image as an input sub-image, and each corresponding target sub-image is cropped from the corresponding coordinate position of the target image according to the coordinate position of each input sub-image in the input image. After all training image pairs have been cropped in this way, a training set with a larger number of samples is obtained. It should be noted that the embodiments of the present invention do not limit the manner of cropping; sub-images of the predetermined size may be cropped from the original images at any angle and at any position. Then, all of the cropped input sub-images are input into the pre-trained image enhancement model (a subset of the input sub-images may also be selected for training, which the embodiments of the present invention do not limit), and output sub-images are obtained after multiple rounds of convolution processing. During training, the loss values of the output sub-images relative to the target sub-images are calculated according to the preset loss function, and the parameters of the image enhancement model are updated until the loss value meets the predetermined condition; training then ends, and the trained image enhancement model is obtained as the preset image enhancement model.
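The paired cropping step can be sketched as follows. The number of crops per pair and the random generator are assumptions; the key constraint from the text — each target sub-image is taken at the same coordinates as its input sub-image — is enforced by reusing the sampled offsets:

```python
import numpy as np

def crop_pairs(input_img, target_img, size=100, n=4, rng=None):
    """Crop n sub-images of size x size from random positions of the
    input image, and crop the target sub-images from the SAME coordinate
    positions of the target image (n and the RNG are assumptions; the
    100x100 size matches the example in the description)."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = input_img.shape[:2]
    pairs = []
    for _ in range(n):
        y = int(rng.integers(0, h - size + 1))
        x = int(rng.integers(0, w - size + 1))
        pairs.append((input_img[y:y + size, x:x + size],
                      target_img[y:y + size, x:x + size]))
    return pairs
```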
The choice of preset loss function affects how well the image enhancement model trains. According to an implementation of the invention, the preset loss function is a mixed loss, expressed by the following formula:
loss=λ1*color_loss+λ2*vgg_loss
where loss represents the loss value, color_loss is the first loss, vgg_loss is the second loss, and λ1 and λ2 are the corresponding weighting coefficients. It should be noted that the values of λ1 and λ2 may be set by one skilled in the art according to the training process, and the invention is not limited in this respect. According to a preferred embodiment, the values are λ1 = 10 and λ2 = 1.
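A minimal sketch of the mixed loss, using the preferred weights λ1 = 10 and λ2 = 1 from the text:

```python
def mixed_loss(color_loss, vgg_loss, lam1=10.0, lam2=1.0):
    # loss = λ1*color_loss + λ2*vgg_loss; λ1 = 10, λ2 = 1 are the
    # preferred weights given in the text.
    return lam1 * color_loss + lam2 * vgg_loss
```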
A method of calculating each of the first loss and the second loss is given below.
(a) The step of calculating the first loss comprises: first, mean-blur the output image and the corresponding target image (i.e., the target image in the training image pair to which the input image of that output image belongs) to obtain a blurred output image and a blurred target image; blurring the images removes the interference of high-frequency information so that the model can learn more of the color information. Then, calculate the pixel distance values of corresponding pixel points in the blurred output image and the blurred target image as the first loss. In one embodiment, the mean blurring of the output image and the target image is implemented with the average-pooling operation (mean-pooling) of a convolutional neural network, although those skilled in the art may adopt other algorithms to blur the images; the embodiment of the invention is not limited in this respect.
For one frame of the output image and its target image, the calculation of the first loss can be described simply as follows (taking the Euclidean distance as the pixel distance):

color_loss = Σ_{i=1..W} Σ_{j=1..H} √( (r_ij − r_ij′)² + (g_ij − g_ij′)² + (b_ij − b_ij′)² )

In the above formula, W and H are the horizontal and vertical dimensions of the output image and the target image, respectively, (i, j) represents a coordinate position in the image, r_ij, g_ij and b_ij represent the R, G, B values of the pixel with coordinates (i, j) in the (blurred) output image, and r_ij′, g_ij′ and b_ij′ represent the R, G, B values of the pixel with coordinates (i, j) in the (blurred) target image. This is equivalent to traversing all pixel points in the image, computing the corresponding pixel distance values, and adding all the pixel distance values together to obtain the first loss. For N training image pairs, the mean of the first losses of the N output images may be taken as the first loss of one training pass.
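The first loss can be sketched as follows, implementing the mean blur as average pooling and assuming the Euclidean distance as the pixel distance (the exact metric is an assumption); image dimensions are assumed divisible by the pooling window for brevity:

```python
import numpy as np

def mean_blur(img, k=4):
    # Mean-pooling blur (average pooling with a k x k window): removes
    # high-frequency detail so the loss focuses on color information.
    h, w, c = img.shape
    return img.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def color_loss(output, target, k=4):
    # First loss: per-pixel Euclidean distance between the blurred
    # output image and the blurred target image, summed over all pixels.
    bo, bt = mean_blur(output, k), mean_blur(target, k)
    return float(np.sqrt(((bo - bt) ** 2).sum(axis=-1)).sum())
```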
(b) The step of calculating the second loss comprises: first, input the output image and the target image respectively into a preset convolutional network to generate their respective feature maps (feature maps may be generated for every layer of the preset convolutional network, or the feature maps of only some layers may be extracted; the invention is not limited in this respect). Then, calculate the pixel distance values of corresponding pixel points in each feature map of the output image and the corresponding feature map of the target image as the second loss. In one embodiment, the preset convolutional network is a VGG-19 network initialized with parameters trained on the ImageNet data set; a feature map is generated for each layer, the corresponding pixel distance value is calculated as that layer's second loss, and finally the second losses of all layers are averaged to give the second loss of one training pass.
For the feature map of one frame of the output image and the corresponding feature map of the target image, the calculation of the second loss can be described simply as follows (again taking the Euclidean distance as the pixel distance):

vgg_loss = Σ_{i=1..W′} Σ_{j=1..H′} √( (vr_ij − vr_ij′)² + (vg_ij − vg_ij′)² + (vb_ij − vb_ij′)² )

In the above formula, W′ and H′ are the horizontal and vertical dimensions of the feature map of the output image and the feature map of the target image, respectively, (i, j) represents a coordinate position, vr_ij, vg_ij and vb_ij represent the R, G, B values of the pixel with coordinates (i, j) in the feature map of the output image, and vr_ij′, vg_ij′ and vb_ij′ represent the R, G, B values of the pixel with coordinates (i, j) in the feature map of the target image. This is equivalent to traversing all pixel points, computing the corresponding pixel distance values, and adding them together as the second loss. For N training image pairs, the average of the N second losses may be taken as the second loss of one training pass.
It should be noted that, as described above, the input sub-image may also be used to train the image enhancement model, and the loss values of the output sub-image and the target sub-image need to be calculated. Here, only the output image is taken as an example for description, and a person skilled in the art should be able to calculate the loss value from the output sub-image and the target sub-image according to the description herein, and details thereof are not repeated herein.
In summary, the image to be processed is input into the pre-trained image enhancement model, and a preliminarily enhanced output image is obtained after convolution processing. Compared with the image to be processed, the output image is improved in resolution, contrast and saturation, but problems such as unnatural transitions and color shift (e.g., locally over-bright or grayish regions) remain.
According to the invention, the preliminarily enhanced output image is further refined. Generally, the images to be processed are RGB images, and the RGB color space, designed around the principles of emitted light, does not match the visual characteristics of the human eye. Therefore, to make the adjusted image better match human perception of luminance and chrominance, in the embodiment according to the present invention the output image is converted into a color space matching human perception for further processing.
In the subsequent step S220, the image to be processed and the output image are each converted into a predetermined color space.
According to the embodiment of the invention, the predetermined color space is the Lab color space, chosen mainly because it better matches human perception and is not affected by the display device: the L channel mainly expresses lightness, the a channel mainly covers the red-green axis, and the b channel mainly covers the yellow-blue axis. Generally, the images to be processed are RGB images, so both the image to be processed and the output image need to be converted from the RGB color space to the Lab color space. The specific process of color-space conversion is not expanded here; it should be noted that, depending on the required processing effect, the images may also be converted into other color spaces (e.g., another color space with luminance and chrominance separated), and no limitation is imposed here.
The image to be processed is converted into the Lab color space to obtain, on the three channels, a first to-be-processed map, a second to-be-processed map and a third to-be-processed map; similarly, the output image is converted into the Lab color space to obtain a first output map (the L-channel output map), a second output map (the a-channel output map) and a third output map (the b-channel output map). For convenience of explanation, in the following the first, second and third to-be-processed maps are denoted l1, a1 and b1 respectively, and the first, second and third output maps are denoted l2, a2 and b2 respectively.
In the Lab color space, the L channel ranges from 0 to 100, and the a and b channels range from −128 to +127. In one embodiment of the invention, to facilitate the subsequent fusion calculation, the images on the three channels are normalized after the color-space conversion so that the pixel values of all three channels lie between 0 and 255.
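The normalization step can be sketched as follows (the RGB-to-Lab conversion itself would typically be done with a library routine such as OpenCV's cvtColor and is not reproduced here):

```python
import numpy as np

def normalize_lab(lab):
    # Normalize the three Lab channels to [0, 255] as described in the
    # text: L lies in [0, 100], while a and b lie in [-128, +127].
    l = lab[..., 0] * (255.0 / 100.0)   # [0, 100]    -> [0, 255]
    a = lab[..., 1] + 128.0             # [-128, 127] -> [0, 255]
    b = lab[..., 2] + 128.0
    return l, a, b
```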
Subsequently in step S230, the image to be processed and the output image are fused in a predetermined color space to generate an enhanced image.
According to the embodiment of the invention, the output image is processed with a per-channel fusion algorithm to improve its enhancement effect. The processing is explained below for each of the three channels.
(1) On the L channel, the pixel values of the pixel points in the first to-be-processed map l1 are judged, and a first enhancement map (denoted l) is generated from the first output map l2 according to the judgment result.
Specifically, when the pixel value of a pixel point in the first to-be-processed map l1 is less than a first threshold (50 in one embodiment according to the present invention), the pixel value of the corresponding pixel point of the first output map l2 is taken as the pixel value of the corresponding pixel point of the first enhancement map l. When the pixel value of a pixel point in l1 is greater than a second threshold (200 in one embodiment according to the present invention), the pixel values of the corresponding pixel points in l1 and l2 are combined in a first mode of weighting to generate the pixel value of the corresponding pixel point of l. When the pixel value of a pixel point in l1 is neither less than the first threshold nor greater than the second threshold, the pixel values of the corresponding pixel points in l1 and l2 are combined in a second mode of weighting to generate the pixel value of the corresponding pixel point of l.
In a preferred embodiment, the judgment process can be represented by the following piecewise formula, where (w1, w2) and (v1, v2) stand for the first-mode and second-mode weighting coefficients:

l(x, y) = l2(x, y),                    if l1(x, y) < 50
l(x, y) = w1*l1(x, y) + w2*l2(x, y),   if l1(x, y) > 200
l(x, y) = v1*l1(x, y) + v2*l2(x, y),   otherwise

In the above formula, l(x, y) is the value of the pixel point at the (x, y) coordinate in the first enhancement map, l1(x, y) is the value of the pixel point at the (x, y) coordinate in the first to-be-processed map, and l2(x, y) is the value of the pixel point at the (x, y) coordinate in the first output map. It should be noted that only one preferred embodiment is shown here; the invention is intended to protect the idea of performing a weighted calculation on the first to-be-processed map and the first output map case by case, according to the pixel values of the first to-be-processed map, to improve the enhancement effect of the first output map, and the values of the specific weighting coefficients are not limited.
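The piecewise L-channel fusion can be sketched as follows; the thresholds 50 and 200 come from the text, while the weight pairs are hypothetical placeholders, since the specific coefficients are left open in the patent:

```python
import numpy as np

def fuse_l_channel(l1, l2, t1=50.0, t2=200.0,
                   w_bright=(0.6, 0.4), w_mid=(0.3, 0.7)):
    # Piecewise fusion on the L channel per the three cases above.
    # w_bright and w_mid are illustrative placeholder weights.
    return np.where(
        l1 < t1, l2,                                   # dark: take the output map
        np.where(l1 > t2,
                 w_bright[0] * l1 + w_bright[1] * l2,  # bright: first mode
                 w_mid[0] * l1 + w_mid[1] * l2))       # middle: second mode
```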
(2) On the a channel, a second enhancement map (denoted a) is generated by combining the second to-be-processed map a1 and the second output map a2. In one embodiment, the second enhancement map a is generated by a weighted calculation over a1 and a2. In a preferred embodiment, the calculation formula of the second enhancement map is expressed as follows:
a(x,y)=0.15*a1(x,y)+0.85*a2(x,y)
In the above formula, a(x, y) is the value of the pixel point at the (x, y) coordinate in the second enhancement map, a1(x, y) is the value of the pixel point at the (x, y) coordinate in the second to-be-processed map, and a2(x, y) is the value of the pixel point at the (x, y) coordinate in the second output map. It should be noted that only one preferred embodiment is illustrated here; the invention is intended to protect the idea of performing a weighted calculation on the second to-be-processed map and the second output map to improve the enhancement effect of the second output map, and the values of the specific weighting coefficients are not limited.
(3) On the b channel, a third enhancement map (denoted b) is generated by combining the third to-be-processed map b1 and the third output map b2. In one embodiment, the third enhancement map b is generated by a weighted calculation over b1 and b2. In a preferred embodiment, the calculation formula of the third enhancement map is expressed as follows:
b(x,y)=0.15*b1(x,y)+0.85*b2(x,y)
In the above formula, b(x, y) is the value of the pixel point at the (x, y) coordinate in the third enhancement map, b1(x, y) is the value of the pixel point at the (x, y) coordinate in the third to-be-processed map, and b2(x, y) is the value of the pixel point at the (x, y) coordinate in the third output map. It should be noted that only one preferred embodiment is illustrated here; the invention is intended to protect the idea of performing a weighted calculation on the third to-be-processed map and the third output map to improve the enhancement effect of the third output map, and the values of the specific weighting coefficients are not limited.
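The weighted fusion used on the a and b channels can be sketched as follows, with the preferred coefficients 0.15 and 0.85 from the text:

```python
import numpy as np

def fuse_chroma(c1, c2, w1=0.15, w2=0.85):
    # Weighted fusion used on both the a and b channels in the preferred
    # embodiment: c(x, y) = 0.15*c1(x, y) + 0.85*c2(x, y).
    return w1 * np.asarray(c1) + w2 * np.asarray(c2)
```

The same function serves both chroma channels, since the preferred embodiment uses identical coefficients for a and b.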
In summary, the processing on the L channel prevents dark regions of the generated first enhancement map from becoming too bright and bright regions from being overexposed, while the processing on the a-channel and b-channel maps also helps avoid color overflow. Finally, the first enhancement map l, the second enhancement map a and the third enhancement map b are fused to generate the refined enhanced image. Of course, in some embodiments the refined enhanced image may be further converted from the Lab color space back to the RGB color space and displayed as the enhanced image.
Fig. 7A and 7B compare the enhancement effect according to an embodiment of the present invention, where Fig. 7A is the image to be processed and Fig. 7B is the enhanced image. Comparing the two, the enhanced image generated by the image enhancement scheme of the invention is of markedly better quality than the original image to be processed, and repairs overexposure and color-overflow problems well.
In addition, the method 200 according to the invention combines a deep-learning algorithm with conventional image processing and draws on the advantages of both: it exploits the strong learning ability of deep learning, absorbing the various enhancement effects present in the training image set, while also exploiting the direct, fast fine-tuning of conventional algorithms. The enhanced image obtained by the method 200 is smooth and visually pleasing, overcomes problems such as unnatural transitions, and has strong practical value.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the method of the invention according to instructions in said program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
The invention also discloses:
a9, the method as recited in A8, wherein a convolution activation layer is further included before the first intermediate processing block of the image enhancement model.
A10, the method as in A8 or 9, wherein the activation function of the convolutional activation layer in each intermediate processing block is a ReLU function.
A11, the method of any one of A8-10, wherein the number of intermediate processing blocks is 4.
A12, the method as in any one of A8-11, wherein the result processing block comprises three convolution activation layers, wherein the activation functions of the first two convolution activation layers are ReLU functions and the activation function of the third convolution activation layer is a Tanh function.
A13, the method according to any one of A1-12, further comprising the step of generating a preset image enhancement model through training, before the step of inputting the image to be processed into the preset image enhancement model: acquiring a plurality of training image pairs, wherein each training image pair comprises an input image and a target image, the input image is an image captured by a single-lens reflex camera, and the target image is an image obtained by adjusting the input image; inputting the input image into a pre-trained image enhancement model, outputting the image after convolution processing for multiple times, calculating a loss value of the output image relative to a target image according to a preset loss function, and updating parameters of the image enhancement model until the loss value meets a preset condition, finishing training, and obtaining the trained image enhancement model as a preset image enhancement model.
A14, the method as in a13, wherein the step of inputting the input images to the pre-trained image enhancement model further comprises: for each training image pair, at least one sub-image with a preset size is cut out from the input image to be used as an input sub-image; intercepting each target sub-image from the target image according to the coordinate position of each input sub-image in the input image; and inputting the input subimages into a pre-trained image enhancement model, outputting the subimages after convolution processing for multiple times, calculating the loss value of the output subimages relative to the target subimages according to a preset loss function, and updating parameters of the image enhancement model until the loss value meets a preset condition, finishing training, and obtaining the trained image enhancement model as a preset image enhancement model.
A15, the method of a13 or 14, wherein the predetermined loss function is expressed by the following formula:
loss=λ1*color_loss+λ2*vgg_loss
where loss represents the loss value, color_loss is the first loss, vgg_loss is the second loss, and λ1 and λ2 are the corresponding weighting coefficients.
A16, the method of A15, wherein the step of calculating the first loss comprises: respectively carrying out mean value fuzzy processing on the output image and the corresponding target image to obtain a fuzzy output image and a fuzzy target image; and calculating pixel distance values of corresponding pixel points in the blurred output image and the blurred target image as a first loss.
A17, the method of A15, wherein the step of calculating the second loss comprises: respectively inputting the output image and the target image into a preset convolution network to generate respective characteristic graphs; and calculating the pixel distance value of the corresponding pixel point in the characteristic diagram of the output image and the characteristic diagram of the target image as a second loss.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense with respect to the scope of the invention, as defined in the appended claims.
Claims (16)
1. An image enhancement method, the method being performed in a computing device, the method comprising:
inputting an image to be processed into a preset image enhancement model, and outputting the image after carrying out convolution processing for multiple times;
respectively converting the image to be processed and the output image into a preset color space, wherein the preset color space is a Lab color space; and
fusing an image to be processed and an output image in a preset color space to generate an enhanced image;
wherein the step of converting the image to be processed and the output image to the predetermined color space respectively further comprises:
converting the image to be processed into a preset color space to obtain a first image to be processed, a second image to be processed and a third image to be processed on three channels;
converting the output image into a preset color space to obtain a first output image, a second output image and a third output image on three channels;
wherein the step of fusing the image to be processed and the output image in the predetermined color space to generate an enhanced image comprises:
judging the pixel value of a pixel point in the first image to be processed, and generating a first enhancement image by combining the judgment result with the first output image;
combining the second to-be-processed graph and the second output graph to generate a second enhancement graph;
combining the third to-be-processed graph and the third output graph to generate a third enhancement graph; and
and fusing the first enhancement map, the second enhancement map and the third enhancement map to generate an enhanced image.
2. The method of claim 1, wherein generating the first enhancement map in conjunction with the first output map based on the determination comprises:
if the pixel value of the pixel point in the first graph to be processed is smaller than the first threshold value, taking the pixel value of the pixel point corresponding to the first output graph as the pixel value of the pixel point corresponding to the first enhancement graph;
if the pixel value of the pixel point in the first graph to be processed is larger than the second threshold value, the pixel value of the pixel point corresponding to the first enhancement graph is generated in a first mode by combining the pixel values of the corresponding pixel points in the first graph to be processed and the first output graph; and
and if the pixel value of the pixel point in the first image to be processed is neither less than the first threshold value nor greater than the second threshold value, combining the pixel values of the corresponding pixel points in the first image to be processed and the first output image to generate the pixel value of the corresponding pixel point of the first enhancement map in a second mode.
3. The method of claim 1 or 2, wherein the step of generating a second enhancement map in combination with the second pending map and the second output map comprises:
and performing weighted calculation on the second graph to be processed and the second output graph to generate a second enhancement graph.
4. The method of claim 3, wherein the step of generating a third enhancement map in combination with the third pending map and the third output map comprises:
and performing weighted calculation on the third graph to be processed and the third output graph to generate a third enhancement graph.
5. The method of claim 1, wherein the image enhancement model comprises a plurality of intermediate processing blocks and a result processing block in sequential order, wherein,
each intermediate processing block comprises at least two convolution active layers and a jump connection layer which are connected in sequence, and the jump connection layer is suitable for adding the input of the first convolution active layer and the output of the last convolution active layer of the intermediate processing block to which the jump connection layer belongs;
the result processing block includes a plurality of convolution activation layers.
6. The method of claim 5, further comprising a convolution activation layer prior to the first intermediate processing block of the image enhancement model.
7. The method of claim 6, wherein the activation function of the convolutional activation layer in each intermediate processing block is a ReLU function.
8. The method of claim 7, wherein the number of intermediate processing blocks is 4.
9. The method of claim 8, wherein the result processing block includes three convolutional active layers, wherein the activation functions of the first two convolutional active layers are ReLU functions and the activation function of the third convolutional active layer is a Tanh function.
10. The method as claimed in claim 1, further comprising, before the step of inputting the image to be processed into the preset image enhancement model, the step of generating the preset image enhancement model by training:
acquiring a plurality of training image pairs, wherein each training image pair comprises an input image and a target image, the input image is an image captured by a single-lens reflex camera, and the target image is an image obtained by adjusting the input image;
inputting the input image into a pre-trained image enhancement model, outputting the image after convolution processing for multiple times, calculating a loss value of the output image relative to a target image according to a preset loss function, and updating parameters of the image enhancement model until the loss value meets a preset condition, finishing training, and obtaining the trained image enhancement model as a preset image enhancement model.
11. The method of claim 10, wherein the step of inputting the input image to the pre-trained image enhancement model further comprises:
for each training image pair, cropping at least one sub-image of a preset size from the input image as an input sub-image;
cropping each corresponding target sub-image from the target image according to the coordinate position of each input sub-image within the input image; and
inputting the input sub-images into the image enhancement model to be trained, obtaining output sub-images after multiple convolution operations, calculating the loss value of the output sub-images relative to the target sub-images according to the preset loss function, and updating the parameters of the image enhancement model until the loss value meets a preset condition, at which point training ends and the trained image enhancement model is taken as the preset image enhancement model.
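The key detail in claim 11 is that the target sub-image must be cropped at the same coordinate position as the input sub-image, so each training pair stays pixel-aligned. A minimal sketch of that pairing (function name, random positioning, and NumPy arrays are my own assumptions; the claim only requires matching coordinates):

```python
import numpy as np

def crop_pair(input_img, target_img, size, rng):
    # crop one input sub-image at a random position, then crop the
    # target sub-image at the SAME coordinates, as claim 11 requires
    H, W = input_img.shape[:2]
    y = int(rng.integers(0, H - size + 1))
    x = int(rng.integers(0, W - size + 1))
    return (input_img[y:y + size, x:x + size],
            target_img[y:y + size, x:x + size])
```

Training on aligned crops rather than full frames keeps memory bounded while still giving the loss function pixel-correspondent pairs.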
12. The method of claim 10 or 11, wherein the preset loss function is represented by the following formula:
loss = λ1 * color_loss + λ2 * vgg_loss
where loss is the loss value, color_loss is the first loss, vgg_loss is the second loss, and λ1 and λ2 are the corresponding weighting coefficients.
13. The method of claim 12, wherein calculating the first loss comprises:
performing mean-blur processing on the output image and the corresponding target image, respectively, to obtain a blurred output image and a blurred target image; and
calculating the pixel distance between corresponding pixel points of the blurred output image and the blurred target image as the first loss.
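The first loss of claim 13 can be sketched as follows. The box-blur kernel size, edge-replication padding, and the use of mean squared error as the "pixel distance" are my own assumptions; the claim specifies only a mean blur followed by a pixel distance. Blurring both images before comparing them makes the loss sensitive to overall color and brightness while tolerating small spatial misalignments.

```python
import numpy as np

def mean_blur(img, k=3):
    # mean (box) blur: each pixel becomes the average of its k x k
    # neighborhood; edges are padded by replication
    p = k // 2
    padded = np.pad(img.astype(float), p, mode='edge')
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def color_loss(output_img, target_img, k=3):
    # mean squared pixel distance between the two blurred images
    diff = mean_blur(output_img, k) - mean_blur(target_img, k)
    return float(np.mean(diff ** 2))
```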
14. The method of claim 12, wherein calculating the second loss comprises:
inputting the output image and the target image, respectively, into a preset convolutional network to generate respective feature maps; and
calculating the pixel distance between corresponding pixel points of the feature map of the output image and the feature map of the target image as the second loss.
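The second loss of claim 14 compares the two images in feature space rather than pixel space. The sketch below substitutes a tiny fixed convolution + ReLU stack for the "preset convolution network" (the vgg_loss name in claim 12 suggests a pretrained VGG, but the claims do not mandate one); the stack depth, kernels, and mean-squared "pixel distance" are my own illustrative assumptions.

```python
import numpy as np

def conv3x3(x, k):
    # 'same'-padded single-channel 3x3 convolution (toy substitute for
    # a layer of a real pretrained network)
    H, W = x.shape
    p = np.pad(x, 1)
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def feature_map(img, kernels):
    # pass the image through a fixed convolution + ReLU stack standing
    # in for the claim's preset convolutional network
    x = img.astype(float)
    for k in kernels:
        x = np.maximum(conv3x3(x, k), 0.0)
    return x

def vgg_loss(output_img, target_img, kernels):
    # mean squared pixel distance between the two feature maps
    diff = feature_map(output_img, kernels) - feature_map(target_img, kernels)
    return float(np.mean(diff ** 2))
```

Because the comparison happens on feature maps, this term penalizes perceptual and textural differences that a raw pixel loss would miss.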
15. A computing device, comprising:
at least one processor; and
a memory storing program instructions configured for execution by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1-14.
16. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-14.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811252617.7A CN109345487B (en) | 2018-10-25 | 2018-10-25 | Image enhancement method and computing device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN109345487A CN109345487A (en) | 2019-02-15 |
| CN109345487B true CN109345487B (en) | 2020-12-25 |
Family
ID=65312412
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811252617.7A Active CN109345487B (en) | 2018-10-25 | 2018-10-25 | Image enhancement method and computing device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109345487B (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111161175A (en) * | 2019-12-24 | 2020-05-15 | 苏州江奥光电科技有限公司 | A method and system for removing reflection components of an image |
| CN112200747B (en) * | 2020-10-16 | 2022-06-21 | 展讯通信(上海)有限公司 | Image processing method and device and computer readable storage medium |
| CN113393399A (en) * | 2021-06-22 | 2021-09-14 | 武汉云漫文化传媒有限公司 | Color designation enhancement plug-in for Maya and color enhancement method thereof |
| CN114693548B (en) * | 2022-03-08 | 2023-04-18 | 电子科技大学 | Dark channel defogging method based on bright area detection |
| CN115147310B (en) * | 2022-07-27 | 2025-04-18 | 中国科学院长春光学精密机械与物理研究所 | A fast blind deconvolution method for text images |
| CN115496965A (en) * | 2022-09-23 | 2022-12-20 | 杭州海康威视数字技术股份有限公司 | A method and device for image data enhancement and image recognition |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2063392A1 (en) * | 2007-11-24 | 2009-05-27 | Barco NV | Image processing of medical images |
| CN106683065A (en) * | 2012-09-20 | 2017-05-17 | 上海联影医疗科技有限公司 | Lab space based image fusing method |
| CN107424124B (en) * | 2017-03-31 | 2020-03-17 | 北京臻迪科技股份有限公司 | Image enhancement method and device |
| CN107563984A (en) * | 2017-10-30 | 2018-01-09 | 清华大学深圳研究生院 | A kind of image enchancing method and computer-readable recording medium |
| CN108648163A (en) * | 2018-05-17 | 2018-10-12 | 厦门美图之家科技有限公司 | A kind of Enhancement Method and computing device of facial image |
- 2018-10-25: CN201811252617.7A filed; granted as patent CN109345487B (status: Active)
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109345487B (en) | Image enhancement method and computing device | |
| CN109978788B (en) | Convolutional neural network generation method, image demosaicing method and related device | |
| CN112288658B (en) | Underwater image enhancement method based on multi-residual joint learning | |
| CN109584179A (en) | A kind of convolutional neural networks model generating method and image quality optimization method | |
| CN116681636B (en) | Light infrared and visible light image fusion method based on convolutional neural network | |
| CN113034358B (en) | A super-resolution image processing method and related device | |
| CN109934776B (en) | Model generation method, video enhancement method, device and computer-readable storage medium | |
| CN104574291B (en) | A module, method and computer-readable storage medium for chrominance processing | |
| WO2020192483A1 (en) | Image display method and device | |
| CN108038823B (en) | Training method of image morphing network model, image morphing method and computing device | |
| CN109544482A (en) | A kind of convolutional neural networks model generating method and image enchancing method | |
| CN107454284B (en) | Video denoising method and computing device | |
| CN107886516B (en) | Method and computing equipment for computing hair trend in portrait | |
| TWI520101B (en) | Method for making up skin tone of a human body in an image, device for making up skin tone of a human body in an image, method for adjusting skin tone luminance of a human body in an image, and device for adjusting skin tone luminance of a human body in | |
| JP7504120B2 (en) | High-resolution real-time artistic style transfer pipeline | |
| CN114240767A (en) | Image wide dynamic range processing method and device based on exposure fusion | |
| WO2017202244A1 (en) | Method and device for image enhancement and computer storage medium | |
| CN120092246A (en) | Neural network training method and device, image processing method and device | |
| CN110717864B (en) | An image enhancement method, device, terminal equipment and computer-readable medium | |
| CN109840912B (en) | Method for correcting abnormal pixels in image and computing equipment | |
| Zheng et al. | Windowing decomposition convolutional neural network for image enhancement | |
| US9836827B2 (en) | Method, apparatus and computer program product for reducing chromatic aberrations in deconvolved images | |
| CN112561822B (en) | Beautifying method and device, electronic equipment and storage medium | |
| WO2016026072A1 (en) | Method, apparatus and computer program product for generation of extended dynamic range color images | |
| CN109978136B (en) | Method for training target network, computing equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |




