CN113963007A - Image processing method and device - Google Patents
- Publication number
- CN113963007A (application CN202111407845.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- sub
- brightness
- images
- original image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/11—Image analysis; Segmentation; Region-based segmentation
- G06N3/006—Artificial life, i.e. computing arrangements simulating life, based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T7/13—Edge detection
- G06T2207/20028—Bilateral filtering
Abstract
The application provides an image processing method and device in the technical field of image processing. The method includes: acquiring an original image; segmenting the original image into at least two sub-images; extracting image information from each sub-image, where the image information includes brightness information within the sub-image and brightness information between the sub-image and other sub-images; and adjusting the brightness of the original image according to the image information in each sub-image. By segmenting the original image, processing the brightness value of each sub-image, and applying brightness compensation to each sub-image, the method and device improve the definition of the whole image and bring it closer to the real scene.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
With the development of camera technology, the pictures a camera captures are increasingly close to the real scene. However, picture quality depends not only on the camera's hardware and software but also on the user's photographing skill: if the shooting angle is poor, part or all of the picture may be over- or under-exposed, blurring its content so that the picture no longer faithfully reflects the current scene. Since every user cannot be expected to improve their photographing skill and become an expert photographer, the exposure problems caused by a poor shooting angle and orientation can only be addressed from the camera's hardware and software side.
Disclosure of Invention
In order to solve the above problem, embodiments of the present application provide an image processing method and apparatus, which implement brightness compensation on each sub-image by segmenting an original image and then processing brightness values of the sub-images, so as to improve the definition of the whole image and make the image closer to a real image.
Therefore, the following technical scheme is adopted in the embodiment of the application:
in a first aspect, an embodiment of the present application provides an image processing method, including: acquiring an original image; segmenting the original image into at least two sub-images; extracting image information in each sub-image, wherein the image information comprises brightness information in the image and brightness information between the image and other images; and adjusting the brightness of the original image according to the image information in each sub-image.
In one embodiment, before the segmenting of the original image into at least two sub-images, the method further comprises: inputting the original image into a bilateral filtering algorithm to obtain a smoothed, noise-reduced original image.
In one embodiment, the dividing the original image into at least two sub-images comprises: inputting the original image into a brightness channel of an image brightness-chromaticity color space to obtain at least two brightness levels of the original image, wherein each brightness level is divided according to a brightness value; and combining the areas with the same brightness level in the original image to obtain the at least two sub-images.
In one embodiment, the method further comprises: determining a brightness difference value within each sub-image and a brightness difference value at the edge between two adjacent sub-images; and when the brightness difference value at the edge between two adjacent sub-images is not larger than the brightness difference value within each of the two adjacent sub-images, merging the two adjacent sub-images into one sub-image.
In one embodiment, the extracting image information in each sub-image includes: and inputting the at least two sub-images into a Canny edge operator algorithm to obtain the image edge brightness information in each sub-image.
In one embodiment, when the original image is an underexposed image, before the at least two sub-images are input into a Canny edge operator algorithm to obtain the image edge brightness information in each sub-image, the method comprises: inputting the at least two sub-images into a gamma transformation algorithm, and increasing the gray value of each of the at least two sub-images. When the original image is an overexposed image, before the at least two sub-images are input into a Canny edge operator algorithm to obtain the image edge brightness information in each sub-image, the method comprises: inputting the at least two sub-images into a gamma transformation algorithm, and reducing the gray value of each of the at least two sub-images.
In one embodiment, the extracting image information in each sub-image includes: inputting the original image into a Markov random field energy function, and calculating the number of sub-images; inputting the at least two sub-images and the number of the sub-images into a particle swarm optimization algorithm, and calculating the average brightness value of each sub-image; and calculating the brightness variation of each sub-image according to the brightness level interval difference corresponding to each sub-image and the average brightness value of each sub-image.
In one embodiment, the adjusting the brightness of the original image according to the image information in the sub-images includes: inputting the brightness variation of each sub-image into a least square method, and calculating a global mapping curve of the original image; and adjusting the brightness of the original image according to the global mapping curve of the original image.
In one embodiment, the adjusting the brightness of the original image includes: and adjusting the brightness L channel of the original image in the hue saturation brightness HSL channel of the image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: the receiving and sending unit is used for acquiring an original image; a processing unit for segmenting the original image into at least two sub-images; extracting image information in each sub-image, wherein the image information comprises brightness information in the image and brightness information between the image and other images; and adjusting the brightness of the original image according to the image information in each sub-image.
In one embodiment, the processing unit is further configured to input the original image to a bilateral filtering algorithm, so as to obtain an original image that is smooth and noise-reduced.
In one embodiment, the processing unit is specifically configured to input the original image into a luminance channel of an image luminance-chrominance color space, to obtain at least two luminance levels of the original image, where each luminance level is divided according to a luminance value; and combining the areas with the same brightness level in the original image to obtain the at least two sub-images.
In one embodiment, the processing unit is further configured to determine a brightness difference value within each sub-image and a brightness difference value at the edge between two adjacent sub-images; and when the brightness difference value at the edge between two adjacent sub-images is not larger than the brightness difference value within each of the two adjacent sub-images, merge the two adjacent sub-images into one sub-image.
In an embodiment, the processing unit is specifically configured to input the at least two sub-images into a Canny edge operator algorithm to obtain image edge luminance information in each sub-image.
In one embodiment, when the original image is an underexposed image, the processing unit is further configured to input the at least two sub-images into a gamma transformation algorithm and increase the gray value of each of the at least two sub-images; when the original image is an overexposed image, the processing unit is further configured to input the at least two sub-images into a gamma transformation algorithm and reduce the gray value of each of the at least two sub-images.
In an embodiment, the processing unit is specifically configured to input the original image into a markov random field energy function, and calculate the number of sub-images; inputting the at least two sub-images and the number of the sub-images into a particle swarm optimization algorithm, and calculating the average brightness value of each sub-image; and calculating the brightness variation of each sub-image according to the brightness level interval difference corresponding to each sub-image and the average brightness value of each sub-image.
In an embodiment, the processing unit is specifically configured to input the luminance variation of each sub-image into a least square method, and calculate a global mapping curve of the original image; and adjusting the brightness of the original image according to the global mapping curve of the original image.
In one embodiment, the processing unit is specifically configured to adjust the brightness L channel of the original image among the hue saturation lightness (HSL) channels of the image.
In a third aspect, an embodiment of the present application provides a terminal device, including: at least one transceiver, at least one memory, and at least one processor, the processor being configured to perform the method of any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed in a computer, the program causes the computer to perform the method of any possible implementation of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product storing instructions that, when executed by a computer, cause the computer to implement the method of any possible implementation of the first aspect.
In the embodiment of the present application, the image processing pipeline consists of: image input, image segmentation, intra-region and inter-region information extraction, inter-region brightness estimation, brightness-level change statistics, brightness-level change curve fitting, detail adjustment, and image output. First, the brightness of the acquired image is evenly divided into ten levels, and the whole image is divided into several sub-images. The image is then converted into region-level information for acquisition and processing: according to the brightness values of each region before and after compensation, and taking the preservation of image edge details as the objective, a Markov random field energy function of the image is defined, and the optimal average brightness value of each region is obtained through a particle swarm optimization algorithm. Next, the brightness-value amplification of the image at each brightness level is corrected using the regional statistics. Finally, a brightness mapping curve of the image is obtained by least-squares fitting, and the brightness of the different regions of the image is compensated and adjusted to recover the image details.
Drawings
The drawings that accompany the detailed description can be briefly described as follows.
Fig. 1 is a flow chart of an image processing method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method provided in an embodiment of the present application;
FIG. 3(a) is an under-exposed original image provided in an embodiment of the present application;
FIG. 3(b) is an overexposed original image provided in an embodiment of the present application;
FIG. 4 is a diagram illustrating brightness grading provided in an embodiment of the present application;
fig. 5 is a gray scale diagram of an original image divided into a plurality of sub-images according to pixel points according to the embodiment of the present application;
fig. 6 is a gray scale diagram of an original image of a plurality of sub-images after merging partial pixel blocks provided in the embodiment of the present application;
fig. 7(a) is a gray scale diagram of an original image obtained by dividing a normal image into a plurality of sub-images provided in the embodiment of the present application;
FIG. 7(b) is a gray scale diagram of an original image obtained by dividing an overexposed image into a plurality of sub-images according to an embodiment of the present application;
FIG. 7(c) is a gray scale diagram of an original image obtained by dividing an underexposed image provided in an embodiment of the present application into a plurality of sub-images;
FIG. 8 is a graph of the variation of brightness levels provided in the examples of the present application;
FIG. 9 is a graph of a luminance map provided in an embodiment of the present application;
fig. 10 is an original image obtained by performing brightness adjustment on the original image in fig. 3(b) according to an embodiment of the present application;
FIG. 11 is a block diagram illustrating an architecture of an image processing apparatus according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a terminal device in an embodiment of the present application.
Detailed Description
The term "and/or" herein describes an association between associated objects and indicates that three relationships may exist: for example, A and/or B may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The symbol "/" herein denotes an "or" relationship between the associated objects; for example, A/B denotes A or B.
The terms "first" and "second," and the like, in the description and in the claims herein are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first response message and the second response message, etc. are for distinguishing different response messages, not for describing a specific order of the response messages.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units, or the like; plural elements means two or more elements, and the like.
Fig. 1 and fig. 2 are flowcharts of an image processing method provided in an embodiment of the present application. As shown in the figure, the method is implemented by the following specific steps:
in step S201, an original image is acquired.
In the present application, the obtained original image is generally captured by a user. Depending on the user's photographing skill, the obtained original image may be under-exposed, as shown in fig. 3(a), or over-exposed, as shown in fig. 3(b). Whether the original image is over-exposed or under-exposed, it cannot reflect the real scene, nor can it show the details in the image. The following description of the technical solution of the present application takes the overexposed image in fig. 3(b) as an example; the application is not limited thereto, and the solution is equally able to handle the underexposed image in fig. 3(a).
Step S202, the original image is divided into at least two sub-images.
The original image needs to be preprocessed before it is segmented. Illustratively, bilateral filtering can be used to smooth the original image, reducing its noise and preventing excessive noise from degrading the subsequent segmentation of the original image. Because bilateral filtering weights samples by both spatial proximity and pixel similarity, taking spatial-domain information and gray-level similarity into account simultaneously, smoothing the original image in this way smooths the image while preserving its edges.
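The edge-preserving behavior of bilateral filtering can be illustrated with a minimal pure-Python sketch; the patent does not specify a kernel size or the two sigmas, so the values below are illustrative only:

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Smooth a grayscale image (list of lists, values in [0, 1]) while
    preserving edges: each output pixel is a weighted mean of its
    neighbours, weighted by both spatial distance (sigma_s) and
    intensity similarity (sigma_r), so pixels across a large intensity
    jump (an edge) contribute almost nothing."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wr = math.exp(-((img[ny][nx] - img[y][x]) ** 2) / (2 * sigma_r ** 2))
                        acc += ws * wr * img[ny][nx]
                        norm += ws * wr
            out[y][x] = acc / norm
    return out

# A hard vertical step edge survives filtering: the two flat halves stay flat.
step = [[0.1, 0.1, 0.9, 0.9] for _ in range(4)]
smoothed = bilateral_filter(step)
```

With these sigmas the cross-edge range weight is practically zero, which is exactly the "smooth the image while keeping the edge effect" property the paragraph above relies on.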
After the original image is processed, its brightness values can be divided into N levels according to the maximum and minimum brightness of the pixels in the original image, where N is a positive integer greater than 1, so that different sub-images can later be delimited by brightness level during segmentation. Illustratively, with reference to fig. 4, the original image can be input into the luminance channel Y of the image luminance-chrominance (Y-UV) color space and normalized to the range 0-1, and the luminance then evenly divided into 10 levels Ω = (Ω1, Ω2, Ω3, Ω4, Ω5, Ω6, Ω7, Ω8, Ω9, Ω10), each brightness level Ω covering a luminance range of 0.1. The lower the brightness level Ω, the smaller the brightness values of its pixels; the higher the brightness level Ω, the larger the brightness values of its pixels.
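The grading step can be sketched as follows; the BT.601-style luma weights used for the Y channel are an assumption, since the patent does not specify the exact Y-UV conversion:

```python
def rgb_to_luma(r, g, b):
    # BT.601 luma approximation for the Y channel of Y-UV; inputs in [0, 1].
    return 0.299 * r + 0.587 * g + 0.114 * b

def brightness_level(y, n_levels=10):
    """Map a normalised luminance y in [0, 1] to one of n_levels equal
    bands; each band spans a luminance range of 1 / n_levels (0.1 here)."""
    return min(int(y * n_levels), n_levels - 1)  # clamp y == 1.0 into the top band

assert brightness_level(0.05) == 0   # darkest band
assert brightness_level(0.95) == 9   # brightest band
```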
In the present application, the original image is divided into at least two sub-images based on the dissimilarity between pixel points. For example, the original image generally exists as a Red Green Blue (RGB) image, and a Minimum Spanning Tree (MST) can be used to merge pixel points with the same or similar brightness values in the original image into a pixel block Cn, each of which is a sub-image. Performing this merging of like pixel points on the image shown in fig. 3(b) yields the image shown in fig. 5, where the shaded blocks of different colors each represent a sub-image.
Optionally, for one pixel block Cn, the difference between the brightness values of the pixels on all edges inside the block may be called the intra-class difference; for different pixel blocks Cn, the difference between the brightness values of the pixels on the edge between two adjacent blocks may be called the inter-class difference. If the inter-class difference is smaller than the intra-class difference, the two adjacent pixel blocks Cn may be merged, which effectively reduces the number of pixel blocks Cn and hence the number of sub-images, lowering the workload of subsequent data processing. In the present application, merging the pixel blocks Cn with higher similarity in fig. 5 yields the image shown in fig. 6, where the shaded blocks of different colors each represent one pixel block Cn, that is, one sub-image.
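The merging criterion can be sketched as below. Representing a region by a flat list of its edge-pixel luminances, and its intra-class difference by the spread (max minus min) of those values, is a simplification for illustration; the patent does not prescribe this exact statistic:

```python
def intra_class_diff(region):
    """Internal brightness variation of a region, here taken as the
    spread of the luminance samples along its internal edges."""
    return max(region) - min(region)

def should_merge(region_a, region_b, inter_class_diff):
    """Merge two adjacent regions when the brightness difference across
    their shared edge does not exceed the internal variation of either
    region, mirroring the inter-class vs. intra-class test above."""
    return inter_class_diff <= min(intra_class_diff(region_a),
                                   intra_class_diff(region_b))
```

For example, two regions whose interiors already vary by 0.1 can absorb a 0.05 boundary difference, while two nearly uniform regions separated by a 0.3 jump stay distinct.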
In step S203, image information in each sub-image is extracted. The image information includes brightness information in the image and brightness information between the image and other images.
Step S204, adjusting the brightness of the original image according to the image information in each sub-image.
For a normally shot image, as shown in fig. 7(a), with no over- or under-exposure, the contour of every object in the image is clear and can be directly extracted with an image contour extraction algorithm, recovering essentially all contours in the image. If the image is under-exposed, as shown in fig. 7(c), it contains many sub-images of low brightness level Ω, and because the objects are dark their displayed contours are blurred or not displayed at all, so not all object contours can be extracted. If the image is over-exposed, as shown in fig. 7(b), it contains many sub-images of high brightness level Ω, and the highlighted objects appear washed out, so their contours cannot be clearly presented and not all object contours can be recovered.
In the present application, to recover the contours of all objects in an over- or under-exposed image, all sub-images can be input into a gamma transformation algorithm, which compensates the over- or under-exposed areas so that every sub-image becomes a normally exposed image; the contours of all objects in the sub-images are then extracted with an image contour extraction algorithm.
Illustratively, when a sub-image is under-exposed, gamma transformation with γ < 1 maps its narrow range of low gray values onto a wider range of gray values, enhancing contrast so that more low-gray details become visible, and the number of edges detected by the Canny edge operator increases compared with before the transformation. Similarly, when a sub-image is over-exposed, gamma transformation with γ > 1 maps its narrow range of high gray values onto a wider range, after which the Canny operator detects more edges in the exposed part.
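On normalised values in [0, 1], the gamma transform is a one-line power law; this sketch just demonstrates the direction of the correction described above:

```python
def gamma_transform(y, gamma):
    """Apply y ** gamma to a normalised luminance value in [0, 1].
    gamma < 1 lifts and spreads low (dark) values, revealing detail in
    under-exposed regions; gamma > 1 lowers and spreads high values,
    recovering detail in over-exposed regions."""
    return y ** gamma

assert gamma_transform(0.1, 0.5) > 0.1   # under-exposure: dark value lifted
assert gamma_transform(0.9, 2.0) < 0.9   # over-exposure: bright value lowered
```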
Whether a sub-image is a normal, under-exposed, or over-exposed image can be determined from its gray values. Optionally, if the gray value of a sub-image is smaller than a first set threshold, the sub-image may be considered under-exposed; if the gray value of a sub-image is larger than a second set threshold, the sub-image may be considered over-exposed; if the gray value of a sub-image lies between the first and second set thresholds, the sub-image may be considered normal.
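A sketch of this classification step; the patent only refers to a "first" and "second" set threshold, so the 0.25 / 0.75 values below are illustrative assumptions:

```python
def classify_exposure(mean_gray, low=0.25, high=0.75):
    """Label a sub-image by its mean normalised gray value, using two
    thresholds in place of the patent's unspecified first and second
    set thresholds."""
    if mean_gray < low:
        return "underexposed"
    if mean_gray > high:
        return "overexposed"
    return "normal"
```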
In order to obtain the statistics of the brightness-level variation, the brightness values of the sub-images need to be estimated first, and the number of sub-images after segmentation of the original image can be calculated using a Markov random field energy function (1), wherein L_new represents the number of sub-images of the original image after segmentation.
In order to quickly obtain the optimal solution of energy equation (1), a particle swarm optimization method can be used to solve for the optimal average brightness value of each sub-image. After the optimal sub-image brightness values are found, exposure correction is performed using the optimal mean gray value of each sub-image as a reference point, with the optimized values mapped directly onto each sub-image. However, because smaller sub-images carry little information weight, their luminance values may change abruptly or even become distorted, whereas a mapping based on the brightness-level variation has good global information and does not distort. Therefore, the luminance change Δl_i of each sub-image within the 10 brightness levels Ω_j can be counted statistically, giving the variation of each brightness level Ω = (Ω_1, Ω_2, …, Ω_10):
ΔΩ = (ΔΩ_1, ΔΩ_2, …, ΔΩ_10)    (2).
The luminance change Δl_i of a sub-image is the difference between the initial average brightness value of the sub-image and its re-evaluated optimal average brightness value, that is:
wherein the variation ΔΩ_j of an image brightness level is the change of the brightness values within each level range (of width 0.1) of the image's 0-1 brightness range. According to the initial average gray value l_i ∈ (0, 1) of each sub-image, the brightness level to which the sub-image belongs can be determined; by counting the total brightness change Σ Δl_i of the sub-images contained in a brightness level and the number c_j of sub-images in that level, the variation ΔΩ_j of each brightness level can be calculated, specifically:
ΔΩ_j = (Σ_{i∈Ω_j} Δl_i) / c_j    (4)
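The per-level averaging just described can be sketched directly, given each sub-image's brightness change and the index of the band its initial mean falls into:

```python
def level_variations(sub_deltas, sub_levels, n_levels=10):
    """Average the per-sub-image brightness changes within each of the
    n_levels brightness bands: delta_omega[j] = sum(dl_i) / c_j, where
    c_j counts the sub-images whose initial mean falls in band j.
    Bands with no sub-images keep a variation of 0."""
    sums = [0.0] * n_levels
    counts = [0] * n_levels
    for dl, j in zip(sub_deltas, sub_levels):
        sums[j] += dl
        counts[j] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```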
since Ω is a luminance interval as defined before and not a specific value, in order to validate the data points as the curve to be fitted, we make (Q, Δ Ω) { (Q, Δ Ω)1,ΔΩ1),(Q2,ΔΩ2),…,(Qn,ΔΩn) The data points data of the brightness level variation are calculated, and then each brightness level omega is calculated according to the formula (5)iThe average value of the original luminance of the sub-image in (5) is:
then, each brightness level omega calculated by the formula (5) is calculatediThe average value of the original brightness of the sub-images in the image space is simulatedThen, a luminance gradation change curve as shown in fig. 8 was obtained. Then, in combination with the luminance level variation curve fitted in fig. 8, a global mapping curve of the original image is calculated by formula (6), where formula (6) is:
g(x) = x + g_Δ(x)    (6)
where g(x) is the brightness of the output image, x is the brightness value of the input image, and g_Δ(x) is the change of the image brightness value, which can be obtained from equation (7):
wherein k_2, k_3 and k_5, k_6 control the correction ranges of the shadow and highlight regions, respectively.
Because g_Δ(x) is the brightness-level variation curve, six parameters k = (k_1, k_2, k_3, k_4, k_5, k_6) must be found that minimize the error ε = (g_Δ(Q, k) − ΔΩ)² between the data points (Q, ΔΩ) and the target value g_Δ(Q, k). g_Δ(x) could thus be fitted by the least-squares method; however, a model obtained by considering error minimization alone may overfit or be hard to solve, i.e., predict poorly. To address this, a regularization term constraining the parameter k is commonly added to the objective function so that the fitted model does not become too complex. Therefore, least-squares fitting with a regularization constraint can be adopted to minimize the error, specifically:
the former term is a punishment term of the data fitting degree, the better the data fitting is, the smaller the term value is, however, the sample data can be fitted too much, so that the model is too complex; the latter term is a penalty term for the complexity of the model, which term has a larger value when the model is more complex.
Minimizing the objective function f(k) lets the model fit the data without becoming overly complex, which is precisely the effect of adding a regularization term to the objective of the basic least-squares method. Combining the regularization term and solving with a Newton iteration method gives:
k^(t+1) = k^(t) + (J^T J + 0.1 × I)^(−1) × J^T × (ΔΩ − g_Δ(Q, k^(t)))    (9)
where t denotes the t-th iteration, I is the 6 × 6 identity matrix, the initial value for the iteration is k^(0) = (0, 1, 51.6, 0, 1, 51.6), and J is the Jacobian matrix of g_Δ(Q, k) with respect to the parameter k:
the iteration times t are set or the iteration is stopped when the error epsilon is smaller than a certain value, and the luminance level change curve g delta (x) of the input image is fitted by using the t-th generation parameter k (t) obtained by the formula (9), so that a global mapping curve g (x) of the image luminance is obtained, as shown in fig. 9.
After the global mapping curve of the image is obtained, the image can be adjusted according to the curve. Since directly adjusting the RGB channels of the image would affect its saturation, hue, and so on, ultimately distorting the image, the L (lightness) channel can be adjusted on its own among the HSL channels of the image.
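Per-pixel, this lightness-only adjustment can be sketched with the standard-library colorsys module (which uses the HLS ordering of the same color space):

```python
import colorsys

def adjust_lightness(r, g, b, curve):
    """Adjust only the lightness channel of a pixel in HLS space,
    leaving hue and saturation untouched, then convert back to RGB.
    `curve` is any mapping on [0, 1], e.g. a fitted global mapping
    curve g(x); its output is clamped back into [0, 1]."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return colorsys.hls_to_rgb(h, min(max(curve(l), 0.0), 1.0), s)

# Brightening a dark red keeps its hue: red stays the dominant channel.
r, g, b = adjust_lightness(0.3, 0.05, 0.05, lambda l: l + 0.2)
```

Adjusting L alone is what avoids the hue and saturation shifts that a direct RGB adjustment would introduce.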
In addition, since some high-frequency information of the original image is lost when the image is transformed by the mapping curve, the finally transformed image can be taken as f(x) = g(x) + Δx in order to keep the details of the original image in the output image, where Δx = x − q(x) and q(x) is the edge-preserving bilateral-filtered version of the original image. An example result is shown in fig. 10.
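The detail-restoration formula is a one-liner once g and q are available as callables; here both are placeholder functions standing in for the fitted mapping curve and the bilateral-filtered image:

```python
def restore_detail(x, g, q):
    """f(x) = g(x) + (x - q(x)): add back the high-frequency residual
    between the original pixel x and its edge-preserving smoothed
    value q(x), so texture removed by the global mapping g is
    reinstated in the output."""
    return g(x) + (x - q(x))
```

For instance, with a mapping that brightens by 0.2 and a smoothed value of 0.5, an original pixel of 0.6 carries its 0.1 residual through into the output.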
In the embodiment of the present application, the image processing steps are: image input, image segmentation, intra-region and inter-region information extraction, inter-region brightness value estimation, brightness level change statistics, brightness level change curve fitting, detail adjustment, and image output. First, the brightness of the acquired image is divided evenly into ten levels, and the whole image is divided into several sub-images. The image is then converted into region-level information for acquisition and processing; taking the edge details of the image as the target, a Markov random field energy function of the image is defined from the region brightness values before and after compensation, and the optimal average brightness value of each region is obtained through a particle swarm optimization algorithm. Next, the brightness value gain of the image at each brightness level is corrected using the region statistics. Finally, the brightness mapping curve of the image is obtained by least squares fitting, and the brightness of the different image regions is compensated and adjusted to recover the image details.
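Illustratively, the end-to-end flow of the steps above can be sketched as a toy pipeline. Every helper rule here (the simple "nudge toward 0.5" in place of the MRF/PSO estimation, and the quadratic fit) is an assumption for illustration, not the patented algorithm itself:

```python
import numpy as np

def enhance_brightness(img, levels=10):
    """Toy end-to-end sketch: quantize luminance into ten levels, treat each
    level as a region, pull each region's mean toward 0.5, fit a global curve
    through the (old_mean, new_mean) points, and apply it to the image."""
    edges = np.linspace(0.0, 1.0, levels + 1)
    labels = np.clip(np.digitize(img, edges) - 1, 0, levels - 1)  # level per pixel
    xs, ys = [], []
    for r in range(levels):
        mask = labels == r
        if not mask.any():
            continue
        old = img[mask].mean()
        new = old + 0.5 * (0.5 - old)        # nudge region mean toward 0.5
        xs.append(old)
        ys.append(new)
    coeffs = np.polyfit(xs, ys, deg=min(2, len(xs) - 1))  # global mapping curve
    return np.clip(np.polyval(coeffs, img), 0.0, 1.0)

img = np.array([[0.05, 0.1], [0.8, 0.9]])
out = enhance_brightness(img)
print(np.round(out, 3))
```

Dark pixels are raised and bright pixels lowered, compressing the image toward mid-gray, which is the qualitative behavior the embodiment describes.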
Experimental results on images processed with this technical solution show that the algorithm improves the visual display effect of the image, making the image more realistic and its details clearer. An application experiment on light intersection image processing shows that the technical solution can effectively solve image underexposure and overexposure and recover the details of the overexposed and underexposed areas of the image.
Fig. 11 is a schematic diagram illustrating an architecture of an image processing apparatus according to an embodiment of the present application. As shown in fig. 11, the apparatus 1100 includes a transceiver unit 1101 and a processing unit 1102. The cooperative working process among all units is as follows:
the transceiving unit 1101 is configured to acquire an original image; the processing unit 1102 is configured to divide the original image into at least two sub-images; extracting image information in each sub-image, wherein the image information comprises brightness information in the image and brightness information between the image and other images; and adjusting the brightness of the original image according to the image information in each sub-image.
In one embodiment, the processing unit 1102 is further configured to input the original image to a bilateral filtering algorithm to obtain a smoothed, noise-reduced original image.
In one embodiment, the processing unit 1102 is specifically configured to input the original image into a luminance channel of an image luminance-chrominance color space, to obtain at least two luminance levels of the original image, where each luminance level is divided according to a luminance value; and combining the areas with the same brightness level in the original image to obtain the at least two sub-images.
In one embodiment, the processing unit 1102 is further configured to determine a brightness difference value within each sub-image and a brightness difference value at the edge between two adjacent sub-images; and when the brightness difference value at the edge between the two adjacent sub-images is not larger than the brightness difference value within each of the two adjacent sub-images, merge the two adjacent sub-images into one sub-image.
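Illustratively, the merge criterion just described can be sketched as a predicate over two regions and their boundary pixels (the use of min/max range as the "internal brightness difference" is an assumption for illustration):

```python
import numpy as np

def should_merge(region_a, region_b, edge_a, edge_b):
    """Merge rule sketched from the text: if the brightness difference across
    the shared edge is no larger than the internal brightness variation of
    either region, the two regions are treated as one."""
    internal_a = region_a.max() - region_a.min()
    internal_b = region_b.max() - region_b.min()
    edge_diff = abs(edge_a.mean() - edge_b.mean())
    return edge_diff <= min(internal_a, internal_b)

a = np.array([0.40, 0.45, 0.50])   # region A pixels
b = np.array([0.48, 0.52, 0.55])   # region B pixels
# edge pixels on each side of the shared boundary
print(should_merge(a, b, a[-1:], b[:1]))   # similar edges -> merge
```

A sharp jump at the boundary relative to either region's internal variation keeps the regions separate.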
In an embodiment, the processing unit 1102 is specifically configured to input the at least two sub-images into a Canny edge operator algorithm, so as to obtain the image edge brightness information in each sub-image.
In one embodiment, when the original image is an underexposed image, the processing unit 1102 is further configured to input the at least two sub-images into a gamma transformation algorithm to increase the gray-level value of each of the at least two sub-images; when the original image is an overexposed image, the processing unit is further configured to input the at least two sub-images into a gamma transformation algorithm to reduce the gray-level value of each of the at least two sub-images.
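Illustratively, the gamma transformation is the standard power-law mapping on a normalized image; the direction of the adjustment follows from the exponent:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Power-law (gamma) transform on a [0, 1] image: gamma < 1 raises gray
    values (for underexposure), gamma > 1 lowers them (for overexposure)."""
    return np.power(np.clip(img, 0.0, 1.0), gamma)

dark = np.array([0.04, 0.16, 0.36])
print(np.round(gamma_correct(dark, 0.5), 2))   # brightened: [0.2, 0.4, 0.6]
```

With gamma = 0.5 every gray value is raised toward 1, which lifts the detail of underexposed sub-images before edge extraction.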
In an embodiment, the processing unit 1102 is specifically configured to input the original image into a Markov random field energy function and calculate the number of sub-images; input the at least two sub-images and the number of sub-images into a particle swarm optimization algorithm and calculate the average brightness value of each sub-image; and calculate the brightness variation of each sub-image according to the brightness level interval difference corresponding to each sub-image and the average brightness value of each sub-image.
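Illustratively, a minimal particle swarm optimizer can be sketched as follows. The cost function here is a toy stand-in for the patent's energy (it pushes region means toward 0.5 while staying near their initial values); the swarm parameters are conventional choices, not values from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(cost, dim, n_particles=30, iters=100, bounds=(0.0, 1.0)):
    """Minimal particle swarm optimization: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend both."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# Toy stand-in energy: push three region means toward 0.5 while staying
# close to their initial values.
init = np.array([0.2, 0.5, 0.8])
cost = lambda m: np.sum((m - 0.5) ** 2) + np.sum((m - init) ** 2)
best = pso_minimize(cost, dim=3)
print(np.round(best, 2))   # near [0.35, 0.5, 0.65]
```

For this quadratic energy the analytic optimum is the midpoint between each initial mean and 0.5, which the swarm approaches closely.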
In an embodiment, the processing unit 1102 is specifically configured to input the luminance variation of each sub-image into a least square method, and calculate a global mapping curve of the original image; and adjusting the brightness of the original image according to the global mapping curve of the original image.
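Illustratively, fitting a global mapping curve through the per-level brightness changes by least squares can be sketched with a polynomial fit (the data values and the quadratic model are assumptions for illustration):

```python
import numpy as np

# Data points: (brightness level center, measured brightness change), as the
# statistics step would produce; the values here are synthetic.
levels = np.array([0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95])
delta = 0.3 * levels * (1 - levels)          # synthetic per-level change

coeffs = np.polyfit(levels, delta, deg=2)    # least-squares quadratic fit
curve = np.poly1d(coeffs)

# Global mapping: original brightness plus fitted change.
g = lambda x: x + curve(x)
print(round(g(0.5), 3))   # 0.5 + 0.3*0.25 = 0.575
```

The brightness of the original image would then be adjusted by evaluating g at every pixel, as the embodiment states.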
In one embodiment, the processing unit 1102 is specifically configured to adjust the original image in a luminance L channel of a hue saturation luminance HSL channel of the image.
Fig. 12 is a schematic structural diagram of a terminal device provided in an embodiment of the present application. As shown in fig. 12, the terminal apparatus 1200 includes a transceiver 1201, a memory 1202, a processor 1203, and a bus 1204. The transceiver 1201, the memory 1202, and the processor 1203 are communicatively connected by a bus 1204, respectively, to achieve mutual communication.
The transceiver 1201 can, among other things, enable input (reception) and output (transmission) of signals. For example, the transceiver 1201 may include a transceiver or a radio frequency chip. The transceiver 1201 may also include a communication interface. For example, the terminal device 1200 may receive a control instruction sent by an external device such as a mobile phone, a camera, a cloud, and the like through the transceiver 1201, and may also send an execution instruction to another device through the transceiver 1201.
The memory 1202 may store programs (which may also be instructions or code) executable by the processor 1203 to cause the processor 1203 to perform the functions shown in figs. 1-10. Optionally, data may also be stored in the memory 1202. For example, the processor 1203 may read data stored in the memory 1202; the data may be stored at the same memory address as the program, or at a different memory address from the program. In this embodiment, the processor 1203 and the memory 1202 may be provided separately or integrated together, for example on a single board or a system on chip (SOC).
The processor 1203 may be a general purpose processor or a special purpose processor. For example, the processor 1203 may include a Central Processing Unit (CPU) and/or a baseband processor.
Illustratively, the specific working process of the processor 1203 can be divided into: image input, image segmentation, intra-region and inter-region information extraction, inter-region brightness value estimation, brightness level change statistics, brightness level change curve fitting, detail adjustment, and image output. Image segmentation divides the whole image into several sub-regions for analysis and processing, where the pixels within each sub-region should have brightness values as similar as possible. Intra-region and inter-region information extraction enhances the information within each segmented region to obtain the detail increment inside the region, and obtains the inter-region contrast relationship from the difference between the average brightness value of each region and that of its neighbors after segmentation. Region brightness value estimation re-estimates a new optimal average brightness value for every region, so that the region brightness values approach 0.5 as closely as possible and an optimal brightness average is obtained. Brightness level change statistics obtains the variation of each brightness level by counting the brightness variation of the regions within the 10 brightness levels of the image. Brightness level change curve fitting uses the data points of brightness level change to obtain the brightness change curve of the whole image by least squares fitting; after the global mapping curve of the image is obtained, the image is adjusted according to the curve to produce the output image.
Illustratively, the processor 1203 may divide the image into several regions based on the dissimilarity between pixels. Bilateral filtering is used to smooth the image before segmentation, reducing the interference of noise points on the segmentation result; because it takes both spatial-domain information and gray-level similarity into account, it can smooth the image while preserving edges.
Illustratively, for information region extraction the processor 1203 mainly applies the gamma transformation and the Canny edge operator to process the picture: a well-exposed picture is obtained by adjusting the gamma transformation for underexposure or overexposure, and the Canny edge operator is then applied to extract the image edges.
Illustratively, for region brightness value estimation the processor 1203 mainly uses a Markov random field energy function to solve for the number of regions after image segmentation, and then solves for the optimal average brightness value of each region using particle swarm optimization.
Illustratively, the processor 1203 obtains the variation of each brightness level by counting the brightness variation of each region within the 10 brightness levels of the image, where the brightness variation of a region is the difference between the region's initial average brightness value and its re-estimated optimal average brightness value.
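This per-level statistic can be sketched as follows; the averaging of per-region changes within each level is an assumption for illustration, and the sample values are made up:

```python
import numpy as np

def level_variation(initial_means, optimal_means, labels, levels=10):
    """For each of the `levels` brightness levels, average the per-region
    change (optimal mean minus initial mean) over the regions in that level."""
    change = np.asarray(optimal_means) - np.asarray(initial_means)
    out = np.zeros(levels)
    for lv in range(levels):
        mask = np.asarray(labels) == lv
        if mask.any():
            out[lv] = change[mask].mean()
    return out

initial = [0.12, 0.18, 0.55]   # initial region means
optimal = [0.20, 0.30, 0.50]   # re-estimated optimal means
labels  = [1, 1, 5]            # brightness level of each region
print(np.round(level_variation(initial, optimal, labels), 3))
```

The resulting ten values are exactly the data points that the curve-fitting step consumes.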
Illustratively, the processor 1203 fits the brightness level variation curve of the image by least squares to find a global mapping curve of the image. When the image is adjusted according to the curve, the L channel is independently adjusted on the HSL channel of the image, wherein the L channel is brightness.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the terminal device 1200. In other embodiments of the present application, the terminal device 1200 may include more or fewer components than shown, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiment of the present application further provides a terminal device, where the terminal device includes a processor, and the processor may execute the technical solutions shown in figs. 1 to 10, so that the terminal device achieves the technical effects of those solutions.
Also provided in an embodiment of the present application is a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed in a computer, causes the computer to execute any one of the methods described in figs. 1-10 above and the corresponding description.
Also provided in embodiments of the present application is a computer program product having instructions stored thereon, which when executed by a computer, cause the computer to implement any of the methods set forth above in fig. 1-10 and the corresponding description.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
Moreover, various aspects or features of embodiments of the application may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., Compact Disk (CD), Digital Versatile Disk (DVD), etc.), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), card, stick, or key drive, etc.). In addition, various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media capable of storing, containing, and/or carrying instruction(s) and/or data.
In the above embodiments, the image processing apparatus 1100 of FIG. 11 may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disk (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It should be understood that, in various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply an order of execution, and the order of execution of the processes should be determined by their functions and inherent logic, and should not limit the implementation processes of the embodiments of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, which essentially or partly contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or an access network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application.
Claims (10)
1. An image processing method, comprising:
acquiring an original image;
segmenting the original image into at least two sub-images;
extracting image information in each sub-image, wherein the image information comprises brightness information in the image and brightness information between the image and other images;
and adjusting the brightness of the original image according to the image information in each sub-image.
2. The method of claim 1, wherein prior to said segmenting said original image into at least two sub-images, said method further comprises:
and inputting the original image into a bilateral filtering algorithm to obtain the smooth original image with reduced noise points.
3. The method according to claim 1 or 2, wherein the segmenting the original image into at least two sub-images comprises:
inputting the original image into a brightness channel of an image brightness-chromaticity color space to obtain at least two brightness levels of the original image, wherein each brightness level is divided according to a brightness value;
and combining the areas with the same brightness level in the original image to obtain the at least two sub-images.
4. The method of any one of claims 1-3, further comprising:
determining a brightness difference value in each sub-image and a brightness difference value at an edge between two adjacent sub-images;
and when the brightness difference value at the edge between the two adjacent sub-images is not larger than the brightness difference value within each of the two adjacent sub-images, merging the two adjacent sub-images into one sub-image.
5. The method according to any one of claims 1 to 4, wherein the extracting image information in each sub-image comprises:
and inputting the at least two sub-images into a Canny edge operator algorithm to obtain the image edge brightness information in each sub-image.
6. The method according to claim 5, wherein when the original image is an underexposed image, before the inputting of the at least two sub-images into a Canny edge operator algorithm to obtain image edge brightness information in each sub-image, the method comprises:
inputting the at least two sub-images into a gamma transformation algorithm to increase the gray value of each of the at least two sub-images; and
when the original image is an overexposed image, before the inputting of the at least two sub-images into a Canny edge operator algorithm to obtain image edge brightness information in each sub-image, the method comprises:
inputting the at least two sub-images into a gamma transformation algorithm to reduce the gray value of each of the at least two sub-images.
7. The method according to any one of claims 1-6, wherein said extracting image information in each sub-image comprises:
inputting the original image into a Markov random field energy function, and calculating the number of sub-images;
inputting the at least two sub-images and the number of the sub-images into a particle swarm optimization algorithm, and calculating the average brightness value of each sub-image;
and calculating the brightness variation of each sub-image according to the brightness level interval difference corresponding to each sub-image and the average brightness value of each sub-image.
8. The method according to any one of claims 1 to 7, wherein the adjusting the brightness of the original image according to the image information in the sub-images comprises:
inputting the brightness variation of each sub-image into a least square method, and calculating a global mapping curve of the original image;
and adjusting the brightness of the original image according to the global mapping curve of the original image.
9. The method according to claim 8, wherein the adjusting the brightness of the original image comprises:
and adjusting the brightness L channel of the original image in the hue saturation brightness HSL channel of the image.
10. An image processing apparatus characterized by comprising:
at least one transceiver,
at least one memory, and
at least one processor configured to execute instructions stored in a memory to perform the method of any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111407845.9A CN113963007B (en) | 2021-11-24 | 2021-11-24 | Image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113963007A true CN113963007A (en) | 2022-01-21 |
CN113963007B CN113963007B (en) | 2025-02-18 |
Family
ID=79471955
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111407845.9A Active CN113963007B (en) | 2021-11-24 | 2021-11-24 | Image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113963007B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116012608A (en) * | 2022-12-21 | 2023-04-25 | 爱克斯维智能科技(苏州)有限公司 | Intelligent image brightness optimizing method and device for stone identification |
CN118397460A (en) * | 2024-06-04 | 2024-07-26 | 北京数易科技有限公司 | Complex battlefield situation analysis method, system and medium based on large model technology |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6049626A (en) * | 1996-10-09 | 2000-04-11 | Samsung Electronics Co., Ltd. | Image enhancing method and circuit using mean separate/quantized mean separate histogram equalization and color compensation |
US6822758B1 (en) * | 1998-07-01 | 2004-11-23 | Canon Kabushiki Kaisha | Image processing method, system and computer program to improve an image sensed by an image sensing apparatus and processed according to a conversion process |
CN102726036A (en) * | 2010-02-02 | 2012-10-10 | 微软公司 | Enhancement of images for display on liquid crystal displays |
CN107172364A (en) * | 2017-04-28 | 2017-09-15 | 努比亚技术有限公司 | A kind of image exposure compensation method, device and computer-readable recording medium |
CN110807750A (en) * | 2019-11-14 | 2020-02-18 | 青岛海信电器股份有限公司 | Image processing method and apparatus |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116012608A (en) * | 2022-12-21 | 2023-04-25 | 爱克斯维智能科技(苏州)有限公司 | Intelligent image brightness optimizing method and device for stone identification |
CN116012608B (en) * | 2022-12-21 | 2024-11-01 | 爱克斯维智能科技(苏州)有限公司 | Image brightness intelligent optimizing method and device for stone identification |
CN118397460A (en) * | 2024-06-04 | 2024-07-26 | 北京数易科技有限公司 | Complex battlefield situation analysis method, system and medium based on large model technology |
Also Published As
Publication number | Publication date |
---|---|
CN113963007B (en) | 2025-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101117882B1 (en) | Luminance correction | |
US9275445B2 (en) | High dynamic range and tone mapping imaging techniques | |
KR102045538B1 (en) | Method for multi exposure image fusion based on patch and apparatus for the same | |
US9311901B2 (en) | Variable blend width compositing | |
JP5251637B2 (en) | Noise reduction device, noise reduction method, noise reduction program, recording medium | |
US9626745B2 (en) | Temporal multi-band noise reduction | |
CN111563908B (en) | Image processing method and related device | |
US20220222792A1 (en) | Method and system for image enhancement | |
CN109214996B (en) | Image processing method and device | |
JP5703255B2 (en) | Image processing apparatus, image processing method, and program | |
KR20140045370A (en) | Automatic exposure correction of images | |
US8594446B2 (en) | Method for enhancing a digitized document | |
Ko et al. | Artifact-free low-light video enhancement using temporal similarity and guide map | |
CN113674193B (en) | Image fusion method, electronic device and storage medium | |
Pei et al. | Effective image haze removal using dark channel prior and post-processing | |
Celebi et al. | Fuzzy fusion based high dynamic range imaging using adaptive histogram separation | |
CN112822413B (en) | Shooting preview method, shooting preview device, terminal and computer readable storage medium | |
CN116485979B (en) | Mapping relation calculation method, color calibration method and electronic equipment | |
CN113963007A (en) | Image processing method and device | |
US12205249B2 (en) | Intelligent portrait photography enhancement system | |
CN111489322A (en) | Method and device for adding sky filter to static picture | |
CN108629738B (en) | Image processing method and device | |
CN110049242A (en) | A kind of image processing method and device | |
Mandal et al. | FuzzyCIE: fuzzy colour image enhancement for low-exposure images | |
WO2008102296A2 (en) | Method for enhancing the depth sensation of an image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||