
CN114723613B - Image processing method and device, electronic device, and storage medium - Google Patents


Info

Publication number: CN114723613B
Application number: CN202110007717.9A
Authority: CN (China)
Other versions: CN114723613A
Original language: Chinese (zh)
Inventor: 周群 (Zhou Qun)
Assignee (original and current): Beijing Xiaomi Mobile Software Co Ltd
Legal status: Active (granted)
Prior art keywords: pixel, image, chromaticity, pixels, processed


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection


Abstract

The disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring an image to be processed in a luminance-chrominance spatial domain and performing a downsampling operation on it to obtain a low-frequency image of the image to be processed; determining a chromaticity weight for each first pixel according to the degree of luminance difference between that first pixel and its first neighborhood pixels in the image to be processed, where the chromaticity weight of each first pixel is positively correlated with the corresponding degree of luminance difference; performing an upsampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel; and merging the image obtained through the upsampling operation with the image to be processed to obtain the processed image. In this way, the chromaticity values of the image can be adjusted during processing, avoiding color fringing and color overflow in the processed image.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Electronic devices are inevitably subject to interference from various factors while acquiring, transmitting, and processing images, so the resulting images carry noise signals. These noise signals mainly comprise two kinds: luminance noise and color noise (also referred to as chrominance noise).
In the related art, mature noise reduction techniques already exist for luminance noise. For color noise, however, effective noise reduction remains difficult; in particular, color overflow or blurring often occurs at color edges, where the color span is large.
Disclosure of Invention
In view of this, the disclosure provides an image processing method and apparatus, an electronic device, and a storage medium. During the upsampling of a low-frequency image, the chromaticity values of the low-frequency image can be adjusted according to the luminance distribution of the high-frequency image, so as to avoid color fringing and color overflow in the upsampled image.
In order to achieve the above object, the present disclosure provides the following technical solutions:
according to a first aspect of the present disclosure, there is provided an image processing method including:
acquiring an image to be processed in a luminance-chrominance spatial domain, and performing a downsampling operation on the image to be processed to obtain a low-frequency image of the image to be processed;
determining a chromaticity weight for each first pixel according to a degree of luminance difference between that first pixel and its first neighborhood pixels in the image to be processed, wherein the chromaticity weight of each first pixel is positively correlated with the corresponding degree of luminance difference; and
performing an upsampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and merging the image obtained through the upsampling operation with the image to be processed to obtain a processed image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus including:
an acquisition unit configured to acquire an image to be processed in a luminance-chrominance spatial domain, and to perform a downsampling operation on the image to be processed to obtain a low-frequency image of the image to be processed;
a determining unit configured to determine a chromaticity weight for each first pixel according to a degree of luminance difference between that first pixel and its first neighborhood pixels in the image to be processed, wherein the chromaticity weight of each first pixel is positively correlated with the corresponding degree of luminance difference; and
a synthesis unit configured to perform an upsampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and to merge the image obtained through the upsampling operation with the image to be processed to obtain the processed image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
A processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of the first aspect by executing the executable instructions.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
In the technical solution of the present disclosure, an image processing method is provided. The method performs a downsampling operation on an acquired image to be processed in the luminance-chrominance spatial domain to obtain a corresponding low-frequency image, and assigns a chromaticity weight to each first pixel according to the degree of luminance difference between that first pixel and its first neighborhood pixels, where the chromaticity weight of each first pixel is positively correlated with the corresponding degree of luminance difference. On this basis, the low-frequency image can be upsampled according to the chromaticity weight corresponding to each first pixel in the image to be processed, yielding an image of the same size as the image to be processed, which is then merged with the image to be processed to obtain the processed image.
It will be appreciated that, in the image field, color changes in an image are reflected in luminance changes: where the color changes more, the luminance typically changes more as well. In other words, luminance changes in an image can indicate color changes. For any pixel, a larger luminance difference from its neighborhood pixels generally implies a larger chromaticity difference from those pixels. Since the chromaticity weight of any pixel in the present disclosure is positively correlated with the corresponding degree of luminance difference, this is equivalent to assigning higher chromaticity weights to pixels in regions where the luminance changes greatly (i.e., at color edges). Performing the upsampling operation on the low-frequency image with the weights so determined is then equivalent to raising the chromaticity values of the pixels at color edges in the low-frequency image, making the color transition at the edges more pronounced and thereby avoiding the color overflow at color edges seen in the related art.
In short, the color edges in an image are identified by analyzing how the luminance varies across the image, and a higher chromaticity weight is assigned to the pixels at those edges. This increases the difference between the chromaticity values of pixels at color edges and those in color-flat regions, highlighting the edges and solving the problem of color overflow at color edges in the related art.
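The flow summarized above can be sketched in code. This is an illustrative reading, not the patent's exact algorithm: the degree of luminance difference is taken here as the mean absolute difference to the 3x3 neighborhood, the `1 + k * diff` weight mapping and the nearest-neighbor upsampling are hypothetical choices, and the function names are invented for the example.

```python
def luma_diff_degree(luma, y, x):
    """Mean absolute luminance difference between pixel (y, x) and its
    3x3 neighborhood (one simple realization of the patent's 'degree of
    luminance difference')."""
    h, w = len(luma), len(luma[0])
    diffs = [abs(luma[y][x] - luma[ny][nx])
             for ny in range(max(0, y - 1), min(h, y + 2))
             for nx in range(max(0, x - 1), min(w, x + 2))
             if (ny, nx) != (y, x)]
    return sum(diffs) / len(diffs)

def chroma_weights(luma, k=0.05):
    """Per-pixel chromaticity weight, positively correlated with the degree
    of luminance difference: larger difference -> larger weight."""
    h, w = len(luma), len(luma[0])
    return [[1.0 + k * luma_diff_degree(luma, y, x) for x in range(w)]
            for y in range(h)]

def weighted_upsample(low_chroma, weights):
    """2x nearest-neighbor upsampling of the low-frequency chroma plane,
    scaling each output pixel by its chromaticity weight."""
    h, w = len(weights), len(weights[0])
    return [[low_chroma[y // 2][x // 2] * weights[y][x] for x in range(w)]
            for y in range(h)]
```

With a flat luminance plane every weight is 1.0 and the upsampling degenerates to plain nearest-neighbor; near a luminance step the weights grow, boosting the upsampled chromaticity exactly where a color edge is expected.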
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart of an image processing method shown in an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic illustration of an image to be processed shown in accordance with an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic illustration of a low frequency image shown in an exemplary embodiment of the present disclosure;
FIG. 4A is a schematic diagram of an image obtained by an upsampling operation according to an exemplary embodiment of the present disclosure;
FIG. 4B is a schematic diagram illustrating a comparison of upsampling operations by bilinear interpolation with joint guided upsampling operations in accordance with an exemplary embodiment of the present disclosure;
FIG. 5 is a block diagram of an image processing apparatus shown in an exemplary embodiment of the present disclosure;
FIG. 6 is a block diagram of another image processing apparatus shown in an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The term "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
Electronic devices are inevitably subject to interference from various factors while acquiring, transmitting, and processing images, so the resulting images carry noise signals. These noise signals mainly comprise two kinds: luminance noise and color noise (also referred to as chrominance noise).
In the related art, mature noise reduction techniques already exist for luminance noise. For color noise, however, effective noise reduction remains difficult; in particular, color overflow or blurring often occurs at color edges, where the color span is large.
Specifically, since color noise is more noticeable in low-frequency images, the related art generally performs noise reduction through a multi-scale noise reduction framework. When processing through this framework, at least one downsampling operation is performed on the image to be processed to obtain at least one corresponding low-frequency image. The resulting low-frequency image(s) and the image to be processed are then denoised separately, and the denoised low-frequency images are upsampled to obtain the final processed image.
Taking two downsampling operations as an example, the image to be processed is referred to here as the high-frequency image, the image obtained by the first downsampling as the intermediate-frequency image, and the image obtained by the second downsampling as the low-frequency image. Specifically:
First, a downsampling operation may be performed on the high-frequency image to obtain a smaller intermediate-frequency image, and the intermediate-frequency image may in turn be downsampled to obtain a still smaller low-frequency image. The high-frequency, intermediate-frequency, and low-frequency images may then each undergo noise reduction, allowing relatively fine processing at every scale. After noise reduction, the low-frequency image is upsampled to the size of the intermediate-frequency image and merged with the denoised intermediate-frequency image; the merged image is then upsampled to the size of the high-frequency image and merged with the denoised high-frequency image to obtain the processed image.
However, when noise reduction is performed through this multi-scale framework, the upsampling of a low-frequency image (including the upsampling toward the intermediate-frequency image) is based only on the chromaticity information of the low-frequency image itself, so the problem of color overflow or blurring at color edges cannot be solved.
Worse, since the series of operations "downsampling → noise reduction → upsampling" involves multiple changes of image scale, the image resolution may be reduced, and the problem may even be introduced into images that did not originally exhibit it (for example, during upsampling the interpolated value is determined only by the chromaticity values themselves, which may shrink the chromaticity difference between adjacent pixels and thus cause color blurring at color edges).
Therefore, the present disclosure proposes a method that can identify the color edges of an image and adjust the chromaticity values of the pixels at those edges, removing color noise there and avoiding the color overflow or blurring at color edges seen in the related art.
Fig. 1 is a diagram illustrating an image processing method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the method may include the steps of:
Step 102, obtaining an image to be processed in a brightness-chromaticity space domain, and performing downsampling operation on the image to be processed to obtain a low-frequency image of the image to be processed.
The technical solution of the present disclosure can be applied to any type of electronic device; for example, the electronic device may be a mobile terminal such as a smartphone or a tablet computer, or a fixed terminal such as a smart TV or a PC (personal computer). It should be understood that any electronic device capable of image processing may serve as the electronic device in the present disclosure; the specific type of device to which the technical solution is applied may be determined by those skilled in the art according to actual needs, and is not limited by the present disclosure.
It will be appreciated that color overflow or blurring at color edges in an image is caused by an insufficient difference between the chromaticity values at the color edge and those in the color-flat regions. For example, suppose the junction between color-flat region A and color-flat region B is color edge C. If the chromaticity value of pixel X at the color edge is 5 while the chromaticity value of pixel Y, a pixel in region A close to edge C, is 4, the difference between the two is clearly small, so edge C is not distinct: visually, color blurring easily occurs, or the color of edge C bleeds into region A. In other words, color overflow at color edges arises from small differences in pixel chromaticity values between the color edges and the color-flat regions.
In view of this, the disclosure exploits the rule that regions of an image with large color changes generally also show large luminance changes: the color edges and color-flat regions of the image are located from the luminance values of its pixels, and chromaticity weights for adjusting chromaticity values are then assigned to the pixels of the different regions. Pixels at color edges receive higher chromaticity weights, while pixels in color-flat regions receive relatively lower ones, so that after adjustment the chromaticity values of edge pixels differ more from those of flat-region pixels, avoiding color overflow at the edges.
Since the present disclosure adjusts the chrominance values of pixels based on their luminance values, the image to be processed clearly needs to be an image in the luminance-chrominance spatial domain. Therefore, when the acquired initial image is in another spatial domain, it must first be converted into the luminance-chrominance spatial domain; for example, an initial image in the RGB spatial domain should first be converted into a luminance-chrominance image.
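As a concrete example of such a conversion, the sketch below uses the classic BT.601 analog-YUV relation; this is one common luminance-chrominance definition, not one mandated by the patent.

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB sample to YUV using the BT.601 analog relation:
    Y carries luminance, while U and V carry chrominance."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)  # chrominance: blue-difference
    v = 0.877 * (r - y)  # chrominance: red-difference
    return y, u, v
```

For a neutral gray pixel the chrominance channels come out near zero, which is precisely why luminance and chrominance can be processed separately in this domain.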
The image to be processed acquired in the present disclosure may be any type of image in the luminance-chrominance spatial domain; for example, it may be an image in the YUV spatial domain or in the HSI spatial domain. The specific type of luminance-chrominance image can be chosen by those skilled in the art according to the actual situation, and the present disclosure is not limited in this respect.
Since what the present disclosure removes is the color noise in the image to be processed, and color noise typically appears in low-frequency form, after the image to be processed in the luminance-chrominance spatial domain is obtained, a downsampling operation may first be performed on it to obtain its low-frequency image, and the chromaticity-adjustment operations may then be carried out on that low-frequency image.
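A minimal way to obtain such a low-frequency image is block averaging, sketched below. The patent does not fix a particular downsampling filter; 2x2 mean pooling is an assumption for illustration.

```python
def downsample_2x(plane):
    """Produce a half-resolution (low-frequency) plane by averaging each
    non-overlapping 2x2 block; assumes even dimensions for simplicity."""
    h, w = len(plane), len(plane[0])
    return [[(plane[y][x] + plane[y][x + 1]
              + plane[y + 1][x] + plane[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```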
Before describing the technical solution of the present disclosure in detail, it should be stated that, because this solution involves many similar concepts, the following naming is used for clarity: pixels in the image to be processed are called "first pixels", neighborhood pixels in the image to be processed are called "first neighborhood pixels", and window areas taken from the image to be processed are called "first window areas"; likewise, pixels in the low-frequency image are called "second pixels", neighborhood pixels in the low-frequency image are called "second neighborhood pixels", and window areas taken from the low-frequency image are called "second window areas".
In fact, in the related art, besides the color overflow or blurring at the color edges of an image, color noise is also present in the color-flat regions. Accordingly, the present disclosure may further include a step of performing noise reduction on the image to be processed and/or the low-frequency image. When denoising the image to be processed, the whole image may be traversed with a sliding window; for any first window area obtained by the sliding window, the chromaticity values of all first pixels in that area are combined in a weighted computation, and the resulting first target value is used as the chromaticity value of the center pixel of that first window area.
The same applies to noise reduction of the low-frequency image: the whole low-frequency image may be traversed with a sliding window, the chromaticity values of all second pixels in any second window area obtained by the sliding window are combined in a weighted computation, and the resulting second target value is used as the chromaticity value of the center pixel of that second window area.
In practice, noise reduction is usually applied to both the image to be processed and the low-frequency image, but it may also be applied to only one of them; whether to denoise the image to be processed and/or the low-frequency image can be decided by those skilled in the art according to actual requirements, and the present disclosure is not limited in this respect.
It should be noted that the sliding window is a common sampling technique in the image field: a window of fixed size is slid across the image with a specific stride, and the values of the pixels falling within the window (including per-channel values such as chromaticity and luminance) are used for computation. In the present disclosure, the whole image may be traversed with a stride of 1 pixel, so that every pixel in the image serves in turn as a center pixel whose chromaticity value is adjusted.
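The traversal just described can be sketched as a generator. The clamped-border policy below is a hypothetical choice so that border pixels can also serve as center pixels; the patent does not specify border handling.

```python
def sliding_windows(plane, size=3):
    """Yield (y, x, window) for every pixel of a 2D plane, where window is
    the size x size neighborhood centered on (y, x).  Coordinates outside
    the plane are clamped to the nearest valid pixel (one simple border
    policy), so the stride-1 traversal visits every pixel as a center."""
    h, w = len(plane), len(plane[0])
    r = size // 2
    for y in range(h):
        for x in range(w):
            window = [[plane[min(max(ny, 0), h - 1)][min(max(nx, 0), w - 1)]
                       for nx in range(x - r, x + r + 1)]
                      for ny in range(y - r, y + r + 1)]
            yield y, x, window
```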
In actual calculation, the chromaticity value of the center pixel of any one of the first window regions described above may be calculated in various ways.
In one embodiment, after any first window area is obtained from the image to be processed by the sliding window, the differences between all first pixels in that window area and its center pixel may be computed, and noise reduction weights assigned to those first pixels accordingly, where the noise reduction weight of any first pixel is inversely related to its corresponding difference. On this basis, the chromaticity values of all first pixels in the window area can be combined in a weighted average using these noise reduction weights, and the resulting value used as the chromaticity value of the center pixel of that window area.
After any second window area is obtained from the low-frequency image by the sliding window, the chromaticity value of its center pixel can be adjusted in the same way: the differences between all second pixels in the window area and its center pixel are computed, noise reduction weights inversely related to those differences are assigned, and the chromaticity values of all second pixels in the window area are combined in a weighted average to produce the new chromaticity value of the center pixel.
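A sketch of this first embodiment for a single window follows. The text only requires the weight to be inversely related to the absolute difference from the center pixel's chromaticity; the `1 / (1 + d)` mapping used here is a hypothetical choice satisfying that requirement.

```python
def denoise_window_center(window):
    """First-embodiment sketch: weight each pixel in a square window by how
    close its chromaticity value is to that of the window's CENTER pixel
    (weight inversely related to the absolute difference), then return the
    weighted average as the new chromaticity value of the center pixel."""
    n = len(window)
    center = window[n // 2][n // 2]
    num = den = 0.0
    for row in window:
        for v in row:
            w = 1.0 / (1.0 + abs(v - center))  # hypothetical inverse mapping
            num += w * v
            den += w
    return num / den
```

Note that under this scheme the center pixel always receives the maximum weight 1.0, which is the limitation discussed later for isolated noise in color-flat regions.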
For example, assuming that the chromaticity value of each first pixel in the image to be processed is as shown in fig. 2, the entire image may be traversed by sliding a window to obtain a plurality of first window areas, and the chromaticity value of the center pixel in any first window area is adjusted by the chromaticity value of each first pixel in the first window area.
Taking the first window area A shown in fig. 2 as an example, this embodiment adjusts the chromaticity value of the center pixel of a first window area as follows:
As can be seen from fig. 2, the first window area A contains 9 first pixels, and the chromaticity value of its center pixel X is 102. The noise reduction weights of the 9 first pixels in area A are therefore determined relative to the chromaticity value 102: the weight of any first pixel in area A is inversely related to the difference between its chromaticity value and 102. For example, the first pixel M in the upper-left corner of area A has a chromaticity value of 95, differing from 102 by 7, while the first pixel N in the lower-left corner has a chromaticity value of 85, differing from 102 by 17. Since 7 is smaller than 17, the noise reduction weight of pixel M is greater than that of pixel N; for example, the weight of M may be 1.10 and the weight of N 0.60. Suppose the noise reduction weights of all first pixels in area A, determined in this manner, are as shown in Table 1 below:
TABLE 1
Chromaticity value:      95    94    85    83    91    93    99    108   102
Noise reduction weight:  1.10  1.00  0.60  0.55  0.80  0.95  1.40  1.20  1.45
Then, on the basis of table 1, the chroma value of the center pixel after the noise reduction process can be calculated according to the noise reduction weights of all the first pixels in the first window area a. For example, the calculation may be performed by a weighted average algorithm, and the calculation process may be:
(95*1.1+94*1.0+85*0.60+83*0.55+91*0.80+93*0.95+99*1.4+108*1.2+102*1.45)/(1.1+1.0+0.60+0.55+0.80+0.95+1.4+1.2+1.45)=96.40.
At this time, 96.40 may be used as the chromaticity value of the center pixel of the first window area A; that is, the original chromaticity value 102 of the center pixel is adjusted to 96.40. It should be appreciated that, since the sliding window traverses the entire image, every first pixel in the image serves as a center pixel and undergoes the above chromaticity adjustment. Once the chromaticity values of all first pixels have been adjusted, the noise reduction of the image to be processed is considered complete.
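The weighted average from the example can be reproduced directly; the value-weight pairing below is taken from the computation above.

```python
# Chromaticity values and noise reduction weights of the 9 first pixels in
# window area A, paired as in the weighted-average computation above.
values = [95, 94, 85, 83, 91, 93, 99, 108, 102]
weights = [1.1, 1.0, 0.60, 0.55, 0.80, 0.95, 1.4, 1.2, 1.45]

new_center = sum(v * w for v, w in zip(values, weights)) / sum(weights)
print(round(new_center, 2))  # 96.4
```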
The chromaticity adjustment of any second pixel in the low-frequency image follows the example above, with the "second pixels" and "second window areas" of the "low-frequency image" treated in the same way as the "first pixels" and "first window areas" of the "image to be processed"; the example is not repeated here.
Under the above adjustment process, a pixel in the sliding window whose chromaticity value differs more from that of the center pixel receives a smaller noise reduction weight, so the difference between the adjusted chromaticity value of the center pixel and the chromaticity values of its neighboring pixels is reduced. Visually, color noise points in the image are smoothed away, achieving noise reduction for color noise.
However, as Table 1 shows, because the above embodiment determines the noise reduction weights of the first pixels in window area A relative to the chromaticity value of the center pixel, the center pixel itself always receives the largest weight, which makes it difficult to effectively remove isolated color noise in color-flat regions. Take the first window area B in fig. 2 as an example: the chromaticity value of its center pixel is 125, far greater than the chromaticity values of the other first pixels in area B, while the chromaticity values of the surrounding first neighborhood pixels are close to one another. Visually, the center pixel is clearly an isolated color noise point in a color-flat region. Since the center pixel always receives the largest weight in the above embodiment, even after noise reduction the chromaticity value of this pixel (originally 125) remains far greater than those of its neighbors; that is, isolated noise in color-flat regions cannot be effectively removed.
In view of this, the present disclosure proposes another noise reduction processing method.
In another embodiment, the noise reduction weights of the pixels in a window area are not determined based on the center pixel; instead, the noise reduction weights are allocated to the pixels in the window area based on the average of the chrominance values of all pixels in that window area.
In the process of performing noise reduction on the image to be processed, the following operations may be performed on any first window area. First, the chromaticity mean of all first pixels in the window area is calculated. Next, the first noise reduction weight of each first pixel in the window area is determined according to the first difference between that pixel's chromaticity value and the chromaticity mean, where the first noise reduction weight of any first pixel is negatively correlated with its corresponding first difference. Then, a weighted average of the chromaticity values of all first pixels in the window area is computed using these first noise reduction weights, and the resulting first target value is taken as the chromaticity value of the center pixel of the window area.
In the noise reduction process for the low-frequency image, the following operations are performed on any second window area. First, the chromaticity mean of all second pixels in the window area is calculated. Next, the second noise reduction weight of each second pixel in the window area is determined according to the second difference between that pixel's chromaticity value and the chromaticity mean, where the second noise reduction weight of any second pixel is negatively correlated with its corresponding second difference. Then, a weighted average of the chromaticity values of all second pixels in the window area is computed using these second noise reduction weights, and the resulting second target value is taken as the chromaticity value of the center pixel of the window area.
Taking the first window area B in the image to be processed shown in fig. 2 as an example, how the chromaticity value of the center pixel of a first window area is adjusted in this embodiment is described below:
The chromaticity mean of all the first pixels in the first window area B is first calculated: (98+86+99+89+85+91+66+84+125)/9 = 91.44 (91 is used below for ease of calculation). Then the first difference between the chromaticity value of each first pixel in the first window area B and the chromaticity mean is calculated, and a noise reduction weight is allocated to each first pixel according to its corresponding first difference. For example, the assigned noise reduction weights may be as shown in Table 2 below:
TABLE 2

Chromaticity value             98     86     99     89     85     91     66     84     125
First difference (vs. 91)       7      5      8      2      6      0     25      7      34
Noise reduction weight        1.1    1.3    1.0    1.35   1.20   1.45   0.40   1.10   0.30
Then, on the basis of table 2, the chroma value of the center pixel after the noise reduction process can be calculated according to the noise reduction weights of all the first pixels in the first window area B. For example, the calculation may be performed by a weighted average algorithm, and the calculation process may be:
(98*1.1+86*1.3+99*1.0+89*1.35+85*1.20+91*1.45+66*0.40+84*1.10+125*0.30)/(1.1+1.3+1.0+1.35+1.20+1.45+0.40+1.10+0.30)=90.11.
At this time, 90.11 is taken as the chroma value of the center pixel; that is, the original chroma value 125 of the center pixel in the first window area B is adjusted to 90.11. As in the previous embodiment, since the sliding window traverses the entire image to be processed, every first pixel in the image serves in turn as the center pixel and undergoes the above chrominance adjustment. When the chromaticity values of all the first pixels have been adjusted, the noise reduction processing of the image to be processed is considered complete.
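The mean-based adjustment of a single window position can be sketched as follows. The reciprocal weight function is a hypothetical choice (the disclosure only fixes the negative correlation between difference and weight), and the 3x3 layout of window area B is assumed; only the set of chromaticity values and the center value 125 follow from the text.

```python
import numpy as np

def denoise_window(window):
    """Adjust the center pixel of one sliding-window position using
    mean-based noise reduction weights: each pixel's weight shrinks as
    its chromaticity value moves away from the window mean. The
    reciprocal weight function is a hypothetical choice."""
    chroma = window.astype(float).ravel()
    mean = chroma.mean()
    weights = 1.0 / (1.0 + np.abs(chroma - mean))
    # weighted average of all chromaticity values in the window
    return float(np.dot(weights, chroma) / weights.sum())

# First window area B (values from fig. 2; 3x3 layout assumed,
# center pixel = 125, i.e. isolated color noise in a flat area)
window_b = np.array([[98, 86, 99],
                     [89, 85, 91],
                     [66, 84, 125]])
adjusted = denoise_window(window_b)
# The result lands near the window mean (~91), suppressing the noise.
```

With a different weight function the exact value differs from the 90.11 of the worked example, but the behavior is the same: the isolated value 125 is pulled toward the mean of the flat area.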
The process of chromaticity adjustment for any second pixel in the low-frequency image according to this embodiment may also refer to the above example: the "second pixels" and "second window areas" of the "low-frequency image" are processed in the same manner as the "first pixels" and "first window areas" of the "image to be processed", and the example is not repeated here.
As can be seen from Table 2, in this embodiment the weights are not allocated to the first pixels in the first window area B based on the chromaticity value 125 of the center pixel, so the first pixel with the largest weight is usually not the center pixel. In Table 2, for example, since the chromaticity mean is 91, the first pixel with chromaticity value 91 has the largest noise reduction weight in window area B. This avoids the technical problem of the previous embodiment, in which isolated color noise in a color-flat area could not be removed because the weight of the center pixel was always the largest.
It will be appreciated that in the above embodiment, since the noise reduction weights are assigned on a center-pixel basis, the noise reduction weight of the center pixel is always the largest. Obviously, if the center pixel is isolated color noise in a color-flat area (i.e., its chromaticity value differs greatly from those of the surrounding pixels), then after its chromaticity value is adjusted on this basis it will still differ greatly from the chromaticity values of the surrounding pixels, so the color noise cannot be removed.
In this embodiment, because the weights are allocated based on the chromaticity mean of all pixels in the window area, the noise reduction weight of the center pixel is usually not the largest. On the contrary, if the center pixel is isolated color noise in a color-flat area, the chrominance values of its neighboring pixels differ little from one another, so the obtained chromaticity mean differs greatly from the center pixel but little from the neighboring pixels, and the noise reduction weight of the center pixel is therefore small; in the example first window area B, the noise reduction weight of the center pixel is the smallest. Consequently, after the image is denoised by this embodiment, isolated color noise in color-flat areas can be effectively eliminated. After the chromaticity value of the center pixel of window area B is adjusted, the adjusted value 90.11 is clearly closer to the chromaticity values of most of the first pixels in the area, and in terms of visual effect no isolated color noise remains in the color-flat area.
It should be noted that the noise reduction weights of the first pixels in Tables 1 and 2 above are only illustrative; they merely indicate that the noise reduction weight of any first pixel is negatively correlated with the difference between that pixel and the center pixel (or the chromaticity mean). How the noise reduction weight of each first pixel is determined can be decided by those skilled in the art according to the actual situation, and the present disclosure does not limit this. For example, a correspondence between the difference and the noise reduction weight may be defined and used as the basis for determining the weight of any first pixel; alternatively, the first pixels may be sorted by their corresponding differences and the weights assigned according to that order. The noise reduction processing of the low-frequency image is similar and is not described in detail here.
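Both options for deriving weights from the differences can be sketched as follows. The Gaussian mapping, the sigma value, and the 1.45 to 0.30 rank range are hypothetical choices (the range mirrors Table 2 but is otherwise arbitrary); the disclosure only fixes the negative correlation.

```python
import numpy as np

# First differences of the window-B pixels versus the chromaticity mean 91
diffs = np.array([7, 5, 8, 2, 6, 0, 25, 7, 34])

# Option 1: a fixed correspondence from difference to weight
# (a Gaussian mapping is one hypothetical correspondence)
sigma = 10.0
w_map = np.exp(-(diffs / sigma) ** 2 / 2)

# Option 2: sort the pixels by difference and assign weights by rank,
# smallest difference -> largest weight
ranks = diffs.argsort().argsort()
w_rank = np.linspace(1.45, 0.30, len(diffs))[ranks]
```

Either way, the pixel whose value equals the mean (difference 0) receives the largest weight and the isolated outlier (difference 34) the smallest.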
Step 104, determining the chromaticity weight of each first pixel according to the degree of luminance difference between each first pixel in the image to be processed and its first neighborhood pixels, wherein the chromaticity weight of each first pixel is positively correlated with the corresponding degree of luminance difference.
As is clear from the above, color overflow or blurring occurs at color edges in an image because the difference between the chromaticity values of pixels at the color edge and those of pixels in the flat region is too small. Therefore, in the present disclosure, the color edges and color-flat areas in the image are determined based on the luminance value of each pixel, and a higher chromaticity weight is allocated to pixels at color edges and a relatively lower chromaticity weight to pixels in color-flat areas, thereby increasing the difference between the chromaticity values of edge pixels and flat-area pixels and solving the problem of color overflow or blurring at color edges.
It should be stated that, after the downsampling operation is performed on the image to be processed to obtain the low-frequency image, the present disclosure assigns chromaticity weights to the first pixels in the image to be processed according to the luminance values of those first pixels, and then adjusts the chromaticity value of each second pixel in the low-frequency image during the upsampling of the low-frequency image by means of the correspondence between the first pixels of the image to be processed and the second pixels of the low-frequency image. In other words, the chromaticity value of each second pixel in the low-frequency image is adjusted based on the brightness distribution in the image to be processed.
Since the low-frequency image is obtained from the image to be processed, the brightness distributions of the two are almost identical, and the image to be processed can be regarded, relative to the low-frequency image, as a high-frequency image of larger size and richer brightness detail; determining the chromaticity weight of each first pixel from the image to be processed therefore makes the chromaticity weights more accurate.
Step 106, performing an up-sampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and combining the image obtained through the up-sampling operation with the image to be processed to obtain a processed image.
In actual operation, the first neighborhood pixels of each first pixel may be determined in a sliding-window manner, so that the chromaticity weight of each first pixel is determined according to the degree of luminance difference between that pixel and its first neighborhood pixels. Taking the first window area A shown in fig. 2 as an example, all the first pixels except the center pixel can be regarded as the first neighboring pixels of the center pixel (and the center pixel can also be regarded as a first neighboring pixel of itself in the calculation). Of course, the first window area A merely exemplifies a sliding window containing 9 first pixels; if the sliding window contained 25 first pixels, the number of first neighboring pixels would increase correspondingly. The specific manner of determining the first neighborhood pixels may be decided by those skilled in the art based on actual needs, and this disclosure is not limited thereto.
In one embodiment, when a chromaticity weight is assigned to any first pixel, all of its first neighboring pixels may be taken into account. For example, the luminance difference between the first pixel and any one of its first neighboring pixels may first be calculated, and the weight parameter corresponding to that first neighboring pixel calculated from this luminance difference. After the weight parameters corresponding to all first neighborhood pixels of the first pixel are calculated in this way, the chromaticity weight can be allocated to the first pixel based on the calculated weight parameters.
Taking the first window area A shown in fig. 2 as an example, assume the value of each first pixel shown in fig. 2 is its luminance value. For any first pixel in the first window area A, take the first pixel L with luminance value 99 (a first neighboring pixel of the center pixel X in the first window area A, which can stand for any first neighboring pixel): the luminance difference between the first pixel L and the center pixel X is calculated to be 3, and a weight parameter is allocated to the first pixel L based on this luminance difference of 3. After the above operation is completed for all first neighborhood pixels of the center pixel X in the first window area A, that is, after the weight parameter of each first neighborhood pixel of X is obtained, the chromaticity weight of the center pixel X can be determined based on all the obtained weight parameters.
It should be noted that, in this embodiment, although the weight parameter is assigned to each first neighboring pixel based on the luminance difference between a first pixel and that neighboring pixel, in practice any pixel may also be regarded as a neighboring pixel of itself; that is, a weight parameter is also assigned to the first pixel itself. In other words, when weight parameters are assigned to the first neighborhood pixels of the center pixel X, 9 weight parameters are finally assigned.
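The weight-parameter assignment for one window can be sketched as follows. The Gaussian range kernel and sigma are assumptions (the disclosure only ties the parameter to the luminance difference), and the 3x3 layout of window area A is assumed; only the set of luminance values and the center value 102 follow from the text.

```python
import numpy as np

def weight_params(window, sigma_r=10.0):
    """Weight parameter of every first neighborhood pixel of the window's
    center pixel, computed from the luminance difference. The Gaussian
    range kernel is a hypothetical choice. The center pixel is treated
    as its own neighbor, so a 3x3 window yields 9 parameters."""
    lum = window.astype(float)
    center = lum[lum.shape[0] // 2, lum.shape[1] // 2]
    return np.exp(-((lum - center) / sigma_r) ** 2 / 2)

# Luminance values of window area A (3x3 layout assumed, center X = 102;
# the pixel with luminance 99 is the first neighborhood pixel L)
window_a = np.array([[95, 94, 85],
                     [83, 102, 93],
                     [99, 108, 91]])
params = weight_params(window_a)
```

The center pixel (difference 0) receives the maximum parameter, and pixel L (difference 3) a larger parameter than pixels with larger luminance differences.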
After the chromaticity weights of all the first pixels in the image to be processed are obtained in the above manner, the chromaticity values of the second pixels in the low-frequency image can be adjusted in the process of up-sampling the low-frequency image.
For example, in calculating the chromaticity value of any pixel in the image obtained by the upsampling operation, the luminance value of the first pixel corresponding to that pixel in the image to be processed and the chromaticity value of the second pixel corresponding to that pixel in the low-frequency image may serve as the basis. Since the image obtained by the upsampling operation is identical in size to the image to be processed, this can also be expressed as follows: the chromaticity value of the target pixel corresponding to any first pixel in the upsampled image is calculated according to the luminance value of that first pixel in the image to be processed and the chromaticity value of the corresponding second pixel in the low-frequency image.
In actual operation, after the weight parameter of any first neighborhood pixel of any first pixel in the image to be processed is obtained in the above manner, the second pixel corresponding to that first pixel may be determined in the low-frequency image, and the second neighborhood pixel corresponding to the first neighborhood pixel determined among the second neighborhood pixels of that second pixel. On this basis, the distance between the second neighborhood pixel and the second pixel can be calculated, and the chromaticity reference value corresponding to the first neighborhood pixel determined based on this distance and the weight parameter of the first neighborhood pixel. After the chromaticity reference values of all first neighborhood pixels of the first pixel are determined, a weighted average of these chromaticity reference values can be computed to obtain the chromaticity value of the target pixel corresponding to that first pixel in the image obtained by the upsampling operation.
For example, calculating the chromaticity value of any pixel in the image obtained by the upsampling operation may refer to the following formula:
S'_p = (1/k_p) · Σ_{q'∈Ω} S_{q'} · f(||p'-q'||) · g(|I_p-I_q|)

wherein S'_p is the chrominance value of any pixel in the image obtained by the up-sampling operation; p is the first pixel corresponding to that pixel in the image to be processed; q is any first neighborhood pixel of the first pixel p; I_p is the luminance value of the first pixel p; I_q is the luminance value of the first neighborhood pixel q; p' is the second pixel (in practice, the coordinates of that second pixel) corresponding to the first pixel p in the low-frequency image; q' is, for convenience of subsequent description, the second neighborhood pixel (in practice, the coordinates of that second pixel) corresponding to the first neighborhood pixel q in the low-frequency image; ||p'-q'|| is the distance between the second neighborhood pixel q' and the second pixel p'; S_{q'} is the chrominance value of the second neighborhood pixel q' in the low-frequency image; Ω is the second window area with the second pixel p' as the center pixel; k_p is a normalization constant; f() is a coefficient function for calculating a coefficient from the distance; g(|I_p-I_q|) is a weight function for calculating the weight parameter from the luminance difference value; and S_{q'}·f(||p'-q'||)·g(|I_p-I_q|) is the above-mentioned chrominance reference value.
For ease of understanding, take the image to be processed shown in fig. 2 as an example and assume that each numerical value in fig. 2 is the luminance value of the corresponding first pixel. Assume further that the low-frequency image obtained by downsampling the chromaticity map corresponding to fig. 2 is shown in fig. 3. There is evidently a correspondence between the first pixels in fig. 2 and the second pixels in fig. 3: the image to be processed shown in fig. 2 has a size of 8×8 and the low-frequency image shown in fig. 3 a size of 4×4, which means that each second pixel in the low-frequency image corresponds to 4 first pixels in the image to be processed, the correspondence depending on the position of the pixels in the image. For example, the second pixel with top-left chrominance value 56 in the low-frequency image corresponds to the 4 first pixels with top-left luminance values of 65, 56, 77 and 58 in the image to be processed.
It is currently known that the image obtained by upsampling the low-frequency image shown in fig. 3 must have the size shown in fig. 4A; all that is required during the upsampling process is to determine the chrominance values of the individual pixels in the image shown in fig. 4A. Taking the chromaticity value of the pixel X″ in fig. 4A as an example: the pixel X″ corresponds to the center pixel of the first window area A in fig. 2 (i.e., the first pixel X in fig. 2), and the first pixel X belongs to the 4 pixels in the top-right corner of fig. 2, so the second pixel corresponding to the first pixel X in fig. 3 is the second pixel X′ with the top-right chromaticity value of 91 in fig. 3. Similar to the first window area A, a second window area with the second pixel X′ as the center pixel may be determined by sliding a window over the low-frequency image. Assuming a weight parameter currently needs to be assigned to the first neighboring pixel Y of the first pixel X, that weight parameter may be determined based on the luminance difference between the first pixel X and the first neighboring pixel Y. After the weight parameter of the first neighborhood pixel Y is determined, the second neighborhood pixel Y′ corresponding to Y can be determined from the second neighborhood pixels of the second pixel X′, and the chromaticity reference value of the first neighborhood pixel Y (equally, of the second neighborhood pixel Y′) is then determined based on the distance between Y′ and X′ and the weight parameter of Y. The particular formula for calculating the chromaticity reference values may be determined by those skilled in the art based on actual needs, and this disclosure is not limited thereto.
It is to be noted that the second window area corresponding to the second pixel X′ shown in fig. 3 includes a region beyond the low-frequency image. In this partial region, the chromaticity values of second pixels that do not exist in the original image (which may be referred to as blank pixels) may be filled in by a padding technique. Padding is well established in the art, and the blank pixels in the second window area may be filled in any manner, which the present disclosure does not limit. For example, for any blank pixel, the chromaticity value of the existing second pixel closest to it may be taken as its chromaticity value.
After obtaining the chrominance reference values of all the first neighborhood pixels of the first pixel X in this way, a weighted average calculation can be performed on the plurality of chrominance reference values to obtain a chrominance value of the pixel X″ corresponding to the first pixel X.
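The whole upsampling procedure can be sketched as follows. Gaussian forms for the coefficient function f and the weight function g are assumptions (the disclosure leaves both open), the nearest correspondence between first and second pixels is a simplification, and window clipping at the image border stands in for the padding described above.

```python
import numpy as np

def joint_bilateral_upsample(lum_hi, chroma_lo, radius=1,
                             sigma_s=1.0, sigma_r=10.0):
    """Joint guided upsampling: for each pixel p of the full-resolution
    image, average the chroma of the second window area around p' in the
    low-frequency image, weighted by f (distance in the low-frequency
    image) and g (luminance difference in the image to be processed)."""
    H, W = lum_hi.shape
    h, w = chroma_lo.shape
    sy, sx = H / h, W / w
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            py, px = y / sy, x / sx          # p': p mapped into the low-frequency image
            cy, cx = int(py), int(px)
            num = den = 0.0
            for qy in range(max(0, cy - radius), min(h, cy + radius + 1)):
                for qx in range(max(0, cx - radius), min(w, cx + radius + 1)):
                    gy = min(H - 1, int(qy * sy))   # q: the first pixel
                    gx = min(W - 1, int(qx * sx))   # corresponding to q'
                    f = np.exp(-((py - qy) ** 2 + (px - qx) ** 2) / (2 * sigma_s ** 2))
                    g = np.exp(-((lum_hi[y, x] - lum_hi[gy, gx]) / sigma_r) ** 2 / 2)
                    # chromaticity reference value of this neighborhood pixel
                    num += chroma_lo[qy, qx] * f * g
                    den += f * g
            out[y, x] = num / den            # den plays the role of k_p
    return out

# Example: an 8x8 luminance map guiding a 4x4 chroma image up to 8x8
lum_hi = np.arange(64, dtype=float).reshape(8, 8)
chroma_up = joint_bilateral_upsample(lum_hi, np.full((4, 4), 50.0))
```

Because the weights are positive and normalized, each output pixel is a convex combination of the chroma values in its second window area.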
To illustrate the difference between the present disclosure and the related art, reference may be made to fig. 4B, in which image (a) is the image to be processed, image (b) is the luminance map of the image to be processed, image (c) is the low-frequency image obtained by downsampling image (a), image (d) is the processed image obtained by bilinear-interpolation upsampling in the related art, and image (e) is the processed image obtained by the upsampling method of the present disclosure. Since the present disclosure uses the luminance map of the image to be processed to upsample the low-frequency image, the upsampling operation may also be referred to as joint guided upsampling.
Comparing image (d) with image (e): in image (e), obtained by joint guided upsampling, a pixel has chromaticity value 136 (the darker 136 in image (e)) while its left neighboring pixel has chromaticity value 132, a chromaticity difference of 4; in image (d), obtained by bilinear interpolation, the pixel corresponding to that "pixel with chromaticity value 136" has chromaticity value 135 while the pixel region to its left has chromaticity value 133, a difference of only 2. Clearly, the chromaticity difference between pixels at the color edge of the processed image obtained by joint guided upsampling is larger; compared with the related art, the chromaticity difference between pixels at color edges is increased, thereby avoiding the problems of color overflow and blurring at color edges.
As can be seen from the above description, in this embodiment the determination of the chromaticity weights and the adjustment of the chromaticity values by those weights are performed during the upsampling operation; from the computing point of view of the electronic device, the above steps are accomplished by a single formula. Of course, besides being completed during the upsampling operation on the low-frequency image, the above steps may also be completed stepwise.
In another embodiment, the luminance mean of any first pixel and all of its first neighborhood pixels in the image to be processed may first be calculated, along with the second difference between the luminance value of that first pixel and the luminance mean; the chromaticity weight of the first pixel is then determined based on the second difference, for example by taking the product of the second difference and a predefined coefficient as the chromaticity weight. The chromaticity weights of all first pixels in the image to be processed can be obtained in the same manner.
Accordingly, a conventional upsampling operation may also be performed on the low-frequency image. For example, the low-frequency image may be processed by an upsampling interpolation algorithm to obtain a to-be-weighted image consistent in size with the image to be processed. The upsampling interpolation algorithm may be any type of interpolation algorithm; for example, a two-dimensional interpolation algorithm or a linear interpolation algorithm may be used in this embodiment. After the to-be-weighted image is obtained, its pixels can be weighted based on the chromaticity weight of the corresponding first pixel in the image to be processed, and the image obtained by this weighted calculation is used as the image obtained by the upsampling operation.
Taking the image to be processed shown in fig. 2 as an example, assume the chromaticity weight of the first pixel X currently needs to be determined. The luminance mean of all the first pixels in the first window area A may be calculated as (95+94+85+83+91+93+99+108+102)/9 = 94.4, so the second difference corresponding to the first pixel X is 7.6, and the chromaticity weight of the first pixel X may be determined based on this second difference of 7.6. The chromaticity weights of the other first pixels in the image to be processed can be obtained in the same way.
On the other hand, still assume that the image shown in fig. 3 is the low-frequency image obtained by downsampling the chromaticity map of the image to be processed shown in fig. 2, where the values in fig. 3 represent the chromaticity values of the second pixels in the low-frequency image. In this embodiment, the low-frequency image may be subjected to a conventional upsampling operation to obtain a to-be-weighted image consistent in size with the image to be processed, for example as shown in fig. 4A. It should be stated that although fig. 4A does not show the chromaticity values of the individual pixels, the chromaticity values of the pixels of the to-be-weighted image obtained in this embodiment are determined. On this basis, the chromaticity value of each pixel in the to-be-weighted image can be adjusted based on the chromaticity weight of the corresponding first pixel in the image to be processed; for example, the chromaticity value of the pixel X″ can be adjusted based on the chromaticity weight of the first pixel X.
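This stepwise variant can be sketched as follows. The coefficient 0.05, the additive weight form, and nearest-neighbour interpolation as the "conventional" upsampling step are all assumptions; the 3x3 layout of window area A is likewise assumed, with only the luminance values and the center value 102 taken from the text.

```python
import numpy as np

def chroma_weights(lum, radius=1, coeff=0.05):
    """Chromaticity weight per first pixel: 1 plus the second difference
    (|pixel luminance - neighborhood luminance mean|) scaled by a
    predefined coefficient. The additive form and coeff are hypothetical;
    the disclosure only fixes the positive correlation."""
    H, W = lum.shape
    weights = np.ones((H, W))
    for y in range(H):
        for x in range(W):
            patch = lum[max(0, y - radius):y + radius + 1,
                        max(0, x - radius):x + radius + 1]
            weights[y, x] = 1.0 + coeff * abs(lum[y, x] - patch.mean())
    return weights

def upsample_and_weight(chroma_lo, lum_hi):
    """Conventional upsampling of the low-frequency chroma image to the
    size of the image to be processed (nearest-neighbour interpolation
    stands in for the interpolation algorithm), followed by per-pixel
    weighting with the chromaticity weights from the luminance map."""
    H, W = lum_hi.shape
    h, w = chroma_lo.shape
    ys = np.arange(H) * h // H      # map each row to a low-res row
    xs = np.arange(W) * w // W      # map each column to a low-res column
    to_weight = chroma_lo[np.ix_(ys, xs)]
    return to_weight * chroma_weights(lum_hi)

# Luminance patch of window area A (values from fig. 2, 3x3 layout
# assumed, center pixel X = 102); its mean is 94.4, so X's weight is
# about 1 + 0.05 * 7.6
lum_a = np.array([[95., 94., 85.],
                  [83., 102., 93.],
                  [99., 108., 91.]])
w_a = chroma_weights(lum_a)
```

Pixels whose luminance stands out from their neighborhood (i.e., pixels at color edges) receive weights above 1, so their chroma is boosted relative to flat areas.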
According to the above technical solution, on the one hand a downsampling operation is performed in the luminance-chrominance spatial domain on the acquired image to be processed to obtain a corresponding low-frequency image; on the other hand, chromaticity weights are allocated to the first pixels according to the degree of luminance difference between each first pixel in the image to be processed and its first neighborhood pixels, the chromaticity weight of each first pixel being positively correlated with the corresponding degree of luminance difference. On this basis, the low-frequency image can be upsampled based on the chromaticity weights corresponding to the first pixels in the image to be processed to obtain an image of the same size as the image to be processed, which is then combined with the image to be processed to obtain the processed image.
It will be appreciated that in the image field, changes in color in an image affect changes in brightness: in areas where the color changes more, the brightness typically also changes more. Therefore, for any pixel, a larger degree of luminance difference from its neighboring pixels means that the pixel is in a region of larger color change, i.e., at a color edge. Since the chromaticity weight of any pixel in the present disclosure is positively correlated with the corresponding degree of luminance difference, this is equivalent to assigning a higher chromaticity weight to pixels in regions of large luminance variation (at color edges). On this basis, performing the upsampling operation on the low-frequency image with the determined chromaticity weights is equivalent to raising the chromaticity values of the pixels at color edges in the low-frequency image, making the color change at color edges more prominent and thereby avoiding the color overflow problem at color edges in the related art.
In short, the color edges in the image are identified by analyzing the brightness variation in the image, and by allocating higher chromaticity weights to the pixels at the color edges, the difference between the chromaticity values of pixels at the color edges and those of pixels in flat color regions is increased, highlighting the distinction between color edges and flat color regions and solving the color overflow problem at color edges in the related art.
Optionally, the present disclosure may also perform noise reduction processing on the image to be processed and/or the low-frequency image. For example, the image to be processed and/or the low-frequency image may be traversed by a sliding window, and a weighted calculation performed on the chromaticity values of all pixels in each window area acquired by the sliding window, the obtained value being taken as the chromaticity value of the center pixel of that window area. In the weighted calculation, a noise reduction weight can be allocated to each pixel according to the mean of all pixels in the window area, the noise reduction weight of any pixel being negatively correlated with the difference between that pixel and the mean.
Since noise reduction weights are allocated to the pixels using the mean of all pixels in the window area as the standard, the noise reduction weight of the center pixel is not necessarily the largest. When removing isolated color noise in a color-flat area, the chromaticity difference between the center pixel corresponding to the color noise and its neighboring pixels is large, so the obtained chromaticity mean is close to the chromaticity values of the neighboring pixels but differs greatly from that of the center pixel, and the noise reduction weight of the center pixel is accordingly the smallest. Therefore, by this method, the difference between the chromaticity value of the pixel corresponding to the color noise and the chromaticity values of its neighboring pixels can be effectively reduced, and isolated color noise in color-flat areas can be effectively removed.
The present disclosure also provides an embodiment of an image processing apparatus corresponding to the foregoing embodiment of the image processing method.
Fig. 5 is a block diagram of an image processing apparatus shown in an exemplary embodiment of the present disclosure. Referring to fig. 5, the apparatus includes an acquisition unit 501, a determination unit 502, and a synthesis unit 503.
An obtaining unit 501, configured to obtain an image to be processed in a luminance-chrominance spatial domain, and perform a downsampling operation on the image to be processed to obtain a low-frequency image of the image to be processed;
a determining unit 502, configured to determine, according to a luminance difference degree between each first pixel in the image to be processed and each first neighboring pixel, a chromaticity weight of each first pixel, where the chromaticity weight of each first pixel in each first pixel is positively correlated with the corresponding luminance difference degree;
And a synthesizing unit 503, configured to perform an up-sampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and synthesize the image obtained through the up-sampling operation with the image to be processed, so as to obtain a processed image.
Optionally, the determining unit 502 is further configured to:
calculating the brightness difference value of any first pixel and any first neighborhood pixel thereof, and calculating a weight parameter corresponding to any first neighborhood pixel based on the brightness difference value;
and determining the chromaticity weight of any first pixel based on the calculated weight parameters of all first neighborhood pixels of the any first pixel.
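A minimal sketch of these two steps for one first pixel, assuming a 3×3 neighborhood, a Gaussian fall-off for the per-neighbor weight parameter, and `1 − mean(parameters)` as the combination — chosen only so the resulting chromaticity weight grows with the luminance difference degree, as the disclosure requires; none of these function choices is fixed by the source.

```python
import numpy as np

def neighbor_weight_params(luma, y, x, sigma=10.0):
    """One weight parameter per 3x3 neighbor of the first pixel at
    (y, x), computed from the luminance difference value (the Gaussian
    fall-off is an assumed choice)."""
    h, w = luma.shape
    params = {}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the pixel itself
            ny = min(max(y + dy, 0), h - 1)   # clamp at the image border
            nx = min(max(x + dx, 0), w - 1)
            diff = float(luma[y, x]) - float(luma[ny, nx])
            params[(dy, dx)] = np.exp(-(diff * diff) / (2.0 * sigma * sigma))
    return params

def chroma_weight(params):
    """Combine the per-neighbor weight parameters into the pixel's
    chromaticity weight; 1 - mean(...) makes the weight rise with the
    luminance difference degree (an assumed combination)."""
    return 1.0 - sum(params.values()) / len(params)
```

On a flat luminance patch the weight is near zero; for a pixel next to a strong luminance edge it is clearly positive, matching the stated positive correlation.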
Optionally, the synthesizing unit 503 is further configured to:
Acquiring second pixels corresponding to any first pixel in the low-frequency image, and determining second neighborhood pixels corresponding to any first neighborhood pixel in all second neighborhood pixels of the second pixels;
Calculating the distance between the second neighborhood pixels and the second pixels, and determining a chromaticity reference value corresponding to any one of the first neighborhood pixels based on the distance and the weight parameter of the first neighborhood pixels;
Performing weighted average calculation on the determined multiple chromaticity reference values of all the first neighborhood pixels corresponding to any one first pixel to obtain a chromaticity value of a target pixel corresponding to the any one first pixel;
And after obtaining the chromaticity values of all target pixels corresponding to all first pixels in the image to be processed, generating the image obtained through the up-sampling operation based on the chromaticity values of all target pixels.
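Taken together, these steps resemble joint-bilateral-style upsampling. The sketch below assumes a downsampling factor of 2, a 3×3 low-resolution neighborhood, and Gaussian spatial/range kernels (`sigma_s`, `sigma_r`); the function name and kernel choices are illustrative, since the disclosure leaves the exact distance and weight functions open.

```python
import numpy as np

def upsample_chroma(low_chroma, luma, factor=2, sigma_s=1.0, sigma_r=10.0):
    """Joint-bilateral-style sketch of the claimed up-sampling: each
    first pixel pulls chromaticity reference values from the 3x3
    low-resolution neighborhood of its corresponding second pixel,
    weighting each reference by spatial distance and by the
    luminance-difference weight of the matching first neighborhood pixel."""
    H, W = luma.shape
    h, w = low_chroma.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y // factor, x // factor   # corresponding second pixel
            num = den = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny = min(max(cy + dy, 0), h - 1)  # second neighborhood pixel
                    nx = min(max(cx + dx, 0), w - 1)
                    hy = min(max(y + dy * factor, 0), H - 1)  # matching first neighbor
                    hx = min(max(x + dx * factor, 0), W - 1)
                    ld = float(luma[y, x]) - float(luma[hy, hx])
                    range_w = np.exp(-(ld * ld) / (2.0 * sigma_r ** 2))
                    spatial_w = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
                    num += range_w * spatial_w * low_chroma[ny, nx]
                    den += range_w * spatial_w
            out[y, x] = num / den   # weighted average of the reference values
    return out
```

Because luminance edges suppress the range weight, chroma is not averaged across object boundaries, which is what lets the upsampled chroma stay sharp.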
Optionally, the determining unit 502 is further configured to:
Calculating the brightness average value of any first pixel and all first neighborhood pixels thereof, and a second difference value between the brightness value of any first pixel and the brightness average value;
Taking the product of the second difference value and a predefined coefficient as the chromaticity weight of any one first pixel.
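A sketch of this alternative weight: the difference between a pixel's luminance and the mean of the pixel plus its 3×3 neighborhood, scaled by a predefined coefficient. Taking the absolute difference and the example value of `k` are assumptions, made here so the weight stays non-negative and grows with the difference degree; the disclosure says only "product of the second difference value and a predefined coefficient".

```python
import numpy as np

def mean_diff_chroma_weight(luma, y, x, k=0.5):
    """Chromaticity weight of the first pixel at (y, x): predefined
    coefficient k times the (absolute, by assumption) difference between
    the pixel's luminance and the mean of itself and its 3x3 neighbors."""
    h, w = luma.shape
    y0, y1 = max(y - 1, 0), min(y + 2, h)   # 3x3 window, clipped at borders
    x0, x1 = max(x - 1, 0), min(x + 2, w)
    mean = luma[y0:y1, x0:x1].astype(float).mean()
    return k * abs(float(luma[y, x]) - mean)
```

A pixel in a flat region gets weight zero; a pixel that stands out from its neighborhood gets a proportionally larger weight.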
Optionally, the synthesizing unit 503 is further configured to:
calculating the low-frequency image through an up-sampling interpolation algorithm to obtain an image to be weighted, wherein the size of the image to be weighted is consistent with that of the image to be processed;
Weighting calculation is carried out on the chromaticity value of the corresponding pixel in the image to be weighted based on the chromaticity weight of each first pixel in the image to be processed;
and taking the image obtained by the weighting calculation as the image obtained by the up-sampling operation.
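This second up-sampling path can be sketched as follows; nearest-neighbor enlargement stands in for "an up-sampling interpolation algorithm" (the disclosure does not fix which one), and the element-wise product is one assumed reading of "weighting calculation".

```python
import numpy as np

def upsample_then_weight(low_chroma, weights, factor=2):
    """First enlarge the low-frequency chroma plane to the size of the
    image to be processed (nearest-neighbor interpolation here, as an
    assumed stand-in), then scale each interpolated chroma value by the
    chromaticity weight of the co-located first pixel."""
    # nearest-neighbor interpolation to the full-resolution size
    to_weight = np.repeat(np.repeat(low_chroma, factor, axis=0), factor, axis=1)
    assert to_weight.shape == weights.shape  # sizes must match per the claim
    return to_weight * weights               # per-pixel weighting calculation
```

The intermediate "image to be weighted" has exactly the size of the image to be processed, so the per-pixel chromaticity weights apply directly.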
Fig. 6 is a block diagram of another image processing apparatus according to an exemplary embodiment of the present disclosure, which further includes a noise reduction unit 504 on the basis of the embodiment shown in fig. 5 described above.
Optionally, the apparatus further comprises:
The noise reduction unit 504 is configured to traverse the image to be processed in a sliding window manner, perform weighted calculation on the chromaticity values of all the first pixels in any one of the first window regions acquired in the sliding window manner to obtain a first target value corresponding to the any one of the first window regions, take the first target value as the chromaticity value of the central pixel in any one of the first window regions, and/or traverse the low-frequency image in a sliding window manner, perform weighted calculation on the chromaticity values of all the second pixels in any one of the second window regions acquired in the sliding window manner to obtain a second target value corresponding to any one of the second window regions, and take the second target value as the chromaticity value of the central pixel in any one of the second window regions.
Optionally, the noise reduction unit 504 is further configured to:
Calculating the average value of the chromaticity of all the first pixels in any first window area, determining the first noise reduction weight of each first pixel according to the first difference value between the chromaticity value of each first pixel in any first window area and the average value of the chromaticity, wherein the noise reduction weight of any first pixel in any first window area is in negative correlation with the first difference value corresponding to any first pixel, carrying out weighted average calculation on the chromaticity value of all the first pixels in any first window area based on the noise reduction weight of each first pixel in any first window area to obtain a first target value corresponding to any first window area, and/or,
Calculating the chromaticity average value of all second pixels in any second window area, determining the second noise reduction weight of each second pixel according to a second difference value between the chromaticity value of each second pixel in any second window area and the chromaticity average value, wherein the noise reduction weight of any second pixel in any second window area is in negative correlation with the second difference value corresponding to any second pixel, and carrying out weighted average calculation on the chromaticity value of all second pixels in any second window area based on the noise reduction weight of each second pixel in any second window area to obtain a second target value corresponding to any second window area.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be repeated here.
For the apparatus embodiments, since they essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the disclosed solution. Those of ordinary skill in the art can understand and implement the disclosure without undue burden.
Correspondingly, the disclosure further provides an image processing device, which comprises a processor and a memory for storing executable instructions of the processor, wherein the processor is configured to implement the image processing method according to any one of the above embodiments. For example, the method may include: obtaining an image to be processed in a brightness-chromaticity space domain, and performing a downsampling operation on the image to be processed to obtain a low-frequency image of the image to be processed; respectively determining the chromaticity weight of each first pixel according to the brightness difference degree between each first pixel and each first neighborhood pixel in the image to be processed, wherein the chromaticity weight of each first pixel is positively correlated with the corresponding brightness difference degree; and performing an upsampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and combining the image obtained through the upsampling operation with the image to be processed to obtain a processed image.
Correspondingly, the disclosure further provides an electronic device, which comprises a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs including instructions for implementing the image processing method according to any one of the embodiments. For example, the method may include: obtaining an image to be processed in a luminance-chrominance spatial domain, and performing a downsampling operation on the image to be processed to obtain a low-frequency image of the image to be processed; determining a chrominance weight of each first pixel according to the luminance difference degree between each first pixel and each first neighborhood pixel in the image to be processed, wherein the chrominance weight of each first pixel is positively correlated with the corresponding luminance difference degree; and performing an upsampling operation on the low-frequency image based on the chrominance weight corresponding to each first pixel, and synthesizing the image obtained through the upsampling operation with the image to be processed to obtain a processed image. FIG. 7 is a block diagram illustrating an apparatus 700 for implementing an image processing method, according to an example embodiment. For example, apparatus 700 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to FIG. 7, an apparatus 700 may include one or more of a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the apparatus 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 702 can include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on the apparatus 700, contact data, phonebook data, messages, pictures, videos, and the like. The memory 704 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 700.
The multimedia component 708 includes a screen providing an output interface between the apparatus 700 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 708 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 700 is in an operational mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a Microphone (MIC) configured to receive external audio signals when the device 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 704 or transmitted via the communication component 716. In some embodiments, the audio component 710 further includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to, a home button, a volume button, an activate button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the apparatus 700. For example, the sensor assembly 714 may detect an on/off state of the device 700, a relative positioning of the components, such as a display and keypad of the device 700, a change in position of the device 700 or a component of the device 700, the presence or absence of user contact with the device 700, an orientation or acceleration/deceleration of the device 700, and a change in temperature of the device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate communication between the apparatus 700 and other devices in a wired or wireless manner. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G LTE, 5G NR (New Radio), or a combination thereof. In one exemplary embodiment, the communication component 716 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 704, including instructions executable by processor 720 of apparatus 700 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The foregoing description of the preferred embodiments of the present disclosure is not intended to limit the disclosure, but rather to cover all modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present disclosure.

Claims (10)

1. An image processing method, comprising:
Acquiring an image to be processed in a brightness-chromaticity space domain, and performing downsampling operation on the image to be processed to obtain a low-frequency image of the image to be processed;
Determining the chromaticity weight of each first pixel according to the brightness difference degree of each first pixel and each first neighborhood pixel in the image to be processed, wherein the chromaticity weight of each first pixel in each first pixel is positively correlated with the corresponding brightness difference degree;
And carrying out up-sampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and combining the image obtained through the up-sampling operation with the image to be processed to obtain a processed image.
2. The method of claim 1, further comprising, prior to upsampling the low frequency image:
Traversing the image to be processed in a sliding window mode, carrying out weighted calculation on the chromaticity values of all the first pixels in any first window area acquired through the sliding window to obtain a first target value corresponding to any first window area, taking the first target value as the chromaticity value of the central pixel of any first window area, and/or,
Traversing the low-frequency image in a sliding window mode, carrying out weighted calculation on the chromaticity values of all the second pixels in any second window area obtained through the sliding window to obtain a second target value corresponding to the any second window area, and taking the second target value as the chromaticity value of the central pixel of the any second window area.
3. The method of claim 2, wherein:
The step of performing weighted calculation on the chromaticity values of all the first pixels in any first window area obtained through the sliding window to obtain a first target value corresponding to the any first window area, includes:
Calculating the chromaticity average value of all the first pixels in any first window area, and determining the first noise reduction weight of each first pixel according to a first difference value between the chromaticity value of each first pixel in any first window area and the chromaticity average value; the noise reduction weight of any first pixel in any first window area is inversely related to the first difference value corresponding to any first pixel; based on the noise reduction weight of each first pixel in any first window area, carrying out weighted average calculation on the chromaticity values of all the first pixels in any first window area to obtain a first target value corresponding to any first window area;
the step of performing weighted calculation on the chromaticity values of all the second pixels in any second window area obtained through the sliding window to obtain a second target value corresponding to any second window area, includes:
Calculating the chromaticity average value of all second pixels in any second window area, determining the second noise reduction weight of each second pixel according to a second difference value between the chromaticity value of each second pixel in any second window area and the chromaticity average value, wherein the noise reduction weight of any second pixel in any second window area is in negative correlation with the second difference value corresponding to any second pixel, and carrying out weighted average calculation on the chromaticity value of all second pixels in any second window area based on the noise reduction weight of each second pixel in any second window area to obtain a second target value corresponding to any second window area.
4. The method of claim 1, wherein determining the chromaticity weight of each first pixel in the image to be processed based on the degree of difference in luminance between the first pixel and the first neighboring pixel, respectively, comprises:
calculating the brightness difference value of any first pixel and any first neighborhood pixel thereof, and calculating a weight parameter corresponding to any first neighborhood pixel based on the brightness difference value;
and determining the chromaticity weight of any first pixel based on the calculated weight parameters of all first neighborhood pixels of the any first pixel.
5. The method of claim 4, wherein upsampling the low frequency image based on the chroma weights corresponding to the respective first pixels comprises:
Acquiring second pixels corresponding to any first pixel in the low-frequency image, and determining second neighborhood pixels corresponding to any first neighborhood pixel in all second neighborhood pixels of the second pixels;
Calculating the distance between the second neighborhood pixels and the second pixels, and determining a chromaticity reference value corresponding to any one of the first neighborhood pixels based on the distance and the weight parameter of the first neighborhood pixels;
Performing weighted average calculation on the determined multiple chromaticity reference values of all the first neighborhood pixels corresponding to any one first pixel to obtain a chromaticity value of a target pixel corresponding to the any one first pixel;
And after obtaining the chromaticity values of all target pixels corresponding to all first pixels in the image to be processed, generating the image obtained through the up-sampling operation based on the chromaticity values of all target pixels.
6. The method of claim 1, wherein determining the chromaticity weight of each first pixel in the image to be processed based on the degree of difference in luminance between the first pixel and the first neighboring pixel, respectively, comprises:
Calculating the brightness average value of any first pixel and all first neighborhood pixels thereof, and a second difference value between the brightness value of any first pixel and the brightness average value;
Taking the product of the second difference value and a predefined coefficient as the chromaticity weight of any one first pixel.
7. The method of claim 1, wherein upsampling the low frequency image based on the chroma weights corresponding to the respective first pixels comprises:
calculating the low-frequency image through an up-sampling interpolation algorithm to obtain an image to be weighted, wherein the size of the image to be weighted is consistent with that of the image to be processed;
Weighting calculation is carried out on the chromaticity value of the corresponding pixel in the image to be weighted based on the chromaticity weight of each first pixel in the image to be processed;
and taking the image obtained by the weighting calculation as the image obtained by the up-sampling operation.
8. An image processing apparatus, comprising:
An acquisition unit, configured to acquire an image to be processed in a brightness-chromaticity space domain, and perform a downsampling operation on the image to be processed to obtain a low-frequency image of the image to be processed;
The determining unit is used for respectively determining the chromaticity weight of each first pixel according to the brightness difference degree of each first pixel and each first neighborhood pixel in the image to be processed, wherein the chromaticity weight of each first pixel in each first pixel is positively correlated with the corresponding brightness difference degree;
And the synthesis unit is used for carrying out up-sampling operation on the low-frequency image based on the chromaticity weight corresponding to each first pixel, and synthesizing the image obtained through the up-sampling operation with the image to be processed to obtain the processed image.
9. An electronic device, comprising:
A processor;
a memory for storing processor-executable instructions;
Wherein the processor is configured to implement the method of any of claims 1-7 by executing the executable instructions.
10. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any of claims 1-7.
CN202110007717.9A 2021-01-05 2021-01-05 Image processing method and device, electronic device, and storage medium Active CN114723613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110007717.9A CN114723613B (en) 2021-01-05 2021-01-05 Image processing method and device, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110007717.9A CN114723613B (en) 2021-01-05 2021-01-05 Image processing method and device, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN114723613A CN114723613A (en) 2022-07-08
CN114723613B true CN114723613B (en) 2025-03-18

Family

ID=82234628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110007717.9A Active CN114723613B (en) 2021-01-05 2021-01-05 Image processing method and device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN114723613B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218047A (en) * 2023-09-18 2023-12-12 杭州微影软件有限公司 Image fusion method and device, electronic equipment and storage medium
CN119863726B (en) * 2025-03-21 2025-06-10 浙江水利水电学院 A method and system for automatically identifying bare soil based on real-time images from unmanned aerial vehicles

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100809346B1 (en) * 2006-07-03 2008-03-05 삼성전자주식회사 Edge Compensation Device and Method
ITVI20110243A1 (en) * 2011-09-09 2013-03-10 Stmicroelectronics Grenoble 2 REDUCTION OF CHROMA NOISE OF AN IMAGE
CN104123699A (en) * 2013-04-26 2014-10-29 富士通株式会社 Method of reducing image noise and device
GB2542858A (en) * 2015-10-02 2017-04-05 Canon Kk Encoder optimizations for palette encoding of content with subsampled colour component
CN109191406B (en) * 2018-09-19 2021-03-09 浙江宇视科技有限公司 Image processing method, device and equipment
CN111784603B (en) * 2020-06-29 2024-01-26 珠海全志科技股份有限公司 RAW domain image denoising method, computer device and computer readable storage medium

Also Published As

Publication number Publication date
CN114723613A (en) 2022-07-08

Similar Documents

Publication Publication Date Title
JP7136956B2 (en) Image processing method and device, terminal and storage medium
CN107798654B (en) Image buffing method and device and storage medium
CN104380727B (en) Image processing apparatus and image processing method
CN110958401A (en) A method, device and electronic device for color correction of super night scene images
CN105574834B (en) Image processing method and device
KR102082365B1 (en) Method for image processing and an electronic device thereof
CN114723613B (en) Image processing method and device, electronic device, and storage medium
US11756167B2 (en) Method for processing image, electronic device and storage medium
CN110728180B (en) Image processing method, device and storage medium
CN113160038A (en) Image style migration method and device, electronic equipment and storage medium
JP2024037722A (en) Content-based image processing
CN114511450B (en) Image denoising method, image denoising device, terminal and storage medium
CN107613210B (en) Image display method and device, terminal and storage medium
CN107507128B (en) Image processing method and apparatus
CN111625213A (en) Picture display method, device and storage medium
US10951816B2 (en) Method and apparatus for processing image, electronic device and storage medium
CN105654470B (en) Image choosing method, apparatus and system
CN107730443B (en) Image processing method and device and user equipment
CN105472228B (en) Image processing method and device and terminal
CN107563957B (en) Eye image processing method and device
CN110807745B (en) Image processing method and device and electronic equipment
CN111275641A (en) Image processing method and device, electronic equipment and storage medium
CN118674646A (en) Image smoothing processing method, device, electronic device, chip and medium
CN112465721B (en) Image correction method and device, mobile terminal and storage medium
CN118214950A (en) Image stitching method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant