
CN113643198A - Image processing method, device, electronic device and storage medium - Google Patents


Info

Publication number
CN113643198A
CN113643198A
Authority
CN
China
Prior art keywords
image block
neighborhood
pixel
distance
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110828277.3A
Other languages
Chinese (zh)
Other versions
CN113643198B (en)
Inventor
胥立丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eswin Computing Technology Co Ltd
Haining Eswin IC Design Co Ltd
Original Assignee
Beijing Eswin Computing Technology Co Ltd
Haining Eswin IC Design Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Eswin Computing Technology Co Ltd, Haining Eswin IC Design Co Ltd filed Critical Beijing Eswin Computing Technology Co Ltd
Priority to CN202110828277.3A
Publication of CN113643198A
Application granted
Publication of CN113643198B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The present application provides an image processing method, an image processing apparatus, an electronic device, and a storage medium. The image processing method includes: acquiring a central image block and a neighborhood image block in at least one region of an image, the neighborhood image block being adjacent to the central image block; calculating the distance between the neighborhood image block and the central image block; looking up the weight corresponding to that distance in a preset table, which represents the correspondence between different distances and weights, and using it as the weight of the neighborhood image block; and filtering the pixels in the central image block based on the weight of the neighborhood image block, the pixel values of the neighborhood image block, the weight of the central image block, and the pixel values of the central image block. This simplifies the computational complexity of image noise reduction and thus reduces the cost of the hardware circuit.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of semiconductor chips and digital image processing technology, people can take pictures with various shooting devices (such as digital cameras and mobile phones) to obtain high-resolution pictures or videos. Among these devices, a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor is mainly used to acquire high-resolution pictures or video.
Due to inherent hardware limitations of the CMOS image sensor, pictures taken by such devices in many situations suffer from severe luminance and chrominance noise. To improve picture quality, noise in the original picture (i.e., the Bayer image) is generally suppressed directly inside the photographing apparatus, so that a higher-quality image can be obtained. For example, the Bayer image may be denoised with the Non-Local Means (NLM) filtering method.
However, conventional image noise reduction algorithms have high computational complexity. Accordingly, when such an algorithm is implemented in an Application-Specific Integrated Circuit (ASIC), the hardware requirements for the ASIC are high, which inevitably increases its cost.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device and a storage medium, so as to simplify the operation complexity of image noise reduction and thereby reduce the cost of an ASIC.
In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:
a first aspect of the present application provides an image processing method, including: acquiring a central image block and a neighborhood image block in at least one region of an image, wherein the neighborhood image block is adjacent to the central image block; calculating the distance between the neighborhood image block and the center image block; finding out the weight corresponding to the distance from a preset table, wherein the preset table is used for representing the corresponding relation between different distances and weights, and is used as the weight of the neighborhood image block; and filtering the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weights of the central image block and the pixel values of the central image block.
A second aspect of the present application provides an image processing apparatus, the apparatus comprising: the image processing device comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for acquiring a central image block and a neighborhood image block in at least one region of an image, and the neighborhood image block is adjacent to the central image block; the calculation module is used for calculating the distance between the neighborhood image block and the center image block; the searching module is used for searching out the weight corresponding to the distance from a preset table, and the preset table is used for representing the corresponding relation between different distances and weights and is used as the weight of the neighborhood image block; and the filtering module is used for filtering the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weights of the central image block and the pixel values of the central image block.
A third aspect of the present application provides an electronic device comprising: a processor, a memory, a bus; the processor and the memory complete mutual communication through the bus; the processor is for invoking program instructions in the memory for performing the method of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium comprising: a stored program; wherein the program, when executed, controls an apparatus in which the storage medium is located to perform the method of the first aspect.
Compared with the prior art, according to the image processing method provided by the first aspect of the present application, after the central image block and the neighborhood image blocks in at least one region of the image are obtained, the distances between the neighborhood image blocks and the central image block are calculated, the weights of the neighborhood image blocks are then found from the preset table based on those distances, and finally the pixels in the central image block are filtered based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weight of the central image block and the pixel values of the central image block. When determining the weight corresponding to each image block based on the distance between image blocks, instead of calculating the weight of each image block one by one with a Gaussian function, the weight corresponding to each distance is found directly from a preset table by table lookup. Because a table lookup is simpler than evaluating the function, the complexity of the image noise reduction calculation is reduced, and the cost of the hardware circuit is reduced accordingly.
The image processing apparatus provided by the second aspect, the electronic device provided by the third aspect, and the computer-readable storage medium provided by the fourth aspect of the present application have the same or similar advantageous effects as the image processing method provided by the first aspect.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is a first flowchart illustrating an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a first schematic diagram of each image block in the embodiment of the present application;
FIG. 3 is a second schematic diagram of each image block in the embodiment of the present application;
FIG. 4 is a first diagram of a default table in an embodiment of the present application;
FIG. 5 is a second flowchart illustrating an image processing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a sliding window not completely located within an image according to an embodiment of the present application;
fig. 7 is a schematic diagram of a certain neighborhood image block and a center image block in the embodiment of the present application;
FIG. 8 is a first diagram illustrating several pixel location templates in an embodiment of the present application;
FIG. 9 is a second diagram of a default table in the embodiment of the present application;
FIG. 10 is a third schematic diagram of each image block in the embodiment of the present application;
FIG. 11 is a fourth schematic diagram of each image block in the embodiment of the present application;
FIG. 12 is a second drawing illustrating several pixel location templates in an embodiment of the present application;
FIG. 13 is a first schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 14 is a second schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which this application belongs.
In the prior art, when an image needs noise reduction, an image noise reduction algorithm is generally adopted to process the image so as to obtain a high-quality result. However, the operation of such algorithms is complicated, and the algorithm is ultimately implemented in a hardware circuit. This means that higher-performance hardware circuitry is needed to support the noise reduction processing, which increases the cost of the hardware circuit.
The applicant has found, through extensive research, that the operation of existing image noise reduction algorithms, and the NLM algorithm in particular, is complex for two reasons. First, the Euclidean distance is used when calculating the distance between image blocks in an image; its calculation is cumbersome and requires multiple multiplications, which demands higher hardware performance. Second, a Gaussian function is used when calculating the weight of each image block from the inter-block distances; the Gaussian function involves an exponential calculation, which likewise demands higher hardware performance and thus increases the cost of the hardware circuit.
In view of this, embodiments of the present application provide an image processing method, when determining weights corresponding to image blocks based on distances between the image blocks, the weights corresponding to the image blocks are obtained by looking up a table instead of calculating the weights of the image blocks one by using a gaussian function. In this table, the corresponding weights are calculated in advance based on the respective distances. In the actual process of denoising an image, when the weight corresponding to a certain image block needs to be obtained, the weight corresponding to the distance of the image block can be searched from the table. Because the table lookup is simpler than the function calculation, the image processing method provided by the embodiment of the application can simplify the complexity of the image noise reduction calculation, and further reduce the cost of a hardware circuit.
It should be noted here that, in practical applications, the image processing method provided by the embodiment of the present application is mainly applied to processing of a Bayer image. Of course, the image processing method provided by the embodiment of the application can also be applied to processing other images. The type of other images is not limited herein.
Next, the image processing method provided in the embodiments of the present application will be described in detail.
Fig. 1 is a first schematic flowchart of an image processing method in an embodiment of the present application, and referring to fig. 1, the method may include:
s101: a central image block and a neighborhood image block in at least one region of an image are acquired.
Wherein the neighborhood image block is adjacent to the central image block.
When the image needs to be subjected to noise reduction processing, first, an image to be processed is acquired. Then, at least one region in the image is determined. When performing noise reduction processing on an image, processing is not performed on all regions of the image together, but processing is performed on each of a plurality of regions, and therefore, at least one region in the image needs to be determined so as to perform processing on each region. And finally, acquiring a central image block and a neighborhood image block in at least one region. Since each area is processed subsequently, each area needs to be divided, and each divided area includes a plurality of image blocks, i.e., a central image block and a neighborhood image block.
FIG. 2 is a first schematic diagram of each image block in the embodiment of the present application. Referring to FIG. 2, in an image X, a region Y1 is determined using a sliding window. The size of the sliding window is 7 × 7, so the size of the region Y1 is also 7 × 7. Within the region Y1, with a block size of 3 × 3, the region Y1 is divided into 9 image blocks, namely image blocks A0, A1, A2, A3, A4, A5, A6, A7 and A8, where image blocks that are adjacent in the horizontal and vertical directions overlap by 1 pixel. Here, image block A4 is the central image block, and image blocks A0, A1, A2, A3, A5, A6, A7 and A8 are the neighborhood image blocks. Obtaining each image block is equivalent to obtaining the pixel value of each pixel in the block; the pixel value may be a gray value or, of course, the value corresponding to the Red (R), Green (G) or Blue (B) channel. The specific type of pixel value is not limited here.
In practical applications, in one region of an image, the number of central image blocks is 1, and the number of neighborhood image blocks may be 1 or more. The number of neighborhood image blocks and their positions relative to the central image block are not specifically limited and may be set according to actual requirements. In FIG. 2, there are 8 neighborhood image blocks A0, A1, A2, A3, A5, A6, A7, A8, uniformly distributed around the central image block A4.
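The block division described above can be sketched as follows, as a minimal illustration in Python/NumPy assuming the 7 × 7 window of FIG. 2 and 3 × 3 blocks with a 1-pixel overlap (the function name and return convention are ours, not the patent's):

```python
import numpy as np

def split_window_into_blocks(window):
    # Split a 7x7 region into nine 3x3 blocks A0..A8. Block top-left
    # corners step by 2 pixels, so horizontally and vertically adjacent
    # blocks overlap by exactly 1 pixel row/column, as in FIG. 2.
    assert window.shape == (7, 7)
    blocks = [window[r:r + 3, c:c + 3] for r in (0, 2, 4) for c in (0, 2, 4)]
    center = blocks[4]                    # A4, the central image block
    neighbors = blocks[:4] + blocks[5:]   # A0..A3 and A5..A8
    return center, neighbors
```

Note that the central block A4 covers rows and columns 2 to 4 of the window, so its middle pixel coincides with the window center.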
S102: and calculating the distance between the neighborhood image block and the central image block.
That is, it is equivalent to calculating the similarity between the pixel values in the neighborhood image block and the pixel values in the center image block.
Taking the calculation of the similarity between a certain neighborhood image block and the central image block as an example, since there is not only one pixel but a plurality of pixels in the neighborhood image block and the central image block, the similarity between the neighborhood image block and the central image block needs to be determined based on the similarity between each pixel in the neighborhood image block and the pixel in the corresponding position in the central image block.
FIG. 3 is a second schematic diagram of each image block in the embodiment of the present application. Referring to FIG. 3, when calculating the similarity between the neighborhood image block A0 and the central image block A4, the neighborhood image block A0 contains 9 pixels, namely pixels q0, q1, q2, q3, q4, q5, q6, q7 and q8, and the central image block A4 also contains 9 pixels, namely pixels p0, p1, p2, p3, p4, p5, p6, p7 and p8. Thus, the difference between the pixel values of pixel q0 and pixel p0 is calculated, then the difference between pixel q1 and pixel p1, and so on. The squares of these differences are summed and averaged to obtain the similarity between the neighborhood image block A0 and the central image block A4, i.e., the distance between them. The distances between the neighborhood image blocks A1, A2, A3, A5, A6, A7, A8 and the central image block A4 are calculated in the same way as for A0 and are not repeated here.
The above calculation of the distance between the neighborhood image block and the central image block uses the square of the Euclidean distance; the specific calculation formula is shown in the following formula (1):
\[ d(p,q) = \frac{1}{K} \sum_{k=1}^{K} \left( u(p_k) - u(q_k) \right)^2 \tag{1} \]
wherein d represents the (averaged squared) Euclidean distance, p represents the central image block, q represents the neighborhood image block, k indexes the pixels in the image blocks, u(·) denotes a pixel value, and K is the number of pixels in a block.
Of course, other distance calculation methods may also be adopted to calculate the distance between the neighborhood image block and the central image block; the specific distance calculation method is not limited here.
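As a sketch, the mean-of-squared-differences distance described above can be computed as follows (a hedged illustration; the function name is ours):

```python
import numpy as np

def block_distance(center, neighbor):
    # Average of the squared pixel-value differences between
    # corresponding positions of the two same-shaped blocks.
    diff = center.astype(np.float64) - neighbor.astype(np.float64)
    return float(np.mean(diff ** 2))
```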
S103: and finding out the weight corresponding to the distance from the preset table, and using the weight as the weight of the neighborhood image block.
The preset table is used for representing the corresponding relation between different distances and weights.
The primary purpose of calculating the distance between the neighborhood image block and the central image block is to determine the weight of the neighborhood image block relative to the central image block, and therefore, after the distance between the neighborhood image block and the central image block is calculated, the weight of the neighborhood image block needs to be determined according to the distance between the neighborhood image block and the central image block.
In the prior art, based on the distance between a neighborhood image block and a central image block, a gaussian function is adopted to calculate the weight of the neighborhood image block. The specific calculation formula is shown in the following formula (2):
\[ w_i = \exp\!\left( -\frac{\max\left( d_i^2 - 2\sigma^2,\; 0 \right)}{h^2} \right) \tag{2} \]
wherein w_i represents the weight, i indexes the neighborhood image blocks, and d_i represents the distance between neighborhood image block i and the central image block. σ represents the standard deviation of the noise; for different image sensors, σ needs to be calibrated with a standard image under different illumination. That is, whichever image sensor produced the image to be processed determines the σ used in the above formula. h denotes the filter coefficient and is positively correlated with σ, i.e., h = kσ, where k is generally a coefficient between 0.3 and 1.
According to the formula, the smaller the distance between the neighborhood image block and the central image block, the larger the weight of the neighborhood image block relative to the central image block. When the square of the distance between the neighborhood image block and the central image block is less than or equal to 2σ², the weight of the neighborhood image block relative to the central image block is 1. The weight corresponding to the central image block itself is also 1.
In the prior art, the weights of the neighborhood image blocks relative to the central image block are calculated one by one from their distances to the central image block, and the calculation process is complex because it involves an exponential. Therefore, in the embodiment of the present application, the weights are not calculated one by one from the distances; instead, the weight corresponding to the distance between each neighborhood image block and the central image block is looked up directly in a preset table, in which the weights corresponding to various distances are stored in advance, and is used as the weight of that neighborhood image block. A table lookup is simpler than an exponential calculation.
Fig. 4 is a first schematic diagram of the preset table in the embodiment of the present application. Referring to fig. 4, the preset table stores the correspondence between a number of distances d1, d2, ..., dn and the corresponding weights w1, w2, ..., wn. After the distance d2 between a neighborhood image block and the central image block is obtained, the weight w2 of the neighborhood image block can be obtained by looking up the preset table. One exponential calculation is thereby avoided, and the weight calculation process is simplified.
It should be noted that the weights corresponding to the various distances in the preset table are calculated in advance and stored. The calculation of the weight by the distance may adopt various existing manners of calculating the weight of the image block, such as: gaussian function, etc. The specific calculation method of the weight is not limited here.
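A minimal sketch of building and querying such a preset table, assuming the weights are precomputed with the Gaussian formula of equation (2) and the squared-distance axis is quantized with a fixed step. The parameter values (sigma, the step, the table size) are illustrative assumptions, not values from the patent:

```python
import numpy as np

SIGMA = 10.0        # assumed noise standard deviation (sensor-calibrated)
H = 0.6 * SIGMA     # filter coefficient h = k * sigma, with k in (0.3, 1)

def build_weight_table(n_entries, d2_step):
    # Precompute w = exp(-max(d^2 - 2*sigma^2, 0) / h^2) once for
    # quantized squared distances, so that runtime filtering needs
    # only a table lookup instead of an exponential.
    d2 = np.arange(n_entries) * d2_step
    return np.exp(-np.maximum(d2 - 2.0 * SIGMA**2, 0.0) / H**2)

def lookup_weight(table, d2, d2_step):
    # Map a squared distance to the nearest (clamped) table entry.
    idx = min(int(d2 / d2_step), len(table) - 1)
    return float(table[idx])
```

Squared distances at or below 2σ² map to weight 1, matching the behavior noted above, and the table entries are non-increasing as the distance grows.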
S104: and filtering the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weights of the central image block and the pixel values of the central image block.
After the weight of each neighborhood image block relative to the central image block is obtained, it is combined with the pixel values of the neighborhood image blocks, the pixel values of the central image block and the weight of the central image block (generally 1), and the pixel value of the target pixel in the central image block is obtained by weighted averaging, i.e., the filtering of the target pixel is realized. The specific calculation formula is shown in the following formula (3):
\[ u'(A_n) = \frac{1}{C} \sum_{i} w_i \, u(A_i), \qquad C = \sum_{i} w_i \tag{3} \]
wherein u′(A_n) represents the pixel value of the filtered central image block, u(A_i) represents the pixel values of the central image block before filtering and of all the neighborhood image blocks, w_i represents the weights corresponding to the central image block and all the neighborhood image blocks, and C = Σ w_i is the weight normalization value.
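The weighted average of equation (3) can be sketched as follows (the function name is ours; the central block carries weight 1, as stated above):

```python
import numpy as np

def filter_center_block(center, neighbors, weights):
    # u'(A_n) = (1/C) * sum_i w_i * u(A_i): the sum runs over the
    # central block (weight 1) and all neighborhood blocks, with C
    # the sum of all the weights.
    acc = center.astype(np.float64)   # central block, weight 1
    norm = 1.0
    for block, w in zip(neighbors, weights):
        acc += w * block.astype(np.float64)
        norm += w
    return acc / norm
```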
As can be seen from the above, in the image processing method provided in this embodiment of the present application, after a central image block and a neighborhood image block in at least one region of an image are obtained, the distance between the neighborhood image block and the central image block is calculated, the weight of the neighborhood image block is then found from a preset table based on that distance, and finally the pixels in the central image block are filtered based on the weight of the neighborhood image block, the pixel values of the neighborhood image block, the weight of the central image block and the pixel values of the central image block. When determining the weight corresponding to each image block based on the distance between image blocks, instead of calculating the weight of each image block one by one with a Gaussian function, the weight corresponding to each distance is found directly from the preset table by table lookup. Because a table lookup is simpler than evaluating the function, the complexity of the image noise reduction calculation is reduced, and the cost of the hardware circuit is reduced accordingly.
Further, as a refinement and an extension of the method shown in fig. 1, an embodiment of the present application further provides an image processing method. Fig. 5 is a second flowchart of an image processing method in an embodiment of the present application, and referring to fig. 5, the method may include:
s501: a target region of the image is determined.
When performing noise reduction processing on an image, noise reduction processing is not directly performed on the entire image, but noise reduction processing is performed on each region in the image, and therefore, it is first necessary to identify one region in the image and perform noise reduction processing on the region. When the noise reduction processing is completed for all the regions in the image, the noise reduction processing of the image is completed.
The target area may specifically be determined in the image by means of a sliding window. Still referring to FIG. 2, in image X, the region Y1 is a target area determined using the sliding window. Of course, by continuing to move the sliding window, regions Y2, Y3 and so on can also be determined. The specific number of target areas can be determined according to the size of the image and the size of the sliding window, as long as all the determined target areas together cover the image. In order to process the image quickly and comprehensively, the target areas determined by the sliding window may be non-overlapping.
When the pixel to be filtered is located at the image boundary, that is, when the pixel to be filtered is located in the center of the sliding window and the sliding window is not located on the image completely, in order to continue to perform noise reduction on the image in the sliding window, the missing image in the sliding window may be filled according to the entire image, and then the target region to be processed is obtained.
Fig. 6 is a schematic diagram of a sliding window not completely located in an image in the embodiment of the present application, and referring to fig. 6, a region 6011 in the sliding window 601 is located in the image X, and a region 6012 in the sliding window 601 is not located in the image X. In order to perform noise reduction processing on the target region in the sliding window 601, it is necessary to fill in the image missing in the region 6012.
In practical applications, the area of the sliding window that is not on the image can be filled in a mirrored manner, i.e., the image is flipped using the edge of the image inside the sliding window as the flip axis, filling in the missing part of the window. Of course, other ways of filling the areas of the sliding window that are not on the image are also possible; the specific filling manner is not limited here.
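The mirror filling can be sketched with NumPy's reflect padding, which flips the image about its edges exactly as described (a half-window of 3 corresponds to the 7 × 7 window; the function name is ours):

```python
import numpy as np

def pad_for_window(image, half_window):
    # Mirror-pad so that a full sliding window exists even when the
    # pixel to be filtered lies on the image boundary; "reflect" uses
    # the image edge as the flip axis without duplicating edge pixels.
    return np.pad(image, half_window, mode="reflect")
```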
S502: and acquiring a central image block and a neighborhood image block in the target area.
Step S502 is the same as step S101, and is not described herein again.
S503: judging whether the target area is a texture detail area; if yes, executing S504; if not, the target area can be considered as a flat area, then S505 is executed.
Of course, in other embodiments, it may also be determined whether the target area is a flat area; if yes, go to S505; if not, the target area can be considered as the texture detail area, then S504 is executed.
Different types of regions in the image have different emphasis points when denoising is performed, and for texture detail regions, detail information in the image needs to be reserved while denoising is performed, so that the denoising strength is not suitable to be too large. For the flat area, the detail information is not much, so the de-noising can be emphasized.
When determining whether the target region belongs to the flat region or the texture detail region, still referring to fig. 2, the specific steps are as follows:
Step one: determine the pixel values of the central image block A4 and of the neighborhood image blocks A0, A1, A2, A3, A5, A6, A7, A8 in the target region;
Step two: select the maximum pixel value max_val and the minimum pixel value min_val from the 9 pixel values;
Step three: calculate the difference diff = max_val − min_val between the maximum pixel value max_val and the minimum pixel value min_val;
Step four: compare the difference diff with a preset value diff_threshold; if the difference is smaller than the preset value, i.e., diff < diff_threshold, determine that the target region is a flat region; if the difference is greater than or equal to the preset value, i.e., diff ≥ diff_threshold, determine that the target region is a texture detail region.
The method for judging the type of the target area is simple and convenient. Of course, other ways may also be used to determine whether the target region belongs to the flat region or the texture detail region, and for the specific determination way, this is not limited here.
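The four steps above can be sketched as a small helper. The threshold value is an assumed tuning parameter, as the patent does not fix a specific number:

```python
def classify_region(pixel_values, diff_threshold):
    """Classify a target region as flat or texture-detail using the
    max-min pixel spread from steps one to four.
    pixel_values: the 9 representative pixel values of the central and
    neighborhood image blocks. diff_threshold is an assumed tuning value."""
    diff = max(pixel_values) - min(pixel_values)
    return "flat" if diff < diff_threshold else "texture"
```

A nearly uniform patch classifies as flat, so the subsequent distance calculation can use fewer pixel points and denoise more strongly.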
The central image block does not contain only one pixel point but generally contains a plurality of pixel points, so the central image block has a plurality of central pixel points. Similarly, the neighborhood image block does not contain only one pixel point but contains a plurality of pixel points, and the number and distribution of the pixel points in the neighborhood image block are generally the same as those in the central image block. Therefore, the neighborhood image block likewise has a plurality of neighborhood pixel points.
S504: and calculating pixel differences between the central pixel points and the neighborhood pixel points corresponding to the pixel positions.
That is, when the target area is a texture detail area, detail preservation of the image needs to be considered. Therefore, when calculating the distance between the central image block and the neighborhood image block, the pixel difference between each central pixel point in the central image block and the neighborhood pixel point at the corresponding pixel position in the neighborhood image block needs to be calculated. The reason why this allows the texture detail information in the central image block to be retained during denoising will be explained later (Chebyshev formula).
Fig. 7 is a schematic diagram of a certain neighborhood image block and a central image block in the embodiment of the present application. As shown in fig. 7, the neighborhood image block includes 9 pixels, namely the neighborhood pixel points q0, q1, q2, q3, q4, q5, q6, q7, q8, and the central image block includes 9 pixels, namely the central pixel points p0, p1, p2, p3, p4, p5, p6, p7, p8. When the distance between the neighborhood image block and the central image block is calculated, since the target region where the neighborhood image block is located is a texture detail region, the pixel difference between each central pixel point in the central image block and the neighborhood pixel point at the corresponding pixel position in the neighborhood image block needs to be calculated, i.e., p0-q0, p1-q1, p2-q2, p3-q3, p4-q4, p5-q5, p6-q6, p7-q7, p8-q8.
S505: and calculating the pixel difference between the N central pixel points and the neighborhood pixel points corresponding to the pixel positions.
The value of N is less than the total number of central pixel points in the central image block.
That is, when the target region is a flat region, detail preservation of the image need not be considered as much, and the emphasis can be placed on denoising. Therefore, when calculating the distance between the central image block and the neighborhood image block, the pixel differences between only a limited number of central pixel points in the central image block and the neighborhood pixel points at the corresponding pixel positions are calculated. The reason why this still denoises the central image block well will be explained later (Chebyshev formula).
Still referring to fig. 7, when calculating the distance between the neighborhood image block and the central image block, since the target region where the neighborhood image block is located is a flat region, the pixel differences between only some central pixel points in the central image block and the neighborhood pixel points at the corresponding pixel positions may be calculated, for example: p1-q1, p3-q3, p4-q4, p5-q5, p7-q7.
Specifically, there are many different ways to select which central pixel points participate, which is not limited here.
Fig. 8 is a first schematic diagram of several pixel position templates in the embodiment of the present application, and as shown in fig. 8, an image block includes 9 pixels, that is, 3 × 3, as an example. In 8a, all the pixels (pixels where shadows are located) in the image block are selected to participate in the distance operation. 8a applies to step S504. In 8b and 8c, it is selected that some pixels (pixels where shadows are located) in the image block participate in distance calculation. 8b and 8c are applied to step S505.
Of course, the positions of the partial pixel points participating in the operation are not limited to those in 8b and 8c; other combinations of positions are also possible, for example: the pixel points at the four corners, the 3 pixel points in the first column, and so on.
S506: and determining a target pixel difference with the maximum pixel difference from the pixel differences between at least one central pixel point and the neighborhood pixel points corresponding to the pixel positions, and taking the target pixel difference as the distance between the neighborhood image block and the central image block.
In the prior art, the Euclidean distance is used to calculate the distance between the neighborhood image block and the central image block, and the Euclidean distance involves many multiplications. For example, still referring to fig. 2 and 3, to calculate the distance between the neighborhood image block A0 and the central image block A4, it is necessary to compute the Euclidean terms for the neighborhood pixel point q0 and the central pixel point p0, for q1 and p1, ..., and for q8 and p8, which involves 9 multiplications. The target region Y1 contains 8 neighborhood image blocks in total, namely A0, A1, A2, A3, A5, A6, A7, A8, so 9 × 8 = 72 multiplications are involved. And this is only for calculating the distances between the neighborhood image blocks and the central image block in one region of the image X; the image X contains many such regions, so the number of multiplications is enormous.
In order to simplify the distance calculation, the embodiment of the present application abandons the Euclidean distance and uses the Chebyshev Distance to approximate it. That is, from the calculated pixel differences between the central pixel points and the neighborhood pixel points at corresponding pixel positions, the pixel difference with the largest value is selected and used as the distance between the neighborhood image block and the central image block. The specific calculation formula is shown in the following formula (4):
d_Chebyshev = max_k |p_k − q_k|    (4)
where d_Chebyshev represents the Chebyshev distance, i.e., the distance between the neighborhood image block and the central image block; p represents the central image block, q represents the neighborhood image block, and k represents the serial number of a pixel point within the image block.
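Formula (4) can be sketched in a few lines; the blocks are passed as flat lists of pixel values, which is an illustrative representation rather than the patent's data layout:

```python
def chebyshev_distance(p_block, q_block):
    """Chebyshev distance between a central block p and a neighborhood
    block q per formula (4): the largest absolute pixel difference over
    corresponding positions. Unlike the squared Euclidean distance,
    this needs no multiplications at all."""
    return max(abs(p - q) for p, q in zip(p_block, q_block))
```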
Here, based on the calculation formulas of the Euclidean distance and the Chebyshev distance, the relationship between the two can be derived as shown in the following formula (5):
d_Chebyshev(i) ≤ d_Euclidean(i) = sqrt( Σ_k |p_k − q_k|² ) ≤ sqrt(9) · d_Chebyshev(i)    (5)
where i denotes an image block.
Assuming that the noise follows a Gaussian distribution, in a flat area of the image most of the 9 pixel differences |p_k − q_k| obtained above are much smaller than d_Chebyshev, so the relationship can be approximated as shown in the following formula (6):
d_Euclidean(i) = sqrt( Σ_k |p_k − q_k|² ) ≈ d_Chebyshev(i)    (6)
that is to say, the Chebyshev distance approximation is adopted to replace the Euclidean distance to determine the distance between the neighborhood image block and the central image block, the error is not large, and the denoising accuracy is not reduced.
As is clear from the Chebyshev distance calculation formula, the distance between image blocks depends on the pixel position where the pixel difference is largest. The fewer the pixels participating in the calculation, the greater the chance that the image-block distance takes a smaller value, so the obtained weight is larger and the noise reduction strength is greater. Therefore, in a flat area, part of the pixel points in the image block can be selected for the distance operation; the obtained image-block distance is relatively small, the corresponding weight is relatively large, and a good noise reduction effect is achieved. In a detail texture region, all pixel points in the image block are selected for the distance operation; the obtained image-block distance is relatively large, the corresponding weight is relatively small, and excessive noise reduction is thereby avoided, ensuring that the texture details in the image block are not lost. Therefore, the image processing method provided by the embodiment of the present application can reduce the computational complexity, and hence the cost of a hardware circuit, and can balance denoising against retaining texture information, thereby obtaining better image quality.
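The flat-versus-texture behavior above can be sketched by restricting the Chebyshev distance to a pixel-position template, as in fig. 8. The concrete template index sets below are assumptions for illustration; the patent allows many position combinations:

```python
def block_distance(p_block, q_block, template):
    """Chebyshev block distance restricted to the pixel positions in
    `template` (cf. fig. 8): taking the max over fewer positions can
    only give an equal or smaller distance, hence a larger weight."""
    return max(abs(p_block[k] - q_block[k]) for k in template)

FULL_3X3 = list(range(9))        # all 9 positions, as in 8a (texture detail regions)
PARTIAL_CROSS = [1, 3, 4, 5, 7]  # an assumed cross-shaped subset, as in 8b (flat regions)
```

Because the partial template is a subset of the full one, `block_distance(..., PARTIAL_CROSS)` is never larger than `block_distance(..., FULL_3X3)`, which is exactly why flat regions receive larger weights and stronger noise reduction.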
S507: and determining the index number of the neighborhood image block based on the distance between the neighborhood image block and the central image block.
Compared with the square of the Euclidean distance, the value range of the Chebyshev distance is much smaller, that is, the range of the Chebyshev distance is limited. Therefore, all foreseeable Chebyshev distances can be calculated in advance, and the weight corresponding to each Chebyshev distance can be calculated respectively, so that after the distance between a neighborhood image block and the central image block is obtained, the corresponding weight is obtained directly based on that distance, replacing calculation with an indexed lookup.
In order to further reduce the length of the preset table, the preset table may store the corresponding relationship between the index number and the weight instead of the corresponding relationship between the distance and the weight. After the distance between the neighborhood image block and the central image block is obtained, the index number corresponding to the distance is calculated through an index formula, and then the weight corresponding to the distance is found out in a preset table according to the index number.
Specifically, after the distance between the neighborhood image block and the central image block is obtained, a preset distance is first subtracted from that distance to obtain a distance difference. The preset distance is determined based on the noise standard deviation of the image sensor that acquires the image data and the noise reduction intensity input by the user. Then, the distance difference is shifted right by a preset number of bits to obtain the index number of the neighborhood image block, where the preset number of bits is determined based on the noise standard deviation and the maximum length of the preset table. The index formula may be specifically represented by the following formula (7):
idx = (d_Chebyshev − d_thr) >> rshift    (7)
where idx denotes the index number, d_Chebyshev denotes the Chebyshev distance, and d_thr denotes the preset distance, determined by the noise standard deviation σ and the noise reduction intensity β (the original formula images defining d_thr and rshift are not reproduced here). β represents the noise reduction intensity and can be input by the user, who can thereby adjust the noise reduction strength of the image; σ represents the standard deviation of the noise; and >> rshift represents a right shift by a preset number of bits. rshift is used to reduce the value range of the index and can be determined according to the noise standard deviation and the maximum length of the preset table.
For example, assume d_thr = 110010 and rshift = 4 (values in binary). After determining that d_Chebyshev = 1100100, according to the index formula, (1100100 − 110010) >> 4 = 110010 >> 4 = 11, and 11 is the index number corresponding to 1100100.
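Formula (7) is a single subtraction and shift; the sketch below reproduces it, and the test mirrors the binary worked example above:

```python
def index_number(d_chebyshev, d_thr, rshift):
    """Index into the preset weight table per formula (7):
    idx = (d_Chebyshev - d_thr) >> rshift.
    The right shift collapses several nearby distances onto one
    index, keeping the table short."""
    return (d_chebyshev - d_thr) >> rshift
```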
S508: and finding out the weight corresponding to the index number of the neighborhood image block from the preset table.
Fig. 9 is a second schematic diagram of a preset table in the embodiment of the present application. Referring to fig. 9, the preset table stores the index numbers idx1, idx2, ..., idxn and their corresponding weights w1, w2, ..., wn. After the index number idx2 of a neighborhood image block is obtained, the weight w2 of that neighborhood image block can be found through the preset table.
Since the index numbers and the weights corresponding to the index numbers stored in the preset table are calculated in advance, each weight in the preset table needs to be prepared before the image is subjected to the noise reduction processing. The specific calculation steps of the weight are as follows:
the method comprises the following steps: and calculating initial weights corresponding to different distances by adopting a Gaussian function.
Step two: and adjusting the initial weight based on the noise reduction intensity input by the user to obtain weights corresponding to different distances, and storing the weights in a preset table.
In fact, when calculating the weight corresponding to the distance, the above steps one and two may be performed simultaneously. That is, the noise reduction strength is introduced into the gaussian function, and the weight corresponding to each distance is calculated by the gaussian function introduced with the noise reduction strength. The specific calculation formula is shown in the following formula (8):
w_i = exp( −d_Chebyshev(i)² / h² )    (8)
where w represents the weight, i represents the neighborhood image block, d_Chebyshev(i) represents the Chebyshev distance between the neighborhood image block and the central image block, β represents the denoising strength input by the user, σ represents the standard deviation of the noise, and h represents a filter coefficient determined by β and σ (the original formula image is not fully reproduced here).
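Steps one and two can be sketched together as a table precomputation. How β and σ enter the filter coefficient h is an assumption here (h = β · σ), since the exact formula image is not reproduced in the text; the Gaussian form itself follows formula (8):

```python
import math

def build_weight_table(distances, beta, sigma):
    """Precompute weights for all foreseeable Chebyshev distances.
    Step one: Gaussian falloff w = exp(-d^2 / h^2) per formula (8).
    Step two: the user noise-reduction strength beta widens or narrows
    the falloff via h = beta * sigma (an assumed combination)."""
    h = beta * sigma
    return [math.exp(-(d * d) / (h * h)) for d in distances]
```

A larger β flattens the falloff, so distant (less similar) blocks keep more weight and the denoising is stronger, matching the role of the user-input noise reduction intensity.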
S509: and filtering the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weights of the central image block and the pixel values of the central image block.
After the weights of all neighborhood image blocks are obtained, the pixel values of all neighborhood image blocks, the weight (default to 1) of the central image block and the pixel value of the central image block are combined, and the central pixel point of the central image block can be filtered by adopting a weighted average calculation mode. The specific calculation method is described in detail in step S104, and is not described herein again.
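The weighted-average step just described can be sketched as follows; the function signature is illustrative, with the central block's weight defaulting to 1 as stated above:

```python
def filter_center_pixel(center_value, neighbor_values, neighbor_weights,
                        center_weight=1.0):
    """Weighted-average filtering of the central pixel point: each
    neighborhood image block contributes its pixel value scaled by its
    looked-up weight; the central block contributes with weight 1."""
    num = center_weight * center_value + sum(
        w * v for w, v in zip(neighbor_weights, neighbor_values))
    den = center_weight + sum(neighbor_weights)
    return num / den
```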
The above describes the filtering process for the central pixel of the central image block, i.e., the pixel p4 in fig. 3. However, the image processing method provided in the embodiment of the present application is not limited to filtering the central pixel point of the central image block; it can also filter a pixel point at any position in the central image block, and can even filter a plurality of pixel points in the central image block at the same time. The whole processing procedure can be seen in the above steps S501 to S509. Only the differences will be described below.
Fig. 10 is a third schematic diagram of each image block in the embodiment of the present application. Referring to fig. 10, the size of the sliding window is 7 × 6. The sliding window contains the pixels A0, B0, A1, B1, A2, B2, A3, B3, A4, B4, A5, B5, A6, B6, A7, B7, A8, B8 and the unmarked pixels around them.
Fig. 11 is a fourth schematic diagram of each image block in the embodiment of the present application. Referring to fig. 11, when the image in the sliding window of fig. 10 needs to be divided into 3 × 3 image blocks, the size of each image block is 2 × 3. The image blocks do not overlap in the horizontal direction and overlap by 1 pixel in the vertical direction. Thus, A0, B0 and the 4 pixels above and below them constitute one image block, and so on, for a total of 9 image blocks. Among the 9 image blocks, A4, B4 and the 4 pixels above and below them constitute the central image block, which for convenience may be called the central image block A4B4. Accordingly, the aforementioned central image block A4 does not mean that the central image block contains only the single pixel A4; it also includes the 8 pixels around it. And A0, B0 and the 4 pixels above and below them, A1, B1 and the 4 pixels above and below them, ..., A8, B8 and the 4 pixels above and below them constitute the 8 corresponding neighborhood image blocks.
Within the sliding window, A4 and B4 can be filtered simultaneously. The specific calculation formulas are shown in the following formulas (9) and (10):
u(A4) = ( Σ_i w_i · A_i ) / C    (9)
u(B4) = ( Σ_i w_i · B_i ) / C    (10)
where u(A4) and u(B4) respectively represent the pixel values of A4 and B4 after filtering, A_i and B_i respectively represent the pixel values at the A4 and B4 positions of all neighborhood image blocks, w_i represents the weights of all neighborhood image blocks relative to A4 and B4 (A_i and B_i use the same weights, which saves computation), and C = Σ w_i is the weight normalization value.
FIG. 12 is a second schematic diagram of several pixel position templates in the embodiment of the present application. Referring to fig. 12, since the image blocks divided in fig. 11 are not 3 × 3 but 2 × 3, after the target area is determined to belong to a flat area, the pixel position template used when calculating the distance between a neighborhood image block and the central image block changes slightly, but the calculation idea is unchanged: part of the pixel points in the image block are still used for the distance calculation. In 12a, all the pixels in the image block (the shaded pixels) are selected to participate in the distance operation; 12a is suitable for the distance calculation of image blocks in detail texture regions. In 12b and 12c, some pixels in the image block (the shaded pixels) are selected to participate in the distance operation; 12b and 12c are suitable for the distance calculation of image blocks in flat areas.
Based on the same inventive concept, as an implementation of the method, the embodiment of the application further provides an image processing device. Fig. 13 is a schematic structural diagram of an image processing apparatus in an embodiment of the present application, and referring to fig. 13, the apparatus may include:
the receiving module 1301 is configured to obtain a central image block and a neighborhood image block in at least one region of an image, where the neighborhood image block is adjacent to the central image block.
A calculating module 1302, configured to calculate distances between the neighborhood image block and the center image block.
The searching module 1303 is configured to search out a weight corresponding to the distance from a preset table, where the preset table is used to represent corresponding relationships between different distances and weights, and the weight is used as the weight of the neighborhood image block.
A filtering module 1304, configured to perform filtering processing on the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weight of the central image block, and the pixel value of the central image block.
Further, as a refinement and an extension of the apparatus shown in fig. 13, an embodiment of the present application also provides an image processing apparatus. Fig. 14 is a schematic structural diagram of an image processing apparatus in an embodiment of the present application, and referring to fig. 14, the apparatus may include:
the storage module 1401, comprising:
a first calculating unit 1401a, configured to calculate initial weights corresponding to different distances by using a Gaussian function.
And a storage unit 1401b, configured to adjust the initial weight based on the noise reduction strength input by the user, obtain weights corresponding to different distances, and store the weights in the preset table.
A first determining module 1402 comprising:
a sliding window unit 1402a for determining a target area in the image using a sliding window.
A filling unit 1402b, configured to, when the target region is not completely located in the image, fill a region located outside the image in the target region based on the image, so as to obtain the at least one filled region.
A receiving module 1403, configured to obtain a central image block and a neighborhood image block in at least one region of an image, where the neighborhood image block is adjacent to the central image block.
The central image block comprises a plurality of central pixel points, and the neighborhood image block comprises a plurality of neighborhood pixel points.
The second determining module 1404 includes:
a first determining unit 1404a configured to determine pixel values of the central image block and pixel values of the neighborhood image blocks.
A selecting unit 1404b configured to select a maximum pixel value and a minimum pixel value from the pixel values of the central image block and the pixel values of the neighborhood image blocks.
A second calculating unit 1404c for calculating a difference value between the maximum pixel value and the minimum pixel value.
A second determining unit 1404d for determining the at least one region as a flat region when the difference is smaller than a preset value; and when the difference value is greater than or equal to a preset value, determining that the at least one area is a detail texture area.
A calculation module 1405, comprising:
the third calculating unit 1405a is configured to calculate a pixel difference between at least one central pixel point and a neighboring pixel point of the corresponding pixel position.
The third calculating unit 1405a is specifically configured to calculate, when the at least one region is a flat region, pixel differences between N central pixel points and neighborhood pixel points at corresponding pixel positions, where a value of the N is smaller than a total number of the central pixel points in the central image block; and when the at least one region is a texture detail region, calculating pixel differences between the central pixel points and neighborhood pixel points corresponding to the pixel positions.
A third determining unit 1405b, configured to determine, from pixel differences between the at least one central pixel point and a neighboring pixel point at a corresponding pixel position, a target pixel difference with a largest pixel difference, and use the target pixel difference as a distance between the neighboring image block and the central image block.
The preset table comprises a corresponding relation between a plurality of index numbers and weights, and one index number in the preset table corresponds to at least one distance.
The lookup module 1406, comprising:
a fourth determining unit 1406a, configured to determine the index number of the neighborhood image block based on the distance between the neighborhood image block and the center image block.
The fourth determining unit 1406a is specifically configured to subtract a preset distance from a distance between the neighboring image block and the center image block to obtain a distance difference, where the preset distance is determined based on a noise standard deviation of an image sensor that acquires the image data and a noise reduction strength input by a user; and shifting the distance difference to the right according to a preset digit to obtain the index number of the neighborhood image block, wherein the preset digit is determined based on the noise standard deviation and the maximum length of the preset table.
The searching unit 1406b is configured to search the weight corresponding to the index number of the neighborhood image block from the preset table.
A filtering module 1407, configured to perform filtering processing on the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weights of the central image block, and the pixel values of the central image block.
Fig. 14 shows the respective modules and the signal flow direction between the respective units in the modules.
It is to be noted here that the above description of the embodiments of the apparatus, similar to the description of the embodiments of the method described above, has similar advantageous effects as the embodiments of the method. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
Based on the same inventive concept, the embodiment of the application also provides the electronic equipment. Fig. 15 is a schematic structural diagram of an electronic device in an embodiment of the present application, and referring to fig. 15, the electronic device may include: a processor 1501, memory 1502, bus 1503; the processor 1501 and the memory 1502 communicate with each other via a bus 1503; the processor 1501 is used to call program instructions in the memory 1502 to perform the methods in one or more of the embodiments described above.
It is to be noted here that the above description of the embodiments of the electronic device, similar to the description of the embodiments of the method described above, has similar advantageous effects as the embodiments of the method. For technical details not disclosed in the embodiments of the electronic device of the present application, refer to the description of the embodiments of the method of the present application for understanding.
Based on the same inventive concept, the embodiment of the present application further provides a computer-readable storage medium, where the storage medium may include: a stored program; wherein the program controls the device on which the storage medium is located to execute the method in one or more of the above embodiments when the program runs.
It is to be noted here that the above description of the storage medium embodiments, like the description of the above method embodiments, has similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1.一种图像处理方法,其特征在于,所述方法包括:1. an image processing method, is characterized in that, described method comprises: 获取图像的至少一个区域中的中心图像块和邻域图像块,所述邻域图像块与所述中心图像块相邻;acquiring a center image block and a neighborhood image block in at least one area of the image, the neighborhood image block being adjacent to the center image block; 计算所述邻域图像块与所述中心图像块的距离;calculating the distance between the neighborhood image block and the center image block; 从预设表中查找出所述距离对应的权重,并作为所述邻域图像块的权重,所述预设表用于表征不同距离与权重的对应关系;Find out the weight corresponding to the distance from a preset table, and use it as the weight of the neighborhood image block, and the preset table is used to represent the correspondence between different distances and weights; 基于所述邻域图像块的权重、所述邻域图像块的像素值、所述中心图像块的权重以及所述中心图像块的像素值对所述中心图像块中的像素进行滤波处理。The pixels in the center image block are filtered based on the weight of the neighborhood image block, the pixel value of the neighborhood image block, the weight of the center image block, and the pixel value of the center image block. 2.根据权利要求1所述的方法,其特征在于,所述预设表中包含有多个索引号与权重的对应关系,所述预设表中的一个索引号对应至少一个距离;所述从预设表中查找出所述距离对应的权重,包括:2. The method according to claim 1, wherein the preset table includes a plurality of correspondences between index numbers and weights, and an index number in the preset table corresponds to at least one distance; the Find out the weight corresponding to the distance from the preset table, including: 基于所述邻域图像块与所述中心图像块的距离确定所述邻域图像块的索引号;Determine the index number of the neighborhood image block based on the distance between the neighborhood image block and the center image block; 从所述预设表中查找出所述邻域图像块的索引号对应的权重。The weight corresponding to the index number of the neighborhood image block is searched out from the preset table. 3.根据权利要求2所述的方法,其特征在于,所述基于所述邻域图像块与所述中心图像块的距离确定所述邻域图像块的索引号,包括:3. 
The method according to claim 2, wherein the determining the index number of the neighborhood image block based on the distance between the neighborhood image block and the center image block comprises: 将所述邻域图像块与所述中心图像块的距离与预设距离相减,得到距离差,所述预设距离基于获取所述图像数据的图像传感器的噪声标准差和用户输入的降噪强度确定;The distance between the neighborhood image block and the center image block is subtracted from a preset distance to obtain a distance difference, and the preset distance is based on the noise standard deviation of the image sensor that acquired the image data and the noise reduction input by the user strength is determined; 按照预设位数右移所述距离差,得到所述邻域图像块的索引号,所述预设位数基于所述噪声标准差和所述预设表的最大长度确定。The distance difference is shifted to the right according to a preset number of bits to obtain the index number of the neighborhood image block, and the preset number of bits is determined based on the noise standard deviation and the maximum length of the preset table. 4.根据权利要求1所述的方法,其特征在于,在所述从预设表中查找出所述距离对应的权重之前,所述方法还包括:4. The method according to claim 1, wherein before finding out the weight corresponding to the distance from the preset table, the method further comprises: 采用高斯函数计算不同距离对应的初始权重;Use Gaussian function to calculate the initial weights corresponding to different distances; 基于用户输入的降噪强度调整所述初始权重,得到不同距离对应的权重,并存储在所述预设表中。The initial weight is adjusted based on the noise reduction intensity input by the user, and weights corresponding to different distances are obtained and stored in the preset table. 5.根据权利要求1至4中任一项所述的方法,其特征在于,所述中心图像块包括多个中心像素点,所述邻域图像块包括多个邻域像素点;所述计算所述邻域图像块与所述中心图像块的距离,包括:5. 
The method according to any one of claims 1 to 4, wherein the center image block comprises a plurality of center pixels and the neighborhood image block comprises a plurality of neighborhood pixels; and calculating the distance between the neighborhood image block and the center image block comprises: computing the pixel difference between at least one center pixel and the neighborhood pixel at the corresponding pixel position; and determining, among those pixel differences, the largest one as the target pixel difference, and using the target pixel difference as the distance between the neighborhood image block and the center image block.

6. The method according to claim 5, wherein computing the pixel difference between at least one center pixel and the corresponding neighborhood pixel comprises: when the at least one region is a flat region, computing the pixel differences between N center pixels and the neighborhood pixels at the corresponding pixel positions, N being smaller than the total number of center pixels in the center image block; and when the at least one region is a texture-detail region, computing the pixel differences between the plurality of center pixels and the neighborhood pixels at the corresponding pixel positions.

7.
The method according to claim 6, wherein, before computing the pixel difference between at least one center pixel and the corresponding neighborhood pixel, the method further comprises: determining the pixel values of the center image block and the pixel values of the neighborhood image block; selecting a maximum pixel value and a minimum pixel value from the pixel values of the center image block and the pixel values of the neighborhood image block; computing the difference between the maximum pixel value and the minimum pixel value; when the difference is smaller than a preset value, determining that the at least one region is a flat region; and when the difference is greater than or equal to the preset value, determining that the at least one region is a texture-detail region.

8. The method according to any one of claims 1 to 4, wherein, before acquiring the center image block and the neighborhood image block in the at least one region of the image, the method further comprises: determining a target region in the image using a sliding window; and when the target region does not lie entirely within the image, padding the part of the target region outside the image based on the image to obtain the padded at least one region.

9.
An image processing apparatus, comprising: a receiving module configured to acquire a center image block and a neighborhood image block in at least one region of an image, the neighborhood image block being adjacent to the center image block; a calculation module configured to calculate a distance between the neighborhood image block and the center image block; a lookup module configured to look up a weight corresponding to the distance in a preset table and use it as the weight of the neighborhood image block, the preset table representing correspondences between different distances and weights; and a filtering module configured to filter the pixels in the center image block based on the weight of the neighborhood image block, the pixel values of the neighborhood image block, the weight of the center image block, and the pixel values of the center image block.

10. An electronic device, comprising a processor, a memory, and a bus, wherein the processor and the memory communicate with each other through the bus, and the processor is configured to call program instructions in the memory to execute the method according to any one of claims 1 to 8.

11. A computer-readable storage medium comprising a stored program, wherein, when the program runs, a device in which the storage medium resides is controlled to execute the method according to any one of claims 1 to 8.
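The filtering step of claim 1 amounts to a weighted average of co-located pixels across the center block and its neighborhood blocks. A minimal sketch in Python, assuming blocks are lists of rows; the function name and the normalization by the weight sum are illustrative conventions, not taken from the patent:

```python
# Illustrative sketch of the filter in claim 1: each output pixel of the
# center block is a weighted average of the co-located pixels of the center
# block and its neighborhood blocks. Weights would come from the preset
# table; here they are passed in directly for simplicity.
def filter_center_block(center_block, neighbor_blocks, neighbor_weights,
                        center_weight=1.0):
    rows, cols = len(center_block), len(center_block[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = center_weight * center_block[i][j]   # center contribution
            norm = center_weight
            for block, w in zip(neighbor_blocks, neighbor_weights):
                acc += w * block[i][j]                 # neighborhood contribution
                norm += w
            out[i][j] = acc / norm                     # normalized weighted average
    return out
```

With one neighborhood block of equal weight, each output pixel is simply the mean of the two co-located pixels.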
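Claims 3 and 4 describe how the preset table could be built and indexed: Gaussian initial weights adjusted by a user noise-reduction strength, with the index computed as `(distance - preset distance) >> preset bits`. A hedged sketch, where `TABLE_LEN`, `SHIFT_BITS`, `PRESET_DISTANCE`, and the strength scaling are hypothetical stand-ins for quantities the patent derives from the sensor noise standard deviation and the table's maximum length:

```python
# Illustrative sketch of the preset weight table (claims 3-4).
# TABLE_LEN, SHIFT_BITS, PRESET_DISTANCE and the strength scaling are
# assumptions; the patent only states they depend on the sensor noise
# standard deviation, the noise-reduction strength, and the table length.
import math

TABLE_LEN = 64        # assumed maximum table length
SHIFT_BITS = 2        # assumed preset number of bits
PRESET_DISTANCE = 8   # assumed preset distance

def build_weight_table(sigma=10.0, strength=1.0):
    """Claim 4: Gaussian initial weights, adjusted by the noise-reduction strength."""
    table = []
    for idx in range(TABLE_LEN):
        # Representative distance for this index (inverse of the index mapping).
        d = PRESET_DISTANCE + (idx << SHIFT_BITS)
        w = math.exp(-(d * d) / (2.0 * sigma * sigma))  # initial Gaussian weight
        table.append(w * strength)                       # strength adjustment
    return table

def weight_for_distance(table, distance):
    """Claim 3: index = (distance - preset distance) >> preset bits, then look up."""
    diff = max(distance - PRESET_DISTANCE, 0)
    idx = min(diff >> SHIFT_BITS, TABLE_LEN - 1)  # clamp to the table length
    return table[idx]
```

The right-shift replaces a division by a power of two, so the lookup needs no floating-point arithmetic at run time; only the table construction does.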
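Claims 5 through 7 define the block distance as the largest per-pixel difference, compared over fewer pixels when the region is flat, with flatness judged by the max-min spread over both blocks. A sketch under assumed parameters: `FLAT_THRESHOLD` and the stride-2 subsampling are illustrative, since the patent only says N is smaller than the total pixel count:

```python
# Illustrative sketch of claims 5-7: the block distance is the largest
# per-pixel difference; flat regions compare only a subsample of pixels.
FLAT_THRESHOLD = 20  # assumed preset value for the flat/texture decision

def classify_region(center_block, neighbor_block):
    """Claim 7: flat if (max pixel - min pixel) over both blocks < threshold."""
    pixels = [p for row in center_block for p in row]
    pixels += [p for row in neighbor_block for p in row]
    return "flat" if max(pixels) - min(pixels) < FLAT_THRESHOLD else "texture"

def block_distance(center_block, neighbor_block):
    """Claims 5-6: max |center - neighbor| over the compared pixel positions."""
    region = classify_region(center_block, neighbor_block)
    step = 2 if region == "flat" else 1  # flat: N < total pixels are compared
    diffs = [
        abs(center_block[i][j] - neighbor_block[i][j])
        for i in range(0, len(center_block), step)
        for j in range(0, len(center_block[0]), step)
    ]
    return max(diffs)
```

Skipping pixels in flat regions saves arithmetic where a coarse distance estimate is sufficient, while texture-detail regions keep the full comparison to preserve edges.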
CN202110828277.3A 2021-07-22 2021-07-22 Image processing method, device, electronic device and storage medium Active CN113643198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110828277.3A CN113643198B (en) 2021-07-22 2021-07-22 Image processing method, device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110828277.3A CN113643198B (en) 2021-07-22 2021-07-22 Image processing method, device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN113643198A true CN113643198A (en) 2021-11-12
CN113643198B CN113643198B (en) 2024-07-16

Family

ID=78417959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110828277.3A Active CN113643198B (en) 2021-07-22 2021-07-22 Image processing method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113643198B (en)



Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9262810B1 (en) * 2014-09-03 2016-02-16 Mitsubishi Electric Research Laboratories, Inc. Image denoising using a library of functions
US20160086317A1 (en) * 2014-09-23 2016-03-24 Intel Corporation Non-local means image denoising with detail preservation using self-similarity driven blending
CN107004255A (en) * 2014-09-23 2017-08-01 英特尔公司 The non-local mean image denoising retained with details of mixing is driven using self-similarity
WO2018134128A1 (en) * 2017-01-19 2018-07-26 Telefonaktiebolaget Lm Ericsson (Publ) Filtering of video data using a shared look-up table
CN111784605A (en) * 2020-06-30 2020-10-16 珠海全志科技股份有限公司 Image denoising method based on region guidance, computer device and computer readable storage medium
CN111861938A (en) * 2020-07-30 2020-10-30 展讯通信(上海)有限公司 Image denoising method and device, electronic equipment and readable storage medium
CN111882504A (en) * 2020-08-05 2020-11-03 展讯通信(上海)有限公司 Method and system for processing color noise in image, electronic device and storage medium
CN112508810A (en) * 2020-11-30 2021-03-16 上海云从汇临人工智能科技有限公司 Non-local mean blind image denoising method, system and device
CN112435156A (en) * 2020-12-08 2021-03-02 烟台艾睿光电科技有限公司 Image processing method, device, equipment and medium based on FPGA
CN112884667A (en) * 2021-02-04 2021-06-01 湖南兴芯微电子科技有限公司 Bayer domain noise reduction method and noise reduction system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066765A (en) * 2021-11-23 2022-02-18 上海闻泰信息技术有限公司 Image denoising method, image denoising device, electronic device, medium, and program product
CN114723620A (en) * 2022-03-01 2022-07-08 北京奕斯伟计算技术有限公司 Image processing method and device, electronic equipment and storage medium
CN115439347A (en) * 2022-08-11 2022-12-06 广东技术师范大学 Image denoising system, method, device and storage medium
CN115619805A (en) * 2022-10-21 2023-01-17 黑芝麻智能科技(成都)有限公司 Image filtering method, device, controller and readable storage medium
CN116630179A (en) * 2023-04-20 2023-08-22 浙江大华技术股份有限公司 An image enhancement method, device and storage medium
CN116630179B (en) * 2023-04-20 2025-07-25 浙江大华技术股份有限公司 Image enhancement method, device and storage medium

Also Published As

Publication number Publication date
CN113643198B (en) 2024-07-16

Similar Documents

Publication Publication Date Title
CN113643198A (en) Image processing method, device, electronic device and storage medium
KR102675217B1 (en) Image signal processor for processing images
CN112381743B (en) Image processing methods, apparatus, devices and storage media
JP6469678B2 (en) System and method for correcting image artifacts
US9558543B2 (en) Image fusion method and image processing apparatus
CN111563552B (en) Image fusion method and related equipment and device
US20180007337A1 (en) Hardware-Based Convolutional Color Correction in Digital Images
KR101526031B1 (en) Techniques for reducing noise while preserving contrast in an image
US8284271B2 (en) Chroma noise reduction for cameras
EP3284060A1 (en) Convolutional color correction
WO2018082185A1 (en) Image processing method and device
CN111784605A (en) Image denoising method based on region guidance, computer device and computer readable storage medium
TWI703872B (en) Circuitry for image demosaicing and enhancement
CN111510691A (en) Color interpolation method and device, equipment and storage medium
CN111183630B (en) A kind of photo processing method and processing device of intelligent terminal
CN115564694A (en) Image processing method and device, computer readable storage medium and electronic device
CN113168669A (en) Image processing method and device, electronic equipment and readable storage medium
CN111354058B (en) Image coloring method and device, image acquisition equipment and readable storage medium
CN111080683B (en) Image processing method, device, storage medium and electronic equipment
JP5076186B2 (en) Image enlargement method
JP2008113222A (en) Image processing apparatus, photographing apparatus, image processing method therefor, and program causing computer to execute the method
CN116017172A (en) A noise reduction method for Raw domain images and its device, camera and terminal
CN111242087B (en) Object recognition method and device
CN117745563B (en) A dual-camera combined tablet computer enhanced display method
CN114723613A (en) Image processing method and device, electronic device, storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
    Address after: Room 263, block B, science and technology innovation center, 128 Shuanglian Road, Haining Economic Development Zone, Haining City, Jiaxing City, Zhejiang Province, 314400
    Applicant after: Haining yisiwei IC Design Co.,Ltd.; Beijing ESWIN Computing Technology Co.,Ltd.
    Address before: Room 263, block B, science and technology innovation center, 128 Shuanglian Road, Haining Economic Development Zone, Haining City, Jiaxing City, Zhejiang Province, 314400
    Applicant before: Haining yisiwei IC Design Co.,Ltd.; Beijing yisiwei Computing Technology Co.,Ltd.
GR01 Patent grant
CB02 Change of applicant information
    Country or region after: China
    Address after: 314400 Building 1, Juanhu Science and Technology Innovation Park, No. 500 Shuiyueting East Road, Xiashi Street, Haining City, Jiaxing City, Zhejiang Province (self declared)
    Applicant after: Haining Yisiwei Computing Technology Co.,Ltd.; Beijing ESWIN Computing Technology Co.,Ltd.
    Country or region before: China
    Address before: Room 263, block B, science and technology innovation center, 128 Shuanglian Road, Haining Economic Development Zone, Haining City, Jiaxing City, Zhejiang Province, 314400
    Applicant before: Haining yisiwei IC Design Co.,Ltd.; Beijing ESWIN Computing Technology Co.,Ltd.