CN116012268A - Image filtering method and device, chip and module equipment - Google Patents
- Publication number: CN116012268A
- Application number: CN202211590245.5A
- Authority: CN (China)
- Prior art keywords: pixel, pixels, image, chromaticity, processed
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classification
- Image Processing (AREA)
Abstract
The application discloses an image filtering method and device, a chip, and module equipment. The method comprises: determining a first pixel in an image to be processed based on a target parameter of each pixel in the image to be processed, the target parameter including one or more of brightness, chromaticity, or saturation, wherein the image to be processed is obtained after de-interlacing, the region corresponding to the first pixel contains color burr lines, and a color burr line refers to a jagged boundary between two adjacent color blocks; and filtering the first pixel. The method helps avoid filtering image areas without color burr lines.
Description
Technical Field
The present application relates to the field of communications, and in particular to an image filtering method, an image filtering device, a chip, and module equipment.
Background
Images can generally be classified into interlaced images and progressive images. A display shows an interlaced image by interlaced scanning: only half of the scan lines are displayed at a time, with the even-line and odd-line scan lines displayed alternately. Due to persistence of vision, the human eye does not notice that only half of the scan lines are shown at any moment, but perceives a complete frame. A display shows a progressive image by non-interlaced scanning, in which all scan lines display the image continuously. When a display designed for progressive images plays an interlaced image, the brightness is reduced and serious flicker occurs, because only half of the scan lines corresponding to the interlaced image are available at a time, i.e. the even-line and odd-line scan lines are displayed alternately. Therefore, the interlaced image needs to be de-interlaced.
De-interlacing (also called deinterlacing) refers to a method of converting interlaced images into progressive images. However, since an interlaced image displays only half of the scan lines at a time, it carries half as much information as a progressive image. For example, consider an interlaced digital camera that captures sixty fields per second: the first field is captured at 1/60 s and the second at 2/60 s. If the photographed object moves between the two captures, the image frame obtained by de-interlacing the two fields will contain color burr lines, i.e. the boundary between two adjacent color blocks takes on a jagged shape, causing color blurring. As shown in fig. 1, the first field corresponds to the odd-line scan lines and displays the pixels of the odd lines, and the second field corresponds to the even-line scan lines and displays the pixels of the even lines. The two fields are shot by an interlaced camera in two consecutive time units; assuming a time unit of 1/60 s, the first field is shot at 1/60 s and the second at 2/60 s. Between the images corresponding to the first and second fields, the color block corresponding to the grey pixels shifts left by two pixels. Image frame A is an image shot by a non-interlaced camera at the same time as the first field; in it, the boundary between the grey color block and the white color block is a straight line.
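The "weave" style of de-interlacing just described can be illustrated with a minimal Python sketch (illustrative only; the patent gives no code). It interleaves an odd-line field and an even-line field into one frame; because the grey block moves left by two pixels between the two capture times, the woven frame shows a jagged boundary instead of the straight edge of image frame A.

```python
def weave(field_odd, field_even):
    """Interleave an odd-line field and an even-line field into one frame."""
    height = len(field_odd) + len(field_even)
    width = len(field_odd[0])
    frame = [[0] * width for _ in range(height)]
    frame[0::2] = field_odd   # rows 0, 2, 4, ... come from the odd-line field
    frame[1::2] = field_even  # rows 1, 3, 5, ... come from the even-line field
    return frame

# 1 = grey block, 0 = white background. The block occupies columns 2-5 at
# 1/60 s and columns 0-3 at 2/60 s (shifted left by two pixels).
field_t1 = [[0, 0, 1, 1, 1, 1] for _ in range(3)]  # first field (odd lines)
field_t2 = [[1, 1, 1, 1, 0, 0] for _ in range(3)]  # second field (even lines)
frame = weave(field_t1, field_t2)
# Adjacent rows of `frame` now alternate between the two block positions,
# so the grey/white boundary zigzags: a color burr line.
```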
Image frame B is obtained by de-interlacing the first field and the second field. Because the object moved during shooting, the boundary between the grey color block and the white color block in image frame B is not a straight line but a zigzag line, which may be referred to as a color burr line.
Since color burr lines degrade the image, current practice is to filter the entire image with a filter; however, this approach also affects image areas that contain no color burr lines, resulting in color distortion. How to avoid filtering image areas without color burr lines is therefore a problem to be solved.
Disclosure of Invention
The application provides an image filtering method and device, a chip, and module equipment, which help avoid filtering image areas without color burr lines.
In a first aspect, the present application provides an image filtering method, the method comprising: determining a first pixel in an image to be processed based on a target parameter of each pixel in the image to be processed, the target parameter including one or more of brightness, chromaticity, or saturation, wherein the image to be processed is obtained after de-interlacing, the region corresponding to the first pixel contains color burr lines, and a color burr line refers to a jagged boundary between two adjacent color blocks; and filtering the first pixel.
Based on the method described in the first aspect, since the region of the first pixel contains the color burr lines, only the first pixel is filtered. In other words, only the image area in which color burr lines were produced by de-interlacing is filtered, which helps avoid the color distortion that would be caused by needlessly filtering image areas without color burr lines.
In one possible implementation, the target parameter is chromaticity or saturation, and determining the first pixel in the image to be processed based on the target parameter of each pixel is implemented as follows: if a second pixel among M second pixels in the image to be processed satisfies a first condition, the M second pixels are determined to be first pixels, where M is an integer greater than or equal to 1. The first condition is: the absolute value of the difference between the target parameters of the second pixel and a third pixel is greater than a first threshold; the absolute value of the difference between the target parameters of the second pixel and a fourth pixel is greater than the first threshold; the third pixel and the fourth pixel are both adjacent to the second pixel; the target parameter of the second pixel is the maximum or minimum of the target parameters of the second, third, and fourth pixels; and the second, third, and fourth pixels lie in the same column. Filtering the first pixel is then implemented as filtering the target parameter of the first pixel. The electronic device may use this method to traverse all pixels of the image to be processed to determine the first pixels. Since a color burr line is jagged, if the absolute values of the differences in chromaticity or saturation among three consecutive pixels exceed the first threshold and the chromaticity or saturation of the middle pixel is the minimum or maximum of the three, the area corresponding to the three pixels can be determined to be a color burr line. This implementation helps determine the color burr lines more accurately.
In one possible implementation, determining the first pixel in the image to be processed based on the target parameters of each pixel is implemented as follows: if a second pixel among M second pixels in the image to be processed satisfies a first condition, the M second pixels are determined to be first pixels, where M is an integer greater than or equal to 1. Here the target parameters include chromaticity and saturation, and the first condition includes one or more of the following: the absolute value of the difference in chromaticity between the second pixel and a third pixel is greater than a first threshold, the absolute value of the difference in chromaticity between the second pixel and a fourth pixel is greater than the first threshold, the third pixel and the fourth pixel are both adjacent to the second pixel, and the chromaticity of the second pixel is the maximum or minimum of the chromaticities of the second, third, and fourth pixels; the absolute value of the difference in saturation between the second pixel and the third pixel is greater than the first threshold, the absolute value of the difference in saturation between the second pixel and the fourth pixel is greater than the first threshold, the third pixel and the fourth pixel are both adjacent to the second pixel, and the saturation of the second pixel is the maximum or minimum of the saturations of the second, third, and fourth pixels. Filtering the first pixel is then implemented as filtering the saturation and chromaticity of the first pixel. The electronic device may use this method to traverse all pixels of the image to be processed to determine the first pixels.
Since a color burr line is jagged, if the absolute values of the differences in chromaticity and/or saturation among three consecutive pixels exceed the first threshold and the chromaticity and/or saturation of the middle pixel is the minimum or maximum of the three, the area corresponding to the three pixels can be determined to be a color burr line. This implementation helps determine the color burr lines more accurately.
In one possible implementation, determining the M second pixels to be first pixels when a second pixel among them satisfies the first condition is implemented as follows: if a second pixel among the M second pixels satisfies the first condition and none of the M second pixels satisfies a second condition, the M second pixels are determined to be first pixels. The second condition is that the absolute value of the difference in brightness between the second pixel and the third pixel is greater than a third threshold, the absolute value of the difference in brightness between the second pixel and the fourth pixel is greater than the third threshold, and the brightness of the second pixel is the maximum or minimum of the brightnesses of the second, third, and fourth pixels. The electronic device may use this method to traverse all pixels of the image to be processed to determine the first pixels. Color burr lines mainly arise because the chromaticity and saturation of some pixels are corrupted during de-interlacing, and brightness may interfere with the chromaticity or saturation judgment; excluding cases where the brightness itself forms a jagged line therefore allows the color burr lines to be determined more accurately.
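A possible reading of this luma-exclusion check, sketched in Python. The predicate shape and the threshold values th1 and th3 are illustrative assumptions, not taken from the patent:

```python
def is_spike(a, b, c, th):
    """True if the middle value b differs from both neighbours a and c by
    more than th and is the maximum or minimum of the three values, i.e.
    the column triple forms a jagged extremum."""
    return (abs(b - a) > th and abs(b - c) > th
            and (b == max(a, b, c) or b == min(a, b, c)))

def is_burr_pixel(chroma_triple, luma_triple, th1=30, th3=30):
    """Flag a second pixel only when its chroma forms a spike (first
    condition) and its luma does NOT (second condition): a luma spike
    suggests genuine image detail rather than a de-interlacing artifact."""
    cu_a, cu_b, cu_c = chroma_triple  # third, second, fourth pixel chroma
    y_a, y_b, y_c = luma_triple       # third, second, fourth pixel luma
    return is_spike(cu_a, cu_b, cu_c, th1) and not is_spike(y_a, y_b, y_c, th3)
```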
In one possible implementation, determining the M second pixels to be first pixels is implemented as follows: a target area is determined based on the M second pixels, where the target area comprises N pixels, the M second pixels lie within the target area, and N is an integer greater than M; the N pixels are then determined to be first pixels. Since a color burr line generally appears within a single area, this implementation makes it possible to determine the position of the color burr line completely.
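The patent does not spell out how the target area is derived from the M second pixels; one simple possibility, shown purely as an assumption, is to take the bounding box of the flagged positions so that all N pixels around the jagged boundary are filtered together:

```python
def target_area(flagged):
    """flagged: iterable of (row, col) positions of the M second pixels.
    Returns every (row, col) inside their bounding box — the N pixels of
    the target area (N >= M)."""
    rows = [r for r, _ in flagged]
    cols = [c for _, c in flagged]
    return [(r, c)
            for r in range(min(rows), max(rows) + 1)
            for c in range(min(cols), max(cols) + 1)]
```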
In one possible implementation, the target parameter is chromaticity, and filtering the target parameter of the first pixel is implemented as filtering the chromaticity of the first pixel based on the following formula:
wherein Bu2 represents the chromaticity of the first pixel before filtering, Bu1 represents the chromaticity of the first pixel after filtering, Au represents the chromaticity of a sixth pixel adjacent to the first pixel, Cu represents the chromaticity of a seventh pixel adjacent to the first pixel, and Alpha is a value set by a register.
In a second aspect, the present application provides an apparatus comprising a determining unit and a processing unit. The determining unit is configured to determine a first pixel in an image to be processed based on a target parameter of each pixel in the image to be processed, the target parameter including one or more of brightness, chromaticity, or saturation, wherein the image to be processed is obtained by de-interlacing a first image and a second image, the first image and the second image are two adjacent frames, the first image displays the pixels of the odd lines, the second image displays the pixels of the even lines, the region corresponding to the first pixel contains color burr lines, and a color burr line refers to a jagged boundary between two adjacent color blocks. The processing unit is configured to filter the first pixel.
In a third aspect, the present application provides a chip comprising a processor configured to cause the chip to perform the method of the first aspect or any one of its possible implementations.
In a fourth aspect, the present application provides module equipment comprising a communication module, a power module, a storage module, and a chip, wherein: the power module is configured to supply power to the module equipment; the storage module is configured to store data and instructions; the communication module is configured to perform internal communication within the module equipment or communication between the module equipment and external devices; and the chip is configured to perform the method of the first aspect or any one of its possible implementations.
In a fifth aspect, an embodiment of the invention discloses an electronic device comprising a memory for storing a computer program comprising program instructions, and a processor configured to invoke the program instructions to perform the method of the first aspect or any of its possible implementations.
In a sixth aspect, the present application provides a computer readable storage medium having stored therein computer readable instructions which, when run on an apparatus, cause the apparatus to perform the method of the first aspect or any one of the possible implementations thereof.
In a seventh aspect, the present application provides a computer program or computer program product comprising code or instructions which, when run on a computer, cause the computer to perform the method as in the first aspect or any one of its possible implementations.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of de-interlacing according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an averaging filter according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a weighting filter according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of an image filtering method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a second pixel according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a target area according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a communication device according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of another communication device according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of module equipment according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the protection scope of the present application.
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items.
It should be noted that, in the description and claims of the present application and in the above figures, the terms "first," "second," "third," etc. are used to distinguish between similar objects and do not necessarily describe a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application described herein may be implemented in orders other than those illustrated or described. Furthermore, the terms "comprises," "comprising," and any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
The following is a description of the background of embodiments of the present application:
The emerging high-definition television standards are of two types, interlaced and progressive: interlaced displays support the playback of interlaced images, and progressive (non-interlaced) displays support the playback of progressive images. Most broadcast video is interlaced. An interlaced image is made up of two fields per image frame. Each field contains half the horizontal lines of the frame: the upper field (field 1) contains all the odd lines, and the lower field (field 2) contains all the even lines. An interlaced display (such as a television) first draws all lines of one field and then all lines of the other, so a frame is displayed as two fields in alternation, e.g. first the field of even-line scan lines and then the field of odd-line scan lines. A progressive image has no separate fields; a progressive display draws all horizontal lines from top to bottom, so the two fields that would form one video frame are displayed simultaneously.
De-interlacing (also called deinterlacing) refers to a method of converting interlaced images into progressive images. Since only one of the two fields of an interlaced image is displayed at any moment, playing an interlaced image directly on a progressive display device produces serious flicker, and the brightness is halved. Because of these problems, all new display devices that use progressive scanning require de-interlacing.
Due to limits on hardware speed and buffer size, ordinary digital cameras cannot continuously shoot progressive images; they shoot interlaced images instead. An interlaced image carries half as much information as a progressive image, which roughly halves the required hardware speed and buffer size. However, the two fields are shot at different times, so their contents are not necessarily identical. For example, for a digital camera shooting sixty fields per second, the first field is shot at 1/60 s, the second at 2/60 s, and the two fields are then combined. If the photographed object does not move, the combined image frame contains no color burr lines; if the object is moving, the combined image exhibits a "saw-tooth" effect, i.e. color burr lines appear. For this reason, a low-pass filter (LPF) is typically added to filter out these high-frequency components, but this makes the image appear blurred.
The low-pass filter may be an averaging filter (mean filter), which keeps the low-frequency part and reduces or eliminates the high-frequency part. The high-frequency part refers to regions where the gray-scale value changes greatly over a short distance (e.g. edges or noise); the low-frequency part refers to regions where the gray-scale value changes little over a short distance (e.g. background or skin texture). The averaging filter sums the gray-scale values of each pixel and all its neighbours and replaces the pixel's gray-scale value with the average. This reduces pixel values with large gray-level variation, so the sharpness of the image is reduced. For example, fig. 2 shows 9 pixels, each with weight 1: the gray-scale values of the pixels are summed and divided by 9, thereby attenuating the high-frequency part.
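The averaging filter described above can be written as a short Python sketch (illustrative only; border pixels are simply left unchanged here for brevity):

```python
def mean_filter_3x3(img):
    """3x3 mean filter: each interior pixel is replaced by the average of
    itself and its 8 neighbours (all weighted 1, sum divided by 9)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels kept as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = total / 9
    return out
```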
The low-pass filter may instead be a weighting filter (weighted average filter), whose strength is determined by the chosen weights. The weighting filter multiplies the gray-scale value of each pixel and its neighbours by the corresponding weights, sums the products, and replaces the pixel's gray-scale value with the normalized result. If the weights are chosen to preserve the low-frequency component, the effect is again to reduce pixel values with large gray-scale variation, so the sharpness of the image is reduced. For example, fig. 3 shows 9 pixels, of which 4 have weight 1, 4 have weight 2, and 1 has weight 4: the weighted gray-scale values are summed and divided by 16, thereby attenuating the high-frequency part.
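The weighting filter of fig. 3 can likewise be sketched in Python. The text gives only the weight multiset (four 1s, four 2s, one 4, normalised by 16); the arrangement below — corners 1, edges 2, centre 4 — is the common layout and is an assumption:

```python
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]  # weights sum to 16

def weighted_filter_3x3(img):
    """3x3 weighted-average filter: each interior pixel is replaced by the
    kernel-weighted sum of itself and its 8 neighbours, divided by 16."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels kept as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(KERNEL[dy + 1][dx + 1] * img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = total / 16
    return out
```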
The methods described above filter the entire image with a filter, but this also affects image areas without color burr lines, resulting in color distortion. How to avoid filtering image areas without color burr lines is therefore a problem to be solved.
The image filtering method provided in the embodiment of the present application is described in detail below:
referring to fig. 4, fig. 4 is a flowchart of an image filtering method according to an embodiment of the present application. The method execution subject shown in fig. 4 may be an electronic device, or the subject may be a chip in the electronic device, and the subject of execution of the method is not limited in this application. Fig. 4 illustrates an example of an execution body of the method of the electronic device. The main execution principle of the image filtering method shown in other drawings of the embodiment of the present application is the same, and will not be described in detail later. The image filtering method shown in fig. 4 includes steps 401 to 402.
401. Determine a first pixel in the image to be processed based on a target parameter of each pixel in the image to be processed, the target parameter including one or more of: brightness, chromaticity, or saturation.
402. Filter the first pixel.
In this embodiment of the present application, the image to be processed refers to an image obtained after de-interlacing, specifically by de-interlacing two adjacent fields shot by a camera, where one field displays the odd-line scan lines and the other displays the even-line scan lines. Brightness, chromaticity, and saturation refer respectively to the Y, U, and V values in YUV color coding: brightness (luminance, or luma) is the Y value, which may also be called the gray-scale value; chromaticity (chroma) is the U value; and saturation is the V value. The area corresponding to the first pixel determined by the electronic device contains color burr lines: because the object shot by the camera moved between the two fields being de-interlaced, the contents captured at the two shooting times differ, producing color burr lines, i.e. jagged boundaries between two adjacent color blocks. That the electronic device determines the first pixel based on the target parameter of each pixel can be understood as the electronic device locating the color burr lines in the image to be processed from the target parameters, so as to filter the pixels at those positions. Based on the method described in this application, since the region of the first pixel contains the color burr lines, only the first pixel is filtered, i.e. only the image area with color burr lines produced by de-interlacing is filtered, which helps avoid the color distortion caused by needlessly filtering image areas without color burr lines.
Three methods of determining the first pixel in the image to be processed based on the target parameters of each pixel are mainly described below:
the first method, the target parameter is chromaticity or saturation, and the specific method for determining the first pixel is as follows: if the second pixels in the M second pixels in the image to be processed meet the first condition, determining the M second pixels as the first pixels, wherein M is an integer greater than or equal to 1; wherein, the first condition is: the absolute value of the difference between the target parameters of the second pixel and the third pixel is larger than the first threshold, the absolute value of the difference between the target parameters of the second pixel and the fourth pixel is larger than the first threshold, the third pixel and the fourth pixel are adjacent to each other, the target parameters of the second pixel are the maximum value or the minimum value of the target parameters of the second pixel, the target parameters of the third pixel and the target parameters of the fourth pixel, and the second pixel, the third pixel and the fourth pixel are the pixels in the same column. Optionally, the M pixels are consecutive pixels of the same column. Wherein, M and the first threshold value can be preset by the electronic device. The electronic device may employ the method to traverse all pixels in the entire image to be processed to determine the first pixel. Since the color burr line is jagged, if the absolute value of the difference between the chromaticity or saturation of three consecutive pixels is greater than the first threshold value, and the chromaticity or saturation of the middle pixel of the three consecutive pixels is the minimum or maximum value of the three pixels, the area corresponding to the three pixels can be determined to be the color burr line. Based on this implementation, it is advantageous to determine more accurate color flash lines.
Taking fig. 5 as an example, the image to be processed includes 2 second pixels, pixel B and pixel C. If either pixel B or pixel C satisfies the first condition, pixel B and pixel C may be determined to be first pixels. Here the third pixel and the fourth pixel refer to the two pixels that are in the same column as the second pixel and adjacent to it. For example, pixel B and pixel C are both second pixels. For pixel B, pixel A and pixel C are its two adjacent pixels in the same column, so pixel A and pixel C can be regarded as the third and fourth pixels of pixel B. For pixel C, pixel B and pixel D are its two adjacent pixels in the same column, so pixel B and pixel D can be regarded as the third and fourth pixels of pixel C.
Determining whether pixel B satisfies the first condition may be understood as determining whether pixel B satisfies the following: the absolute value of the difference between the target parameters of pixel B and pixel A is greater than the first threshold, the absolute value of the difference between the target parameters of pixel B and pixel C is greater than the first threshold, pixel A and pixel C are both adjacent to pixel B, and the target parameter of pixel B is the maximum or minimum of the target parameters of pixel A, pixel B, and pixel C.
Determining whether pixel C satisfies the first condition may be understood as determining whether pixel C satisfies the following: the absolute value of the difference between the target parameters of pixel C and pixel B is greater than the first threshold, the absolute value of the difference between the target parameters of pixel C and pixel D is greater than the first threshold, pixel B and pixel D are both adjacent to pixel C, and the target parameter of pixel C is the maximum or minimum of the target parameters of pixel B, pixel C, and pixel D.
Alternatively, when the target parameter is chromaticity, pixel B satisfying the first condition may mean that pixel B satisfies the following formula (1), and pixel C satisfying the first condition may mean that pixel C satisfies the following formula (2):
(Au - Bu > th1) & (Cu - Bu > th1) || (Bu - Au > th1) & (Bu - Cu > th1)    (1)
(Bu - Cu > th1) & (Du - Cu > th1) || (Cu - Bu > th1) & (Cu - Du > th1)    (2)
Where Au denotes the chromaticity of pixel A, Bu the chromaticity of pixel B, Cu the chromaticity of pixel C, Du the chromaticity of pixel D, and th1 the first threshold. The symbol "||" denotes logical OR, and the symbol "&" denotes logical AND.
Alternatively, when the target parameter is saturation, pixel B satisfying the first condition may mean that pixel B satisfies the following formula (3), and pixel C satisfying the first condition may mean that pixel C satisfies the following formula (4):
(Av - Bv > th1) & (Cv - Bv > th1) || (Bv - Av > th1) & (Bv - Cv > th1)    (3)
(Bv - Cv > th1) & (Dv - Cv > th1) || (Cv - Bv > th1) & (Cv - Dv > th1)    (4)
Where Av denotes the saturation of pixel A, Bv the saturation of pixel B, Cv the saturation of pixel C, Dv the saturation of pixel D, and th1 the first threshold.
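Since formulas (1) through (4) have the same shape for chromaticity and saturation, the first-condition test can be sketched once over a generic parameter value. This is an illustrative helper, not code from the patent; the pixel names and sample values are assumptions.

```python
def satisfies_first_condition(prev, cur, nxt, th1):
    """First-condition test of formulas (1)-(4): `cur` differs from both of
    its same-column neighbours by more than th1 and is the local extremum
    of the three values."""
    dip = (prev - cur > th1) and (nxt - cur > th1)    # cur is the minimum
    peak = (cur - prev > th1) and (cur - nxt > th1)   # cur is the maximum
    return dip or peak

# Formula (1): pixel B tested against same-column neighbours A and C
Au, Bu, Cu, Du, th1 = 120, 80, 125, 118, 20
b_flag = satisfies_first_condition(Au, Bu, Cu, th1)   # True: B is a chroma dip
# Formula (2): pixel C tested against same-column neighbours B and D
c_flag = satisfies_first_condition(Bu, Cu, Du, th1)   # False: C is not an extremum by more than th1
```

Replacing the chromaticity values with saturation values gives the tests of formulas (3) and (4) unchanged.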
Optionally, when the target parameter is chromaticity, filtering the first pixel is implemented by filtering the chromaticity of the first pixel, which may be done based on formula (5), where Bu2 denotes the chromaticity of the first pixel before filtering, Bu1 denotes the chromaticity of the first pixel after filtering, Au denotes the chromaticity of a sixth pixel adjacent to the first pixel, Cu denotes the chromaticity of a seventh pixel adjacent to the first pixel, the sixth and seventh pixels are in the same column as the first pixel, and Alpha is a value set by a register.
Optionally, when the target parameter is saturation, filtering the first pixel is implemented by filtering the saturation of the first pixel, which may be done based on formula (6), where Bv2 denotes the saturation of the first pixel before filtering, Bv1 denotes the saturation of the first pixel after filtering, Av denotes the saturation of a sixth pixel adjacent to the first pixel, Cv denotes the saturation of a seventh pixel adjacent to the first pixel, the sixth and seventh pixels are in the same column as the first pixel, and Alpha is a value set by a register.
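Formulas (5) and (6) themselves are not reproduced in this text; based on the variables defined for them, a plausible form blends the flagged pixel's pre-filter value with the average of its two same-column neighbours under the register weight Alpha. The exact weighting below is an assumption, not taken from the patent:

```python
def filter_param(neighbor_a, before, neighbor_c, alpha):
    """Assumed form of formulas (5)/(6): blend the pre-filter value of the
    first pixel (Bu2 or Bv2) with the mean of the adjacent sixth and seventh
    pixels' values (Au/Av and Cu/Cv). `alpha` is the register-set weight in
    [0, 1]; alpha = 1 leaves the pixel unchanged."""
    return alpha * before + (1.0 - alpha) * (neighbor_a + neighbor_c) / 2.0

# Chromaticity example with assumed values: Au = 120, Bu2 = 80, Cu = 125
Bu1 = filter_param(120, 80, 125, 0.25)
```

The same helper applies to saturation by passing Av, Bv2, and Cv instead.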
In the second method, the target parameters include chromaticity and saturation, and the first pixel is determined as follows: if a second pixel among M second pixels in the image to be processed satisfies the first condition, the M second pixels are determined to be first pixels, where M is an integer greater than or equal to 1. Here the first condition includes one or more of the following conditions. Condition 1: the absolute value of the chromaticity difference between the second pixel and the third pixel is greater than the first threshold; the absolute value of the chromaticity difference between the second pixel and the fourth pixel is greater than the first threshold; the third pixel and the fourth pixel are both adjacent to the second pixel; and the chromaticity of the second pixel is the maximum or minimum of the chromaticities of the second pixel, the third pixel, and the fourth pixel. Condition 2: the absolute value of the saturation difference between the second pixel and the third pixel is greater than the first threshold; the absolute value of the saturation difference between the second pixel and the fourth pixel is greater than the first threshold; the third pixel and the fourth pixel are both adjacent to the second pixel; and the saturation of the second pixel is the maximum or minimum of the saturations of the second pixel, the third pixel, and the fourth pixel. A second pixel satisfies the first condition when it satisfies either condition 1 or condition 2. Optionally, the M second pixels are consecutive pixels in the same column. M and the first threshold may be preset by the electronic device.
The electronic device may apply this method to every pixel of the image to be processed to determine the first pixels. Because a color burr line is jagged, if the absolute values of the chromaticity and/or saturation differences among three consecutive pixels are greater than the first threshold, and the chromaticity and/or saturation of the middle pixel is the minimum or maximum of the three, the area corresponding to these three pixels can be determined to be a color burr line. This implementation helps determine color burr lines more accurately.
As above, taking fig. 5 as an example, if either pixel B or pixel C satisfies the first condition, pixel B and pixel C may be determined to be first pixels, where the first condition includes condition 1 and/or condition 2. Pixel B satisfies condition 1 when it satisfies formula (1) above; pixel C satisfies condition 1 when it satisfies formula (2) above; pixel B satisfies condition 2 when it satisfies formula (3) above; and pixel C satisfies condition 2 when it satisfies formula (4) above. For details, refer to the first method; they are not repeated here.
Optionally, in this method, filtering the first pixel is implemented by filtering both the saturation and the chromaticity of the first pixel. The chromaticity and saturation may be filtered simultaneously or in sequence, which is not limited in the embodiments of the present application. Further optionally, the chromaticity of the first pixel may be filtered based on formula (5) above and the saturation based on formula (6) above; for details, refer to the first method, not repeated here.
In the third method, the target parameter is chromaticity or saturation, and the first pixel is determined as follows: if a second pixel among M second pixels in the image to be processed satisfies the first condition and none of the M second pixels satisfies the second condition, the M second pixels are determined to be first pixels. The second condition is: the absolute value of the brightness difference between the second pixel and the third pixel is greater than a second threshold; the absolute value of the brightness difference between the second pixel and the fourth pixel is greater than the second threshold; and the brightness of the second pixel is the maximum or minimum of the brightness of the second pixel, the third pixel, and the fourth pixel. For the first condition, refer to the first condition in the first or second method, not repeated here. Optionally, the M second pixels are consecutive pixels in the same column. M, the first threshold, and the second threshold may be preset by the electronic device. The electronic device may apply this method to every pixel of the image to be processed to determine the first pixels. Because a color burr line is jagged, if the absolute values of the chromaticity or saturation differences among three consecutive pixels are greater than the first threshold, and the chromaticity or saturation of the middle pixel is the minimum or maximum of the three, the area corresponding to these three pixels can be determined to be a color burr line.
Because color burr lines mainly arise when the chromaticity and saturation of some pixels are computed incorrectly during de-interlacing, and brightness may interfere with the chromaticity or saturation test, the case in which the brightness itself forms a jagged line needs to be excluded. Based on this implementation, color burr lines can be determined more accurately.
Taking fig. 5 as an example, if either pixel B or pixel C satisfies the first condition and neither pixel B nor pixel C satisfies the second condition, pixel B and pixel C may be determined to be first pixels. That neither pixel B nor pixel C satisfies the second condition may be understood as pixel B and pixel C not satisfying the following formula (7):
(Ay - By > th2) & (Cy - By > th2) & (Cy - Dy > th2) || (By - Ay > th2) & (By - Cy > th2) & (Dy - Cy > th2)    (7)
Where Ay denotes the brightness of pixel A, By the brightness of pixel B, Cy the brightness of pixel C, Dy the brightness of pixel D, and th2 the second threshold.
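The second-condition exclusion test of formula (7) can be sketched in the same style as the first-condition test. The pixel names and sample values are illustrative; th2 is the preset second threshold.

```python
def satisfies_second_condition(ay, by, cy, dy, th2):
    """Formula (7): the four same-column brightness values A, B, C, D
    themselves form a jagged valley or ridge, in which case pixels B and C
    must NOT be treated as color-burr pixels."""
    valley = (ay - by > th2) and (cy - by > th2) and (cy - dy > th2)
    ridge = (by - ay > th2) and (by - cy > th2) and (dy - cy > th2)
    return valley or ridge

# Brightness dips sharply at B: the jagged edge is in luma, so exclude B and C
exclude = satisfies_second_condition(150, 90, 160, 100, 30)   # True: luma valley
```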
The method for filtering the first pixel is the same as the first method and the second method, and reference may be made to the descriptions in the first method and the second method, which are not repeated herein.
In summary, which of the three methods is used may be determined by setting corresponding parameters in a register of the electronic device, which is not limited here. Based on the above three methods, the first pixels containing the color burr line can be determined, and only those first pixels are filtered; that is, only the image areas where color burr lines are generated after de-interlacing are filtered. This avoids unnecessary filtering of image areas without color burr lines, which would cause color distortion.
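The first method can be sketched end to end on a single chroma column: detect jagged extrema, then smooth only the flagged pixels. The threshold th1, the weight alpha, and the blend-with-neighbours filter form are assumptions for illustration, not the patent's exact formulas.

```python
import numpy as np

def deburr_column(u, th1, alpha):
    """Flag pixels that are same-column extrema by more than th1 (the first
    condition), then replace only those pixels with an alpha-blend of the
    original value and the mean of its two vertical neighbours (assumed
    filter form). Unflagged pixels pass through unchanged."""
    u = u.astype(float)
    out = u.copy()
    for i in range(1, len(u) - 1):
        dip = (u[i - 1] - u[i] > th1) and (u[i + 1] - u[i] > th1)
        peak = (u[i] - u[i - 1] > th1) and (u[i] - u[i + 1] > th1)
        if dip or peak:
            out[i] = alpha * u[i] + (1 - alpha) * (u[i - 1] + u[i + 1]) / 2
    return out

col = np.array([120, 121, 80, 122, 123])   # index 2 is a chroma dip
smoothed = deburr_column(col, th1=20, alpha=0.0)   # index 2 becomes 121.5
```

With alpha = 0 the flagged pixel is fully replaced by its neighbours' average; larger alpha values preserve more of the original value.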
In combination with the possible implementations above, determining the first pixels may include determining a target area based on the M second pixels, where the target area includes N pixels, the M second pixels are located in the target area, and N is an integer greater than M; the N pixels are then determined to be first pixels. Optionally, the N pixels may form the area consisting of the M second pixels, the pixels within X pixel positions to their left and right, and the pixels within Y pixel positions above and below them, in which case N = (2×X+1) × (M+2×Y). X and Y may be preset by the electronic device. The target area may also take other forms, which are not described here. Because a color burr line generally appears within a single area, this implementation allows the position of the color burr line to be determined completely.
As shown in fig. 6, fig. 6 includes a target area in which both pixel B and pixel C are second pixels. When pixel B and pixel C satisfy the condition of the first, second, or third method above and are determined to be first pixels, all pixels in the target area are determined to be first pixels. In connection with the above description, the target area includes the pixels within 1 pixel position to the left and right of the 2 second pixels and the pixels within 2 pixel positions above and below them, so N = (2×1+1) × (2+2×2) = 18.
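The size bookkeeping for the expanded target area can be sketched as follows; X and Y are register presets, and the example values match the fig. 6 computation.

```python
def target_region_size(m, x, y):
    """N = (2*X + 1) * (M + 2*Y): M same-column second pixels, widened by X
    columns on each side and by Y rows above and below."""
    return (2 * x + 1) * (m + 2 * y)

n = target_region_size(m=2, x=1, y=2)   # → 18, as computed for fig. 6
```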
Referring to fig. 7, fig. 7 is a schematic structural diagram of an apparatus according to an embodiment of the present application. The apparatus may be an electronic device or an apparatus having the functions of an electronic device (e.g., a chip), and may perform the steps performed by the electronic device in the foregoing method embodiments. Specifically, as shown in fig. 7, the apparatus 700 includes a determining unit 701 and a processing unit 702.
The determining unit 701 is configured to determine a first pixel in the image to be processed based on a target parameter of each pixel in the image to be processed, where the target parameter includes one or more of the following: brightness, chromaticity, or saturation; the image to be processed is an image obtained after de-interlacing; the area corresponding to the first pixel includes a color burr line; and a color burr line is a jagged boundary between two adjacent color blocks. The processing unit 702 is configured to filter the first pixel.
In one possible implementation, when the target parameter is chromaticity or saturation, the determining unit 701 is specifically configured, when determining the first pixel in the image to be processed based on the target parameter of each pixel in the image to be processed, to: determine the M second pixels to be first pixels if a second pixel among M second pixels in the image to be processed satisfies the first condition, where M is an integer greater than or equal to 1. The first condition is: the absolute value of the difference between the target parameters of the second pixel and the third pixel is greater than the first threshold; the absolute value of the difference between the target parameters of the second pixel and the fourth pixel is greater than the first threshold; the third pixel and the fourth pixel are both adjacent to the second pixel; the target parameter of the second pixel is the maximum or minimum of the target parameters of the second pixel, the third pixel, and the fourth pixel; and the second pixel, the third pixel, and the fourth pixel are in the same column. When filtering the first pixel, the processing unit 702 is specifically configured to: filter the target parameter of the first pixel.
In one possible implementation, when determining the first pixel in the image to be processed based on the target parameter of each pixel in the image to be processed, the determining unit 701 is specifically configured to: determine the M second pixels to be first pixels if a second pixel among M second pixels in the image to be processed satisfies the first condition, where M is an integer greater than or equal to 1, and the target parameters include chromaticity and saturation. The first condition includes one or more of the following conditions. Condition 1: the absolute value of the chromaticity difference between the second pixel and the third pixel is greater than the first threshold; the absolute value of the chromaticity difference between the second pixel and the fourth pixel is greater than the first threshold; the third pixel and the fourth pixel are both adjacent to the second pixel; and the chromaticity of the second pixel is the maximum or minimum of the chromaticities of the second pixel, the third pixel, and the fourth pixel. Condition 2: the absolute value of the saturation difference between the second pixel and the third pixel is greater than the first threshold; the absolute value of the saturation difference between the second pixel and the fourth pixel is greater than the first threshold; the third pixel and the fourth pixel are both adjacent to the second pixel; and the saturation of the second pixel is the maximum or minimum of the saturations of the second pixel, the third pixel, and the fourth pixel. When filtering the first pixel, the processing unit 702 is specifically configured to: filter the saturation and the chromaticity of the first pixel.
In one possible implementation, when filtering the first pixel, the processing unit 702 is specifically configured to filter the first pixel based on the formulas described above.
In one possible implementation, the determining unit 701, when determining the M second pixels in the image to be processed to be first pixels if a second pixel satisfying the first condition exists among them, is specifically configured to: determine the M second pixels to be first pixels if a second pixel among the M second pixels satisfies the first condition and none of the M second pixels satisfies the second condition, where the second condition is: the absolute value of the brightness difference between the second pixel and the third pixel is greater than a second threshold; the absolute value of the brightness difference between the second pixel and the fourth pixel is greater than the second threshold; and the brightness of the second pixel is the maximum or minimum of the brightness of the second pixel, the third pixel, and the fourth pixel.
In one possible implementation, when determining the M second pixels to be the first pixels, the determining unit 701 is specifically configured to: determine a target area based on the M second pixels, where the target area includes N pixels, the M second pixels are located in the target area, and N is an integer greater than M; and determine the N pixels to be the first pixels.
In one possible implementation, when the target parameter is chromaticity and the processing unit 702 filters the target parameter of the first pixel, the processing unit is specifically configured to filter the chromaticity of the first pixel based on formula (5), where Bu2 denotes the chromaticity of the first pixel before filtering, Bu1 denotes the chromaticity of the first pixel after filtering, Au denotes the chromaticity of a sixth pixel adjacent to the first pixel, Cu denotes the chromaticity of a seventh pixel adjacent to the first pixel, and Alpha is a value set by a register.
The embodiments of the present application further provide a chip that can perform the steps performed by the electronic device in the foregoing method embodiments. The chip includes a processor.
The processor is configured to cause the chip to perform the following operations: determining a first pixel in the image to be processed based on a target parameter of each pixel in the image to be processed, the target parameter including one or more of the following: brightness, chromaticity, or saturation, where the image to be processed is an image obtained after de-interlacing, the area corresponding to the first pixel includes a color burr line, and a color burr line is a jagged boundary between two adjacent color blocks; and filtering the first pixel.
In a possible implementation, the target parameter is chromaticity or saturation, and when determining the first pixel in the image to be processed based on the target parameter of each pixel in the image to be processed, the processor is configured to cause the chip to perform the following operations: determining the M second pixels to be first pixels if a second pixel among M second pixels in the image to be processed satisfies the first condition, where M is an integer greater than or equal to 1. The first condition is: the absolute value of the difference between the target parameters of the second pixel and the third pixel is greater than the first threshold; the absolute value of the difference between the target parameters of the second pixel and the fourth pixel is greater than the first threshold; the third pixel and the fourth pixel are both adjacent to the second pixel; the target parameter of the second pixel is the maximum or minimum of the target parameters of the second pixel, the third pixel, and the fourth pixel; and the second pixel, the third pixel, and the fourth pixel are in the same column. When filtering the first pixel, the processor is configured to cause the chip to perform the following operation: filtering the target parameter of the first pixel.
In a possible implementation, when determining the first pixel in the image to be processed based on the target parameter of each pixel in the image to be processed, the processor is configured to cause the chip to perform the following operations: determining the M second pixels to be first pixels if a second pixel among M second pixels in the image to be processed satisfies the first condition, where M is an integer greater than or equal to 1, and the target parameters include chromaticity and saturation. The first condition includes one or more of the following conditions. Condition 1: the absolute value of the chromaticity difference between the second pixel and the third pixel is greater than the first threshold; the absolute value of the chromaticity difference between the second pixel and the fourth pixel is greater than the first threshold; the third pixel and the fourth pixel are both adjacent to the second pixel; and the chromaticity of the second pixel is the maximum or minimum of the chromaticities of the second pixel, the third pixel, and the fourth pixel. Condition 2: the absolute value of the saturation difference between the second pixel and the third pixel is greater than the first threshold; the absolute value of the saturation difference between the second pixel and the fourth pixel is greater than the first threshold; the third pixel and the fourth pixel are both adjacent to the second pixel; and the saturation of the second pixel is the maximum or minimum of the saturations of the second pixel, the third pixel, and the fourth pixel. When filtering the first pixel, the processor is configured to cause the chip to perform the following operation: filtering the saturation and the chromaticity of the first pixel.
In a possible implementation, when determining the M second pixels in the image to be processed to be first pixels if a second pixel satisfying the first condition exists among them, the processor is configured to cause the chip to perform the following operations: determining the M second pixels to be first pixels if a second pixel among the M second pixels satisfies the first condition and none of the M second pixels satisfies the second condition, where the second condition is: the absolute value of the brightness difference between the second pixel and the third pixel is greater than a second threshold; the absolute value of the brightness difference between the second pixel and the fourth pixel is greater than the second threshold; and the brightness of the second pixel is the maximum or minimum of the brightness of the second pixel, the third pixel, and the fourth pixel.
In a possible implementation, when determining the M second pixels to be the first pixels, the processor is configured to cause the chip to perform the following operations: determining a target area based on the M second pixels, where the target area includes N pixels, the M second pixels are located in the target area, and N is an integer greater than M; and determining the N pixels to be the first pixels.
In a possible implementation, the target parameter is chromaticity, and when filtering the target parameter of the first pixel, the processor is configured to cause the chip to filter the chromaticity of the first pixel based on formula (5), where Bu2 denotes the chromaticity of the first pixel before filtering, Bu1 denotes the chromaticity of the first pixel after filtering, Au denotes the chromaticity of a sixth pixel adjacent to the first pixel, Cu denotes the chromaticity of a seventh pixel adjacent to the first pixel, and Alpha is a value set by a register.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 800 may include a memory 801 and a processor 802, and optionally a communication interface 803. The memory 801, the processor 802, and the communication interface 803 are connected by one or more communication buses, and the communication interface 803 transmits and receives information under the control of the processor 802.
The communication interface 803 is used to receive or transmit data.
The processor 802 may be a central processing unit (central processing unit, CPU), another general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application specific integrated circuit, ASIC), a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor 802 may be any conventional processor. Wherein:
a memory 801 for storing program instructions.
A processor 802 for invoking program instructions stored in memory 801.
The processor 802 invokes program instructions stored in the memory 801 to cause the electronic device 800 to perform the method performed by the electronic device in the method embodiments described above.
As shown in fig. 9, fig. 9 is a schematic structural diagram of a module device according to an embodiment of the present application. The module device 900 may perform the steps performed by the electronic device in the foregoing method embodiments, and includes: a communication module 901, a power module 902, a storage module 903, and a chip 904.
The power module 902 is configured to supply power to the module device; the storage module 903 is configured to store data and instructions; the communication module 901 is configured to perform internal communication of the module device or communication between the module device and an external device; and the chip 904 is configured to perform the methods performed by the electronic device in the foregoing method embodiments.
It should be noted that, for the implementation details of each step in the embodiments corresponding to fig. 8 and fig. 9, reference may be made to the embodiment shown in fig. 1 and the foregoing description; they are not repeated here.
The present application also provides a computer readable storage medium having instructions stored therein, which when run on a processor, implement the method flows of the method embodiments described above.
The present application also provides a computer program product, which when run on a processor, implements the method flows of the above method embodiments.
Each apparatus and each module/unit included in the products described in the above embodiments may be a software module/unit, a hardware module/unit, or partly a software module/unit and partly a hardware module/unit. For example, for each apparatus or product applied to or integrated in a chip, its modules/units may all be implemented in hardware such as circuits, or at least some of them may be implemented as a software program running on a processor integrated inside the chip, with the remaining (if any) modules/units implemented in hardware such as circuits. For each apparatus or product applied to or integrated in a chip module, its modules/units may all be implemented in hardware such as circuits, and different modules/units may be located in the same component (such as a chip or a circuit module) or in different components of the chip module; or at least some modules/units may be implemented as a software program running on a processor integrated inside the chip module, with the remaining (if any) modules/units implemented in hardware such as circuits. For each apparatus or product applied to or integrated in a terminal, its modules/units may all be implemented in hardware such as circuits, and different modules/units may be located in the same component (such as a chip or a circuit module) or in different components of the terminal; or at least some modules/units may be implemented as a software program running on a processor integrated inside the terminal, with the remaining (if any) modules/units implemented in hardware such as circuits.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts, but those skilled in the art should understand that the present application is not limited by the order of acts described, as some acts may, in accordance with the present application, be performed in other orders or concurrently. Further, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
The descriptions of the embodiments provided in the present application may refer to each other, and each embodiment focuses on its differences from the others; for a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments. For convenience and brevity, the functions and operations performed by the devices and apparatuses provided in the embodiments of the present application may refer to the related descriptions of the method embodiments of the present application, and the method embodiments and device embodiments may also be referenced against, combined with, or cited from one another.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
Claims (11)
1. A method of image filtering, the method comprising:
determining a first pixel in an image to be processed based on target parameters of each pixel in the image to be processed, the target parameters including one or more of: brightness, chromaticity, or saturation, wherein the image to be processed is an image obtained after de-interlacing, the region corresponding to the first pixel includes color fringe lines, and a color fringe line is a jagged (zigzag-shaped) boundary between two adjacent color blocks;
and filtering the first pixel.
2. The method of claim 1, wherein the target parameter is chromaticity or saturation, and wherein the determining the first pixel in the image to be processed based on the target parameter for each pixel in the image to be processed comprises:
determining M second pixels in the image to be processed as the first pixels if a second pixel among the M second pixels satisfies a first condition, wherein M is an integer greater than or equal to 1;
wherein the first condition is: the absolute value of the difference between the target parameters of the second pixel and a third pixel is greater than a first threshold, the absolute value of the difference between the target parameters of the second pixel and a fourth pixel is greater than the first threshold, the third pixel and the fourth pixel are each adjacent to the second pixel, the target parameter of the second pixel is the maximum or minimum of the target parameters of the second pixel, the third pixel, and the fourth pixel, and the second pixel, the third pixel, and the fourth pixel are pixels of the same column;
The filtering the first pixel includes:
and filtering the target parameter of the first pixel.
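The first condition of claim 2 can be sketched as a check on three vertically adjacent pixels: the centre pixel's parameter differs from both neighbours by more than a threshold and is the local maximum or minimum of the three. The following is a minimal illustration under assumed scalar inputs, not the patented implementation (the function name and interface are assumptions):

```python
def is_fringe_extremum(prev, cur, nxt, threshold):
    """Return True when `cur` satisfies the first condition with respect
    to its vertical neighbours `prev` and `nxt`: both absolute
    differences exceed `threshold` AND `cur` is the max or min of the
    three values (a large jump on both sides is not enough on its own,
    since the centre could lie between its neighbours)."""
    if abs(cur - prev) <= threshold or abs(cur - nxt) <= threshold:
        return False
    return cur == max(cur, prev, nxt) or cur == min(cur, prev, nxt)
```

Note that the extremum check matters: a pixel whose chromaticity lies between those of its two neighbours is part of a gradient, not a fringe, even if both differences are large.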
3. The method of claim 1, wherein the determining a first pixel in the image to be processed based on the target parameters of each pixel in the image to be processed comprises:
determining M second pixels in the image to be processed as the first pixels if a second pixel among the M second pixels satisfies a first condition, wherein M is an integer greater than or equal to 1, and the target parameters include chromaticity and saturation;
wherein the first condition comprises one or more of the following conditions:
an absolute value of a difference in chromaticity between the second pixel and a third pixel is greater than a first threshold, an absolute value of a difference in chromaticity between the second pixel and a fourth pixel is greater than the first threshold, the third pixel and the fourth pixel are each adjacent to the second pixel, and the chromaticity of the second pixel is a maximum or minimum of the chromaticity of the second pixel, the chromaticity of the third pixel, and the chromaticity of the fourth pixel;
an absolute value of a difference in saturation between the second pixel and a third pixel is greater than a first threshold, an absolute value of a difference in saturation between the second pixel and a fourth pixel is greater than the first threshold, the third pixel and the fourth pixel are each adjacent to the second pixel, and the saturation of the second pixel is a maximum or a minimum of the saturation of the second pixel, the saturation of the third pixel, and the saturation of the fourth pixel;
The filtering the first pixel includes:
and filtering the saturation and the chromaticity of the first pixel.
4. A method according to claim 2 or 3, wherein the determining that the M second pixels in the image to be processed are the first pixels if there is a second pixel satisfying the first condition comprises:
determining the M second pixels as the first pixels if there is, among the M second pixels in the image to be processed, a second pixel that satisfies the first condition and does not satisfy a second condition, wherein the second condition is that the absolute value of the difference in brightness between the second pixel and the third pixel is greater than a third threshold, the absolute value of the difference in brightness between the second pixel and the fourth pixel is greater than the third threshold, and the brightness of the second pixel is the maximum or minimum of the brightness of the second pixel, the brightness of the third pixel, and the brightness of the fourth pixel.
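Claim 4 combines the two conditions: a pixel is flagged as a colour fringe when its chromaticity jump (the first condition) has no matching brightness jump (the second condition is not met), which distinguishes de-interlacing colour artifacts from genuine object edges, where luminance and chrominance change together. A sketch of this combined test, under the same assumed scalar interface as above (names and signature are illustrative, not from the patent):

```python
def is_color_fringe(luma3, chroma3, chroma_threshold, luma_threshold):
    """Flag a colour fringe: the chroma triple (prev, cur, next) forms a
    large local extremum (first condition) while the luma triple does
    not (second condition unmet), i.e. a colour edge without a
    corresponding luminance edge."""
    p, c, n = chroma3
    chroma_edge = (abs(c - p) > chroma_threshold and
                   abs(c - n) > chroma_threshold and
                   (c == max(chroma3) or c == min(chroma3)))
    lp, lc, ln = luma3
    luma_edge = (abs(lc - lp) > luma_threshold and
                 abs(lc - ln) > luma_threshold and
                 (lc == max(luma3) or lc == min(luma3)))
    return chroma_edge and not luma_edge
```

A real edge between two differently coloured objects normally shows both a chroma and a luma jump, so it is excluded; only the "colour-only" jump typical of interlacing glitches survives.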
5. The method of any one of claims 2 to 4, wherein the determining that the M second pixels are the first pixels comprises:
determining a target area based on the M second pixels, wherein the target area includes N pixels, the M second pixels are located in the target area, and N is an integer greater than M;
the N pixels are determined to be the first pixels.
6. The method of claim 2, wherein the target parameter is chrominance, and the filtering the target parameter of the first pixel comprises:
the chromaticity of the first pixel is filtered based on the following formula:
wherein Bu2 represents the chromaticity of the first pixel before filtering, Bu1 represents the chromaticity of the first pixel after filtering, Au represents a sixth pixel adjacent to the first pixel, Cu represents a seventh pixel adjacent to the first pixel, and Alpha is a value set by a register.
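The formula of claim 6 is not reproduced in this text (it appears as an image in the original publication), so the following is only one plausible reading of the variable definitions: a register-controlled blend between the pixel's own chromaticity (Bu2) and the average of its two adjacent pixels (Au, Cu). Treat the exact weighting, and the `alpha_max` normalisation, as assumptions:

```python
def filter_chroma(au, bu2, cu, alpha, alpha_max=16):
    """Hypothetical reconstruction of the claim-6 filter: blend the
    average of the two neighbours (Au, Cu) with the original value Bu2,
    with the mix ratio Alpha taken from a register.  alpha = alpha_max
    fully replaces the pixel with the neighbour average; alpha = 0
    leaves it unchanged."""
    neighbour_avg = (au + cu) / 2
    return (alpha * neighbour_avg + (alpha_max - alpha) * bu2) / alpha_max
```

With this reading, a fringe pixel whose chromaticity spikes to 60 between neighbours at 10 and 12 is pulled back toward their average as Alpha increases.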
7. An apparatus, characterized in that the apparatus comprises a determining unit and a processing unit, wherein:
the determining unit is configured to determine a first pixel in an image to be processed based on a target parameter of each pixel in the image to be processed, wherein the target parameter includes one or more of: brightness, chromaticity, or saturation; the image to be processed is obtained by de-interlacing a first image and a second image, the first image and the second image being two adjacent frames, the first image being used to display odd-line pixels and the second image being used to display even-line pixels; the region corresponding to the first pixel includes color fringe lines, a color fringe line being a jagged boundary between two adjacent color blocks;
The processing unit is used for carrying out filtering processing on the first pixel.
8. A chip comprising a processor, the chip configured to:
determining a first pixel in an image to be processed based on target parameters of each pixel in the image to be processed, the target parameters including one or more of: brightness, chromaticity, or saturation, wherein the image to be processed is obtained by de-interlacing a first image and a second image, the first image and the second image being two adjacent frames, the first image being used to display odd-line pixels and the second image being used to display even-line pixels, the region corresponding to the first pixel includes color fringe lines, and a color fringe line is a jagged boundary between two adjacent color blocks;
and filtering the first pixel.
9. Module equipment, characterized in that
the module equipment comprises a communication module, a power supply module, a storage module, and a chip, wherein:
the power supply module is used for providing electric energy for the module equipment;
the storage module is used for storing data and instructions;
the communication module is used for internal communication of the module equipment or for communication between the module equipment and an external device;
The chip is used for executing:
determining a first pixel in an image to be processed based on target parameters of each pixel in the image to be processed, the target parameters including one or more of: brightness, chromaticity, or saturation, wherein the image to be processed is obtained by de-interlacing a first image and a second image, the first image and the second image being two adjacent frames, the first image being used to display odd-line pixels and the second image being used to display even-line pixels, the region corresponding to the first pixel includes color fringe lines, and a color fringe line is a jagged boundary between two adjacent color blocks;
and filtering the first pixel.
10. An electronic device, comprising a memory and a processor, wherein the memory is used for storing a computer program comprising program instructions, and the processor is configured to invoke the program instructions to cause the electronic device to perform the method of any one of claims 1-6.
11. A computer-readable storage medium having stored therein computer-readable instructions which, when run on an apparatus, cause the apparatus to perform the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211590245.5A CN116012268A (en) | 2022-12-12 | 2022-12-12 | Image filtering method and device, chip and module equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116012268A true CN116012268A (en) | 2023-04-25 |
Family
ID=86032574
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211590245.5A Pending CN116012268A (en) | 2022-12-12 | 2022-12-12 | Image filtering method and device, chip and module equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116012268A (en) |
- 2022-12-12 CN CN202211590245.5A patent/CN116012268A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2387701C (en) | System and method for motion compensation and frame rate conversion | |
US7057664B2 (en) | Method and system for converting interlaced formatted video to progressive scan video using a color edge detection scheme | |
US7280155B2 (en) | Method and system for converting interlaced formatted video to progressive scan video | |
RU2413384C2 (en) | Device of image processing and method of image processing | |
US7671909B2 (en) | Method and apparatus for processing Bayer-pattern digital color video signal | |
CN101442649B (en) | Processing method and apparatus for removing alternate line and FPGA chip | |
US20020057362A1 (en) | System and method for scaling images | |
US20080129875A1 (en) | Motion and/or scene change detection using color components | |
US9756306B2 (en) | Artifact reduction method and apparatus and image processing method and apparatus | |
KR100563023B1 (en) | Method and System for Edge Adaptive Interpolation for Interlaced to Progressive Conversion | |
WO2011141197A1 (en) | Method for detecting directions of regularity in a two-dimensional image | |
US9215353B2 (en) | Image processing device, image processing method, image display device, and image display method | |
US20080063295A1 (en) | Imaging Device | |
CN101088290B (en) | Spatio-temporal adaptive video de-interlacing method, device and system | |
CN113068011B (en) | Image sensor, image processing method and system | |
US8274605B2 (en) | System and method for adjacent field comparison in video processing | |
CN116012268A (en) | Image filtering method and device, chip and module equipment | |
JP3748446B2 (en) | Image processing circuit of image input device | |
US11501416B2 (en) | Image processing method and image processing circuit capable of smoothing false contouring without using low-pass filtering | |
WO2012124516A1 (en) | Noise reduction processing device, and display device | |
JP2004153848A (en) | Image processing circuit for image input apparatus | |
JP3071564U (en) | Image data processing device | |
US8350962B2 (en) | Method and system for video format conversion | |
Roussel et al. | Improvement of conventional deinterlacing methods with extrema detection and interpolation | |
TWI392336B (en) | Apparatus and method for motion adaptive de-interlacing with chroma up-sampling error remover |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||