
CN117132489B - Image correction method, device and equipment - Google Patents

Image correction method, device and equipment

Info

Publication number
CN117132489B
CN117132489B (application CN202310983118.XA)
Authority
CN
China
Prior art keywords
pixel
value
corrected
block
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310983118.XA
Other languages
Chinese (zh)
Other versions
CN117132489A (en)
Inventor
何炎森
刘恩毅
廖宇豪
杨宇辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Haikang Huiying Technology Co ltd
Original Assignee
Hangzhou Haikang Huiying Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Haikang Huiying Technology Co ltd
Priority to CN202310983118.XA
Publication of CN117132489A
Application granted
Publication of CN117132489B
Active legal status
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract


The present application provides an image correction method, device, and equipment, which includes: obtaining a pixel point to be corrected in an original image, and obtaining a first image area corresponding to the pixel point to be corrected from the original image, wherein the central pixel point of the first image area is the pixel point to be corrected; dividing the first image area into K pixel blocks, where K is a positive integer greater than 1; selecting a target pixel block from the K pixel blocks based on the pixel features corresponding to each pixel block; correcting the pixel value of the pixel point to be corrected based on the pixel features corresponding to the target pixel block to obtain the corrected pixel value of the pixel point to be corrected; and generating a corrected image based on the corrected pixel value of the pixel point to be corrected. The technical solution of the present application solves the problem of severe loss of detail information and effectively improves the detail restoration and image integrity of the image.

Description

Image correction method, device and equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image correction method, apparatus, and device.
Background
An image capture device (e.g., a camera) may capture an image of a target scene via a sensor. However, due to design defects of the sensor itself, manufacturing process limitations, or defects in the transmission link of the imaging system, abnormal pixel points may exist in the image: their pixel values clearly do not match the pixel values of surrounding pixel points, impairing the clarity and integrity of the image. Such abnormal pixel points are called dead pixels.
For example, referring to fig. 1, during lesion detection, reflection of light from internal structures of the human body produces a pronounced overexposed area on the imaged image, and dead pixels appear at the edge of the overexposed area, which degrades the quality of the image and interferes with the doctor's observation.
The presence of dead pixels reduces image quality and affects image clarity, image details, and image integrity: image clarity is reduced, image details are lost, and image integrity is poor.
Disclosure of Invention
The application provides an image correction method, which comprises the following steps:
Acquiring a pixel point to be corrected in an original image, and acquiring a first image area corresponding to the pixel point to be corrected from the original image, wherein the central pixel point of the first image area is the pixel point to be corrected;
dividing the first image area into K pixel blocks, wherein K is a positive integer greater than 1;
selecting a target pixel block from the K pixel blocks based on pixel characteristics corresponding to each pixel block, wherein the target pixel block is the pixel block with the maximum similarity with the pixel point to be corrected;
correcting the pixel value of the pixel point to be corrected based on the pixel characteristics corresponding to the target pixel block to obtain a corrected pixel value of the pixel point to be corrected;
and generating a corrected image based on the corrected pixel value of the pixel point to be corrected.
The present application provides an image correction apparatus, the apparatus comprising:
The acquisition module is used for acquiring a pixel point to be corrected in an original image, and acquiring a first image area corresponding to the pixel point to be corrected from the original image, wherein the central pixel point of the first image area is the pixel point to be corrected;
The processing module is used for selecting a target pixel block from the K pixel blocks based on the pixel characteristics corresponding to each pixel block, correcting the pixel value of the pixel point to be corrected based on the pixel characteristics corresponding to the target pixel block, and obtaining the corrected pixel value of the pixel point to be corrected, wherein the target pixel block is the pixel block with the maximum similarity with the pixel point to be corrected;
and the generation module is used for generating a corrected image based on the corrected pixel value of the pixel point to be corrected.
When selecting a target pixel block from the K pixel blocks based on the pixel characteristics corresponding to each pixel block, the processing module is specifically configured to: for each pixel block set, determine a gradient value corresponding to the pixel block set based on the pixel characteristics respectively corresponding to the two pixel blocks in the set, wherein the pixel characteristic corresponding to a pixel block is the average value, median value, maximum value, or minimum value of the pixel values of all pixel points in that pixel block; and, based on the gradient value corresponding to each pixel block set, select the pixel block with the smaller pixel characteristic in the pixel block set corresponding to the maximum gradient value as the target pixel block.
When correcting the pixel value of the pixel point to be corrected based on the pixel characteristics corresponding to the target pixel block to obtain the corrected pixel value, the processing module is specifically configured to: obtain a second image area from the original image, wherein the central pixel point of the second image area is the pixel point to be corrected and the size of the second image area is smaller than that of the first image area; the second image area is used for determining the pixel points to be corrected, the first image area is used for determining the target pixel block, and the target pixel block is used for correcting the pixel points of the second image area. For each pixel point in the second image area, the pixel value of the pixel point is corrected based on the pixel characteristic corresponding to the target pixel block to obtain its corrected pixel value, wherein the pixel characteristic corresponding to the target pixel block is the average value, median value, maximum value, or minimum value of the pixel values of all pixel points in the target pixel block.
The size of the first image area is determined based on a target gain value corresponding to the original image, wherein the target gain value is a gain value adopted when the original image is acquired through a sensor, and the size of the second image area is determined based on the target gain value.
The acquisition module is further configured to: determine the target gain value interval to which the target gain value belongs; obtain the target intensity control parameter corresponding to that interval by querying a first mapping relation, wherein the first mapping relation comprises correspondences between gain value intervals and intensity control parameters, and a larger gain value interval corresponds to a larger intensity control parameter; obtain a first target size corresponding to the target intensity control parameter by querying a second mapping relation, the first target size being the size of the first image area; and obtain a second target size corresponding to the target intensity control parameter by querying a third mapping relation, the second target size being the size of the second image area and smaller than the first target size. The second mapping relation comprises correspondences between intensity control parameters and first sizes (a larger parameter corresponds to a larger first size), and the third mapping relation comprises correspondences between intensity control parameters and second sizes (a larger parameter corresponds to a larger second size).
The acquisition module is specifically configured to: for a pixel point in the original image, acquire the first image area corresponding to the pixel point from the original image; select adjacent pixel points corresponding to the pixel point from the first image area; and determine an upper limit pixel value and a lower limit pixel value corresponding to the pixel point based on the pixel values of the adjacent pixel points. If the pixel value of the pixel point lies between the lower limit pixel value and the upper limit pixel value, the pixel point is determined not to be a pixel point to be corrected. If the pixel value of the pixel point does not lie between the lower limit pixel value and the upper limit pixel value, the pixel point is determined to be a pixel point to be corrected; alternatively, whether the pixel point is a pixel point to be corrected is determined based on an auxiliary feature corresponding to the first image area.
If the auxiliary feature comprises the pixel value of each pixel point in the first image area, the acquisition module, when determining based on the auxiliary feature whether the pixel point is a pixel point to be corrected, is specifically configured to: determine that the pixel point is a pixel point to be corrected if its pixel value is smaller than a first threshold and an overexposure area exists in the first image area; and determine that the pixel point is not a pixel point to be corrected if its pixel value is not smaller than the first threshold and/or no overexposure area exists in the first image area. No overexposure area exists in the first image area if the pixel value of every pixel point in the first image area is smaller than a second threshold, the second threshold being larger than the first threshold; an overexposure area exists in the first image area if the pixel value of at least one pixel point in the first image area is not smaller than the second threshold.
The application provides an image correction method, which comprises the following steps:
acquiring a first image area from an original image, wherein the first image area comprises a pixel point to be corrected and an overexposed image area, or the first image area comprises the pixel point to be corrected and adjoins the overexposed image area;
extracting a target pixel block from the first image area, wherein the target pixel block is one of the pixel blocks in the first image area and its selection is related to the gradient values and pixel values of the pixel blocks;
Correcting the pixel value of the pixel point to be corrected based on the pixel characteristics of the target pixel block to obtain a corrected pixel value;
generating a corrected image based on the corrected pixel values.
The application provides an electronic device comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor for executing the machine-executable instructions to implement the image correction method of the above example.
As can be seen from the above technical solutions, in the embodiments of the present application, the first image area is divided into K pixel blocks, a target pixel block is selected from the K pixel blocks based on the pixel characteristics corresponding to each pixel block, and the pixel value of the pixel point to be corrected is corrected based on the pixel characteristics corresponding to the target pixel block, so as to obtain a corrected image. By correcting the pixel value of the pixel point to be corrected (i.e., the dead pixel), image quality can be improved and image clarity increased; image details are neither missing nor reduced, and image integrity is comparatively good. The problem of severe loss of detail information is solved, and the detail restoration and integrity of the image are effectively improved.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a dead pixel in an embodiment of the application;
FIG. 2 is a flow chart of an image correction method in one embodiment of the application;
FIGS. 3A and 3B are schematic diagrams of dead pixels and dead pixel blocks in one embodiment of the application;
FIG. 3C is a flow chart of an image correction method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of pixel location relationships in one embodiment of the application;
FIG. 5 is a schematic diagram of 8 pixel blocks in one embodiment of the application;
FIGS. 6A and 6B are pictorial representations of one embodiment of the present application;
fig. 7 is a schematic structural view of an image correction apparatus in an embodiment of the present application;
fig. 8 is a hardware configuration diagram of an electronic device in an embodiment of the application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "upon," "when," or "in response to determining."
The embodiment of the application provides an image correction method, which can be applied to any electronic device. Referring to fig. 2, a schematic flow chart of the image correction method, the method may include:
step 201, obtaining a pixel to be corrected in an original image, and obtaining a first image area corresponding to the pixel to be corrected from the original image, wherein a center pixel of the first image area is the pixel to be corrected. The pixel point to be corrected may be an abnormal pixel point belonging to a bad pixel block.
Step 202, dividing the first image area into K pixel blocks, where K is a positive integer greater than 1.
Step 203, selecting a target pixel block from the K pixel blocks based on the pixel characteristics corresponding to each pixel block, that is, selecting one pixel block from the K pixel blocks as the target pixel block corresponding to the pixel point to be corrected. The target pixel block is a pixel block with the greatest similarity with the pixel point to be corrected, namely, the target pixel block is a pixel block with the greatest reference information provided for the pixel point to be corrected.
Step 204, correcting the pixel value of the pixel point to be corrected based on the pixel characteristics corresponding to the target pixel block to obtain the corrected pixel value of the pixel point to be corrected. For example, a second image area may be obtained from the original image, and when the pixel value of the pixel point to be corrected is corrected, the pixel value of each pixel point in the second image area is corrected based on the pixel characteristic corresponding to the target pixel block.
Step 205, generating a corrected image based on the corrected pixel values of the pixel points to be corrected.
For example, the K pixel blocks form a plurality of pixel block sets. Each pixel block set comprises two pixel blocks, and the line connecting the central pixel points of the two pixel blocks in the set passes through the pixel point to be corrected. On this basis, selecting the target pixel block from the K pixel blocks based on the pixel characteristics corresponding to each pixel block can include, but is not limited to: for each pixel block set, determining a gradient value corresponding to the set based on the pixel characteristics respectively corresponding to its two pixel blocks, wherein the pixel characteristic of a pixel block can be the average value, median value, maximum value, or minimum value of the pixel values of all pixel points in the block; and, based on the gradient value corresponding to each set, selecting the pixel block with the smaller pixel characteristic in the set corresponding to the maximum gradient value as the target pixel block.
And the pixel block with smaller pixel characteristics in the pixel block set corresponding to the maximum gradient value is the pixel block with the maximum similarity with the pixel point to be corrected, and the pixel block is taken as the target pixel block. Obviously, based on the gradient value and the pixel characteristic corresponding to the pixel block, the target pixel block can be found from all the pixel blocks.
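The pair-and-gradient selection rule can be sketched as follows. This is a minimal illustration rather than the patent's implementation: it splits a square window into four corner blocks (the patent allows any K blocks), uses the mean as the pixel feature (median, maximum, or minimum are equally valid per the text), and all function names are invented.

```python
import numpy as np

def select_target_block(window):
    """Split a (2r+1)x(2r+1) window centred on the pixel to be corrected
    into four corner blocks, pair up opposite blocks (the line joining
    their centres passes through the centre pixel), take the pair with the
    largest gradient (absolute difference of block features), and return
    the block of that pair with the smaller feature."""
    r = window.shape[0] // 2
    blocks = {
        "top_left": window[:r, :r],
        "top_right": window[:r, r + 1:],
        "bottom_left": window[r + 1:, :r],
        "bottom_right": window[r + 1:, r + 1:],
    }
    feature = {name: float(b.mean()) for name, b in blocks.items()}
    # Opposite pairs: their centres' connecting line crosses the centre pixel.
    pairs = [("top_left", "bottom_right"), ("top_right", "bottom_left")]
    best_pair = max(pairs, key=lambda p: abs(feature[p[0]] - feature[p[1]]))
    target = min(best_pair, key=lambda name: feature[name])
    return blocks[target], feature[target]

def correct_pixel(window):
    """Replace the centre pixel value with the target block's pixel feature."""
    _, value = select_target_block(window)
    corrected = window.astype(float).copy()
    corrected[window.shape[0] // 2, window.shape[1] // 2] = value
    return corrected
```

Picking the smaller-feature block of the highest-gradient pair steers the replacement value away from the overexposed side of an edge, which matches the dead-pixel-at-overexposure-edge scenario the patent targets.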
For example, a second image area may be obtained from the original image, wherein the central pixel point of the second image area is the pixel point to be corrected and the size of the second image area may be smaller than that of the first image area. The second image area is used for determining the pixel points to be corrected (i.e., each pixel point in the second image area is treated as a pixel point to be corrected), the first image area is used for determining the target pixel block, and the target pixel block is used for correcting all pixel points of the second image area. The pixel value of each pixel point in the second image area is corrected based on the pixel characteristic corresponding to the target pixel block to obtain a corrected pixel value, wherein the pixel characteristic corresponding to the target pixel block can be the average value, median value, maximum value, or minimum value of the pixel values of all pixel points in the target pixel block.
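Correcting the whole second image area with a single target-block feature can be sketched as below; the function name, the border clipping, and the default size are illustrative assumptions.

```python
import numpy as np

def correct_region(image, row, col, value, size=3):
    """Overwrite the size x size second image area centred on (row, col)
    with the target block's pixel feature `value`, clipping at the image
    borders. `size` should be odd and smaller than the first-image-area
    window from which the target block was chosen."""
    half = size // 2
    r0, r1 = max(row - half, 0), min(row + half + 1, image.shape[0])
    c0, c1 = max(col - half, 0), min(col + half + 1, image.shape[1])
    out = image.copy()
    out[r0:r1, c0:c1] = value
    return out
```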
The size of the first image region may be determined based on a target gain value corresponding to the original image, which may be a gain value employed when the original image is acquired by the sensor, and the size of the second image region may be determined based on the target gain value, for example.
For example, the sizes may be obtained as follows: determine the target gain value interval to which the target gain value belongs; obtain the target intensity control parameter corresponding to that interval by querying a first mapping relation, wherein the first mapping relation comprises correspondences between gain value intervals and intensity control parameters, and a larger gain value interval corresponds to a larger intensity control parameter; obtain the first target size corresponding to the target intensity control parameter by querying a second mapping relation, the first target size being the size of the first image area; and obtain the second target size corresponding to the target intensity control parameter by querying a third mapping relation, the second target size being the size of the second image area and smaller than the first target size. The second mapping relation comprises correspondences between intensity control parameters and first sizes (a larger parameter corresponds to a larger first size), and the third mapping relation comprises correspondences between intensity control parameters and second sizes (a larger parameter corresponds to a larger second size).
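The three mapping relations lend themselves to simple lookup tables. In the sketch below every interval boundary, intensity parameter, and window size is invented for illustration; only the monotonic relationships (larger gain interval gives a larger parameter, a larger parameter gives larger sizes, and the second size stays smaller than the first) come from the text.

```python
# First mapping: gain-value interval -> intensity control parameter
# (a larger gain interval maps to a larger parameter).
GAIN_INTERVALS = [(0, 12), (12, 24), (24, 36), (36, float("inf"))]
INTENSITY_PARAMS = [1, 2, 3, 4]

# Second and third mappings: intensity control parameter -> first / second
# window size (larger parameter -> larger sizes; second < first throughout).
FIRST_SIZE = {1: 5, 2: 7, 3: 9, 4: 11}
SECOND_SIZE = {1: 3, 2: 3, 3: 5, 4: 7}

def window_sizes(gain):
    """Return (first_image_area_size, second_image_area_size) for a gain."""
    for (lo, hi), param in zip(GAIN_INTERVALS, INTENSITY_PARAMS):
        if lo <= gain < hi:
            return FIRST_SIZE[param], SECOND_SIZE[param]
    raise ValueError("gain value out of range")
```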
For example, obtaining the pixel point to be corrected in the original image may include, but is not limited to: for a pixel point in the original image, obtaining the first image area corresponding to the pixel point from the original image; selecting adjacent pixel points corresponding to the pixel point from the first image area; and determining an upper limit pixel value and a lower limit pixel value corresponding to the pixel point based on the pixel values of the adjacent pixel points. If the pixel value of the pixel point lies between the lower limit pixel value and the upper limit pixel value, the pixel point is determined not to be a pixel point to be corrected, i.e., not an abnormal pixel point belonging to a dead pixel block. If the pixel value of the pixel point does not lie between the lower limit pixel value and the upper limit pixel value, the pixel point is determined to be a pixel point to be corrected, i.e., an abnormal pixel point belonging to the dead pixel block; alternatively, whether the pixel point is a pixel point to be corrected is determined based on the auxiliary feature corresponding to the first image area.
For example, if the auxiliary feature includes the pixel value of each pixel point in the first image area, determining based on the auxiliary feature whether the pixel point is a pixel point to be corrected may include, but is not limited to: if the pixel value of the pixel point is smaller than a first threshold and an overexposure area exists in the first image area, the pixel point is determined to be a pixel point to be corrected, i.e., an abnormal pixel point belonging to the dead pixel block. If the pixel value of the pixel point is not smaller than the first threshold and/or no overexposure area exists in the first image area, the pixel point is determined not to be a pixel point to be corrected, i.e., not an abnormal pixel point belonging to the dead pixel block.
If the pixel value of each pixel point in the first image area is smaller than a second threshold value, and the second threshold value is larger than the first threshold value, no overexposure area exists in the first image area; and if the pixel value of at least one pixel point in the first image area is not smaller than the second threshold value, the overexposure area exists in the first image area.
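The detection rules above (neighbor-bound check plus the overexposure auxiliary feature) can be sketched as follows. Using the 8-neighborhood as the source of the upper and lower limit pixel values, and the concrete thresholds, are assumptions for illustration.

```python
import numpy as np

def is_dead_pixel(window, t1=64, t2=200):
    """Decide whether the centre pixel of `window` (the first image area)
    is a pixel point to be corrected. t1 is the first threshold, t2 the
    second (overexposure) threshold, with t1 < t2."""
    r = window.shape[0] // 2
    centre = window[r, r]
    # Upper/lower limits from the 8 adjacent pixels (centre excluded).
    neighbours = np.delete(window[r - 1:r + 2, r - 1:r + 2].ravel(), 4)
    if neighbours.min() <= centre <= neighbours.max():
        return False  # within the neighbour bounds: not a dead pixel
    # Auxiliary feature: a dark pixel sitting next to an overexposed area.
    overexposed = bool((window >= t2).any())
    return bool(centre < t1 and overexposed)
```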
As can be seen from the above technical solutions, in the embodiments of the present application, the first image area is divided into K pixel blocks, a target pixel block is selected from the K pixel blocks based on the pixel characteristics corresponding to each pixel block, and the pixel value of the pixel point to be corrected is corrected based on the pixel characteristics corresponding to the target pixel block, so as to obtain a corrected image. By correcting the pixel value of the pixel point to be corrected (i.e., the dead pixel), image quality can be improved and image clarity increased; image details are neither missing nor reduced, and image integrity is comparatively good. The problem of severe loss of detail information is solved, and the detail restoration and integrity of the image are effectively improved.
The image correction method according to the embodiment of the present application will be described below with reference to specific embodiments.
When an image acquisition device acquires an image of a target scene through a sensor, abnormal pixel points may exist in the image due to design defects of the sensor, manufacturing process limitations, or defects in the transmission link of the imaging system. The pixel values of these pixel points clearly do not match the pixel values of the surrounding pixel points, impairing the clarity and integrity of the image; such abnormal pixel points are called dead pixels.
Dead pixels in an image are generally classified into static dead pixels and dynamic dead pixels. A static dead pixel is a pixel point with a fixed position and an inaccurate pixel value caused by manufacturing process defects. A dynamic dead pixel is a pixel point with an unfixed position caused by manufacturing process defects or errors in the photoelectric signal conversion process; it behaves normally within a certain brightness range, but when that range is exceeded its pixel value changes abnormally, and its difference from surrounding pixel points varies with the temperature and the gain value of the sensor.
A dead pixel region formed by the adhesion of a large number of dynamic dead pixels can be called a dead pixel block (i.e., a dynamic dead pixel block). Dead pixel blocks are caused by sensor design defects or data transmission link design defects and usually appear at the edge of a highly reflective region of the image. Compared with static and dynamic dead pixels, a dead pixel block is less influenced by its own pixel points and more influenced by the surrounding pixel points; it has the uncertainty and randomness of dynamic dead pixels, covers a larger area, and damages the clarity, details, and integrity of the image more severely. An unstable area and many invalid neighborhood pixels are the main characteristics of a dead pixel block.
Referring to fig. 3A, a dead pixel (static or dynamic) is illustrated; referring to fig. 3B, a dead pixel block is illustrated. A dead pixel involves only a small number of pixel points, whereas a dead pixel block is formed by the adhesion of a large number of dynamic dead pixels. During lesion detection, reflections from internal structures of the human body and other causes produce a pronounced overexposed area on the imaged image; dead pixels appear at the edge of the overexposed area, and a large number of dynamic dead pixels adhere to form dead pixel blocks, which degrade image quality and interfere with the doctor's observation.
The presence of dead pixel blocks reduces image quality and affects image clarity, image details, and image integrity: image clarity is reduced, image details are lost, and image integrity is poor.
In view of the above, an embodiment of the present application provides an image correction method that can correct the dead pixel blocks (i.e., dead pixel regions formed by the adhesion of a large number of dynamic dead pixels) in an original image to obtain a corrected image. The method can be applied to an image acquisition device (e.g., a camera), which performs image correction on the original image after acquiring it. The method can also be applied to a back-end device (e.g., a server, a host, an NVR, a storage device, etc.): after the image acquisition device acquires the original image, the original image is input to the back-end device, which performs the image correction.
Referring to fig. 3C, a flowchart of an image correction method is shown, and the method may include:
Step 301, acquiring an original image. The original image may be a Bayer-format image, or an image in another format, such as an RGB-format image or a YUV-format image; the format of the image is not limited here. Taking the Bayer-format image as an example, it simulates the human eye's sensitivity to color and converts gray information into color information by adopting a 1-red, 2-green, 1-blue arrangement.
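In a Bayer mosaic, same-color samples are not adjacent in the raw array, so processing is commonly done per color plane. A sketch of splitting the mosaic, assuming RGGB ordering (the patent only states the 1-red, 2-green, 1-blue arrangement, so the ordering and per-plane processing are assumptions):

```python
import numpy as np

def split_rggb(raw):
    """Split an RGGB Bayer mosaic into its four color planes: one red, two
    green, and one blue sample per 2x2 cell."""
    return {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }
```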
For example, in some application scenarios, an endoscope may be used to acquire an image of a specific type of tissue (e.g., nerve tissue or organ tissue, etc.) within a target object, and the image may be used as the original image.
For another example, in some application scenarios, a camera may be used to capture an image (e.g., a vehicle image, etc.) of a target scene (e.g., a highway, etc.), which may be taken as the original image.
For another example, in some application scenarios, a camera may be used to capture an image (e.g., a human body image, etc.) of a target scene (e.g., an access control system, etc.), which may be used as an original image.
Of course, the above is just a few examples and there is no limitation on the source of this original image.
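For illustration, the 1-red, 2-green, 1-blue Bayer arrangement mentioned in step 301 can be sketched as a small helper that reports which color channel a pixel coordinate carries. The RGGB row ordering below (even rows R/Gr, odd rows Gb/B) is an assumption for illustration only; actual sensors may start the pattern differently.

```python
def bayer_channel(i, j):
    """Return the Bayer color channel of pixel (i, j), assuming an RGGB
    layout: even rows alternate R, Gr; odd rows alternate Gb, B."""
    if i % 2 == 0:
        return "R" if j % 2 == 0 else "Gr"
    return "Gb" if j % 2 == 0 else "B"
```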
Step 302, obtaining a target gain value corresponding to an original image, where the target gain value may be a gain value adopted when the image acquisition device acquires the original image through a sensor. For example, when the image acquisition device acquires the original image through the sensor, the brightness of the image can be controlled through parameters such as a shutter, a gain value, an aperture and the like, so that the gain value can be used as a target gain value corresponding to the original image.
For example, since the area of a dead pixel block changes as the gain value changes, a target gain value corresponding to the original image may be obtained. The degree of the dead pixel block may be estimated from the target gain value, and the size of the dead pixel correction window may likewise be estimated from the target gain value, so that dead-pixel-block detection and dead-pixel-block correction are linked through the target gain value; the detection and correction processes are described in the subsequent steps.
Step 303, locating the bad point block in the original image.
For example, since a dead pixel block is formed by a large number of dynamic dead pixels sticking together, the pixel information in its neighborhood is unreliable. To avoid missed detections, false detections, and the like, detection precision for dead pixel blocks can be improved through dead pixel degree estimation, pixel constraints, auxiliary feature judgment, and similar methods. For example, the following steps may be used to locate a dead pixel block in the original image (i.e., determine whether a pixel in the original image belongs to a dead pixel block):
Step 3031, estimating the bad point degree. For example, for a pixel point in an original image, a first image area corresponding to the pixel point is obtained from the original image, and the first image area is a dead point degree estimation result.
The central pixel point of the first image area is the pixel point itself, and the size of the first image area is m×m, where the value of m is configured empirically and is not limited. The value of m may be odd or even; the following examples use odd values. The value of m may lie between a minimum value and a maximum value, each configured empirically (for example, a minimum of 5 or 7 and a maximum of 9 or 11). Taking a minimum of 7 and a maximum of 11 as an example, 7 ≤ m ≤ 11, and when m is odd, m may be 7, 9, or 11.
In one possible embodiment, the size (m) of the first image region may be determined based on a target gain value corresponding to the original image. For example, assuming that m has K size values, all gain values may be divided into K gain value intervals, gain value interval 1 corresponding to the 1 st size value of m, gain value interval 2 corresponding to the 2 nd size value of m, and so on. After obtaining a target gain value corresponding to the original image, determining a target gain value interval to which the target gain value belongs, and taking a size value corresponding to the target gain value interval as the size of the first image area.
For example, assuming that m has 3 size values (7, 9, and 11), all gain values may be divided into 3 gain value intervals, where gain value interval 1 (e.g., minimum gain value to gain value A) corresponds to size value 7, gain value interval 2 (e.g., gain value A to gain value B) corresponds to size value 9, and gain value interval 3 (e.g., gain value B to maximum gain value) corresponds to size value 11. After the target gain value is obtained: if it belongs to gain value interval 1, the size value 7 is taken as the size of the first image area, that is, the first image area is 7×7; if it belongs to gain value interval 2, the first image area is 9×9; and if it belongs to gain value interval 3, the first image area is 11×11.
In one possible embodiment, the size (m) of the first image region may be determined based on a target gain value corresponding to the original image. By way of example, assuming that m has K size values, all gain values may be divided into K gain value intervals. The first mapping relationship may be preconfigured, where the first mapping relationship includes a correspondence between a gain value interval and an intensity control parameter, and when the gain value interval is larger, the intensity control parameter corresponding to the gain value interval is larger. The first mapping relationship may be a mapping table, a mapping function, or a mapping curve, which is not limited as long as the mapping relationship between the gain value interval and the intensity control parameter is included. The second mapping relationship may be preconfigured, where the second mapping relationship includes a correspondence between the intensity control parameter and the first size, and when the intensity control parameter is larger, the first size corresponding to the intensity control parameter is larger. The second mapping relationship may be a mapping table, a mapping function, a mapping curve, which is not limited thereto. Based on the method, a target gain value interval to which the target gain value belongs can be determined, a target intensity control parameter corresponding to the target gain value interval is obtained by inquiring a first mapping relation, and then a first target size corresponding to the target intensity control parameter is obtained by inquiring a second mapping relation, wherein the first target size is the size of the first image area.
For example, assume that m has 3 size values (7, 9, and 11) and that all gain values are divided into gain value interval 1 (e.g., minimum gain value to gain value A), gain value interval 2 (e.g., gain value A to gain value B), and gain value interval 3 (e.g., gain value B to maximum gain value). The first mapping relationship may include the correspondence between gain value interval 1 and intensity control parameter 1, between gain value interval 2 and intensity control parameter 2, and between gain value interval 3 and intensity control parameter 3, where intensity control parameter 3 is greater than intensity control parameter 2, which in turn is greater than intensity control parameter 1. The second mapping relationship may include the correspondence between intensity control parameter 1 and size value 7 (i.e., the first size), between intensity control parameter 2 and size value 9, and between intensity control parameter 3 and size value 11. After the target gain value is obtained, if it belongs to gain value interval 1, intensity control parameter 1 is obtained by querying the first mapping relationship; size value 7 is then obtained by querying the second mapping relationship and taken as the size of the first image area, that is, the first image area is a 7×7 image area.
Of course, the above is only an example of determining the size (m×m) of the first image area based on the target gain value, and is not limited thereto, as long as the size (m×m) of the first image area is related to the target gain value.
For example, the intensity control parameter may be a dead pixel block degree estimation parameter, the intensity control parameter may be denoted as α, and the intensity control parameter may control the size of the first image area, and since the intensity control parameter is determined based on the target gain value, the size of the first image area is determined based on the target gain value.
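The two-stage lookup described above (gain value interval → intensity control parameter α → first size m) can be sketched as follows. The gain boundaries A and B and the concrete mapping values are assumptions for illustration only, since the embodiment leaves them to empirical configuration.

```python
import bisect

# Hypothetical interval boundaries for gain values A and B; the patent
# leaves the actual thresholds to empirical configuration.
GAIN_BOUNDARIES = [100, 200]

# First mapping: gain value interval index -> intensity control parameter
INTENSITY_BY_INTERVAL = {0: 1, 1: 2, 2: 3}

# Second mapping: intensity control parameter -> first size m (window is m x m)
SIZE_BY_INTENSITY = {1: 7, 2: 9, 3: 11}

def first_window_size(target_gain):
    """Resolve the m x m first-image-area size from a target gain value
    via the two pre-configured mappings."""
    interval = bisect.bisect_right(GAIN_BOUNDARIES, target_gain)
    alpha = INTENSITY_BY_INTERVAL[interval]
    return SIZE_BY_INTENSITY[alpha]
```

A larger gain thus yields a larger intensity control parameter and a larger detection window, matching the observation that dead pixel blocks grow with gain.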
For example, for a pixel point in an original image, a first image area with the pixel point as a center pixel point, that is, an image area with a size of m×m, is obtained from the original image based on the size of the first image area.
Step 3032, analyzing whether the pixel points in the original image belong to the bad point block based on the pixel constraint.
For example, for a pixel point in the original image, an adjacent pixel point corresponding to the pixel point may be selected from the first image area, and an upper limit pixel value and a lower limit pixel value corresponding to the pixel point may be determined based on the pixel values of the adjacent pixel point. If the pixel value of the pixel is located between the upper limit pixel value and the lower limit pixel value, determining that the pixel is not the pixel to be corrected, i.e. the pixel is a normal pixel which does not belong to the bad pixel block. If the pixel value of the pixel is not located between the upper limit pixel value and the lower limit pixel value, determining that the pixel is a pixel to be corrected, that is, the pixel is an abnormal pixel belonging to the bad pixel block, or continuously determining that the pixel is not the pixel to be corrected or is the pixel to be corrected based on the auxiliary feature corresponding to the first image area.
For example, the value of a normal pixel point in the original image should fall within a range constrained by its adjacent pixel points, so dead pixels can be preliminarily screened out according to this pixel-constraint method. For the pixel point P(i, j) in the original image, a 1×m adjacent-pixel array may be constructed with P(i, j) as the central pixel point, i.e., the pixel points in the same row as P(i, j) within the first image area form the adjacent-pixel array. Alternatively, an m×1 adjacent-pixel array may be constructed with P(i, j) as the central pixel point, i.e., the pixel points in the same column as P(i, j) within the first image area form the adjacent-pixel array.
Taking the 1×m adjacent-pixel array as an example, adjacent pixel points corresponding to the pixel point P(i, j) may be selected from the array, and the upper limit pixel value P max and the lower limit pixel value P min corresponding to P(i, j) are determined based on the pixel values of those adjacent pixels. For example, the upper limit pixel value P max may be determined as shown in formula (1); formula (1) is merely an example, and the determination method is not limited, as long as P max is determined based on the pixel values of the adjacent pixel points. Likewise, the lower limit pixel value P min may be determined as shown in formula (2); formula (2) is also merely an example, and the determination method is not limited, as long as P min is determined based on the pixel values of the adjacent pixel points.
Assuming that the size of the first image area is 9×9, that is, the value of m is 9, formula (1) may be converted into formula (3), i.e., the upper limit pixel value P max may be determined using formula (3), and formula (2) may be converted into formula (4), i.e., the lower limit pixel value P min may be determined using formula (4).
P max = MAX(2×P2 − P1, 2×P3 − P4, P2, P3)    formula (3)
P min = MIN(2×P2 − P1, 2×P3 − P4, P2, P3)    formula (4)
In formulas (3) and (4), P1, P2, P3, and P4 are the pixel values of the adjacent pixels corresponding to the pixel point P(i, j); their positional relationship with P(i, j) is shown in fig. 4. That is, P1 represents the pixel value of the 4th adjacent pixel to the left of P(i, j), P2 the pixel value of the 2nd adjacent pixel to the left, P3 the pixel value of the 2nd adjacent pixel to the right, and P4 the pixel value of the 4th adjacent pixel to the right.
After the upper limit pixel value P max and the lower limit pixel value P min are obtained, if the pixel value of the pixel point P(i, j) lies between P min and P max, P(i, j) is determined to be a normal pixel point that does not belong to a dead pixel block. If the pixel value of P(i, j) does not lie between P min and P max, P(i, j) is determined to be a suspected dead pixel (whether it belongs to a dead pixel block cannot yet be confirmed); step 3033 is then executed to analyze, based on the auxiliary features, whether the pixel point belongs to a dead pixel block in the original image.
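The pixel-constraint test of step 3032 can be sketched for the m = 9 case using formulas (3) and (4). The helper below assumes the 1×9 row-neighbor array; the function name is illustrative.

```python
def pixel_constraint(row, j):
    """Pixel-constraint test along a 1 x 9 adjacent-pixel array (m = 9).
    row is a sequence of pixel values; j indexes the center pixel P(i, j).
    Neighbors P1..P4 sit at offsets -4, -2, +2, +4, so in a Bayer image
    they share the center pixel's color channel. Returns 'normal' when
    the center value lies inside [P min, P max], else 'suspected'
    (handed on to the auxiliary-feature check of step 3033)."""
    p1, p2 = row[j - 4], row[j - 2]
    p3, p4 = row[j + 2], row[j + 4]
    p_max = max(2 * p2 - p1, 2 * p3 - p4, p2, p3)   # formula (3)
    p_min = min(2 * p2 - p1, 2 * p3 - p4, p2, p3)   # formula (4)
    return "normal" if p_min <= row[j] <= p_max else "suspected"
```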
Step 3033, analyzing whether the pixel points in the original image belong to the dead pixel blocks based on the auxiliary features.
For example, the auxiliary feature may include, but is not limited to, a pixel value of each pixel in the first image area, and if the pixel value of the pixel P (i, j) is smaller than a first threshold (configured empirically) and there is an overexposed area in the first image area, the pixel P (i, j) is determined to be a pixel to be corrected, that is, the pixel P (i, j) is an abnormal pixel belonging to the dead pixel block. If the pixel value of the pixel P (i, j) is not less than the first threshold value and/or there is no overexposure region in the first image region, it is determined that the pixel P (i, j) is not the pixel to be corrected, that is, the pixel P (i, j) is a normal pixel that does not belong to the dead pixel block.
If the pixel value of each pixel point in the first image area is smaller than a second threshold value, and the second threshold value is larger than the first threshold value, no overexposure area exists in the first image area; and if the pixel value of at least one pixel point in the first image area is not smaller than the second threshold value, the overexposure area exists in the first image area.
For example, auxiliary feature judgment supplements the pixel-constraint method by using scene characteristics, pixel features, and the like of dead pixel blocks as auxiliary features to improve the precision of dead pixel judgment. Dead pixel blocks appear at the edges of highly reflective areas, i.e., an overexposed area exists in the dead pixel neighborhood, and at the same time the pixel value of a dead pixel is far lower than the surrounding pixel values. Based on this, these two characteristics (an overexposed area exists in the dead pixel neighborhood, and the dead pixel value is far lower than the surrounding values) can serve as an auxiliary judgment method supplementing the pixel-constraint method: detecting whether a pixel belongs to a dead pixel block through this auxiliary judgment effectively improves the accuracy of dead-pixel-block detection.
The existence of the overexposure region in the dead pixel neighborhood refers to that if the pixel value of each pixel point in the first image region is smaller than a second threshold (which can be configured empirically, and can be a relatively large pixel value, that is, the second threshold is larger than the first threshold), the overexposure region does not exist in the dead pixel neighborhood, and if the pixel value of at least one pixel point in the first image region is not smaller than the second threshold, the overexposure region exists in the dead pixel neighborhood.
The bad pixel value is far lower than the surrounding pixel value, which means that if the pixel value of the pixel P (i, j) is smaller than the first threshold (which may be a relatively small pixel value according to an empirical configuration), it means that the pixel P (i, j) is far lower than the surrounding pixel value, and if the pixel value of the pixel P (i, j) is not smaller than the first threshold, it means that the pixel P (i, j) is not far lower than the surrounding pixel value.
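The auxiliary-feature judgment of step 3033 reduces to two predicates: the center value is below the first threshold, and at least one pixel in the first image area reaches the second threshold (overexposure). A minimal sketch follows; the threshold values in the test are assumptions, since both thresholds are configured empirically.

```python
def auxiliary_check(center_value, region, t_low, t_high):
    """Auxiliary-feature judgement for a suspected dead pixel.
    region is the m x m first image area (list of rows); t_low is the
    first threshold (dead pixels are far darker than their neighbors)
    and t_high the second, larger threshold (overexposure). Returns
    True only when the pixel is confirmed as a pixel to be corrected."""
    overexposed = any(v >= t_high for row in region for v in row)
    return center_value < t_low and overexposed
```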
Thus, step 303 is completed, where the pixel to be corrected in the original image may be located, where the pixel to be corrected is an abnormal pixel belonging to the bad pixel block, and after a certain pixel to be corrected is found, the subsequent step 304 is executed for the pixel to be corrected. For example, each pixel in the original image is traversed in turn, and for the currently traversed pixel, if the pixel is not the pixel to be corrected, the next pixel is traversed continuously, and if the pixel is the pixel to be corrected, the next step 304 is executed for the pixel to be corrected.
Through step 3031, step 3032 and step 3033, the defective pixel block area can be effectively positioned, and the pixel point to be corrected can be found, so that the defective pixel block (i.e., the pixel point to be corrected) can be corrected conveniently.
And 304, carrying out bad point block correction on the bad point blocks in the original image.
By way of example, dead pixels are distributed in large numbers within a certain range and stick together into blocks, so the number of invalid pixels within a certain neighborhood is relatively large and the information provided by pixels close to the dead pixels is unreliable. However, normal pixel blocks exist in the regions of the neighborhood farther from the correction center, and the detail information they contain is similar to the detail lost in the dead-pixel-block region. Therefore, a normal pixel block can be migrated to the dead-pixel-block region by migrating and fusing the pixel block as a whole, and pixel smoothness can be improved by interpolating and fusing with the original pixels in the dead-pixel-block region. Based on this principle, to correct a dead pixel block in the original image, the pixel-block selection direction may first be determined, a target pixel block selected based on that direction, and the pixel values of the dead pixel block interpolated and fused based on the pixel values of the target pixel block, thereby correcting the dead pixel block. Clearly, by using pixel migration, the pixel values of a block far from the dead-pixel-block region can be migrated as a whole and interpolated and fused with the original pixels, yielding a more accurate correction value.
For example, the following steps may be used to perform bad block correction on a bad block in an original image:
Step 3041, determining the pixel migration direction. For example, for a pixel point to be corrected in an original image (the pixel point to be corrected belongs to a dead pixel block), a first image area corresponding to the pixel point to be corrected is obtained from the original image, the first image area is divided into K pixel blocks, a target pixel block is selected from the K pixel blocks based on pixel characteristics corresponding to each pixel block, and the target pixel block is a pixel block in a pixel migration direction.
For example, after obtaining the pixel to be corrected from the original image, a first image area may be obtained, where the center pixel of the first image area is the pixel to be corrected, and the size of the first image area is m×m, and the first image area is already obtained in step 303, and may be used in step 304.
The first image area may be divided into K pixel blocks, and the K pixel blocks constitute a plurality of pixel block sets, each including two pixel blocks, for example, the first image area may be divided into 4 pixel blocks, or 6 pixel blocks, or 8 pixel blocks, or 10 pixel blocks, or the like, and the number of the pixel blocks is not limited. For each pixel block set, the connection line of the central pixel points of the two pixel blocks in the pixel block set passes through the pixel point to be corrected, and of course, the connection line of the central pixel points of the two pixel blocks does not pass through the pixel point to be corrected, but the vertical distance between the pixel point to be corrected and the connection line of the central pixel point is smaller than a preset threshold value, which is not limited.
For example, taking the division of the first image area into 8 pixel blocks as an example, referring to fig. 5, the first image area is divided into 8 pixel blocks using an eight-neighborhood method, and the central pixel point of the first image area is the pixel point to be corrected. The 8 pixel blocks are Block 1, Block 2, Block 3, Block 4, Block 5, Block 6, Block 7, and Block 8; they may be of the same size or of different sizes. The 8 pixel blocks may have overlapping pixels (e.g., Block 1 and Block 2 overlap, Block 1 and Block 4 overlap, Block 2 and Block 3 overlap, and so on; such an overlapping relationship is not shown in fig. 5), or may have no overlapping pixels (the relationship shown in fig. 5). The size of each pixel block is not limited in this embodiment as long as it is smaller than the size of the first image area; for example, if the first image area is 9×9, each pixel block may be 3×3 or 5×5.
Referring to fig. 5, block 1 and Block 8 form a pixel Block set 1, block 2 and Block 7 form a pixel Block set 2, block 3 and Block 6 form a pixel Block set 3, and Block 4 and Block 5 form a pixel Block set 4. Of course, the first image region may include all pixel Block sets in the pixel Block set 1, the pixel Block set 2, the pixel Block set 3 and the pixel Block set 4, or may include a partial pixel Block set.
Obviously, the central pixel point connection line of the blocks 1 and 8 in the pixel Block set 1 passes through the pixel point to be corrected, the central pixel point connection line of the blocks 2 and 7 in the pixel Block set 2 passes through the pixel point to be corrected, the central pixel point connection line of the blocks 3 and 6 in the pixel Block set 3 passes through the pixel point to be corrected, and the central pixel point connection line of the blocks 4 and 5 in the pixel Block set 4 passes through the pixel point to be corrected.
For each set of pixel blocks, the gradient value corresponding to the set of pixel blocks may be determined based on the pixel characteristics corresponding to the two pixel blocks within the set of pixel blocks, respectively. The pixel characteristics corresponding to the pixel block may be an average value, a median value, a maximum value, or a minimum value of the pixel values of all the pixel points in the pixel block, which is, of course, only a few examples. Based on this, the average value P1 of the pixel values of all the pixels in Block 1, the average value P2 of the pixel values of all the pixels in Block 2, the average value P3 of the pixel values of all the pixels in Block 3, the average value P4 of the pixel values of all the pixels in Block 4, the average value P5 of the pixel values of all the pixels in Block 5, the average value P6 of the pixel values of all the pixels in Block 6, the average value P7 of the pixel values of all the pixels in Block 7, and the average value P8 of the pixel values of all the pixels in Block 8 are calculated.
Then, based on the pixel characteristics corresponding to Block 1 and Block 8 in pixel block set 1 (i.e., the pixel block set in the northwest direction), its gradient value is determined as Gradient WN = |P1 − P8|. Based on the pixel characteristics corresponding to Block 2 and Block 7 in pixel block set 2 (the pixel block set in the vertical direction), its gradient value is Gradient V = |P2 − P7|. Based on the pixel characteristics corresponding to Block 3 and Block 6 in pixel block set 3 (the pixel block set in the northeast direction), its gradient value is Gradient EN = |P3 − P6|. Based on the pixel characteristics corresponding to Block 4 and Block 5 in pixel block set 4 (the pixel block set in the horizontal direction), its gradient value is Gradient H = |P5 − P4|.
For example, based on the gradient value corresponding to each pixel block set, the pixel block set corresponding to the maximum gradient value may be selected, so that the pixel block with smaller pixel characteristics in the pixel block set corresponding to the maximum gradient value is selected as the target pixel block, and thus, the target pixel block corresponding to the pixel point to be corrected is successfully selected. The target pixel block is a pixel block with the greatest similarity to the pixel point to be corrected, namely, a pixel block with smaller pixel characteristics in the pixel block set corresponding to the maximum gradient value is a pixel block with the greatest similarity to the pixel point to be corrected.
For example, gradient H is a horizontal Gradient (Gradient value corresponding to a pixel block set in the horizontal direction), gradient V is a vertical Gradient (Gradient value corresponding to a pixel block set in the vertical direction), gradient EN is a northeast Gradient (Gradient value corresponding to a pixel block set in the northeast direction), gradient WN is a northwest Gradient (Gradient value corresponding to a pixel block set in the northwest direction), and based on this, the maximum Gradient value can be expressed as Graident max=MAX(GraidentH,GraidentV,GraidentEN,GraidentWN.
If the maximum gradient value is Gradient H, the target pixel block is selected from pixel block set 4 in the horizontal direction: when P5 > P4, the target pixel block is Block 4; otherwise, it is Block 5.
If the maximum gradient value is Gradient V, the target pixel block is selected from pixel block set 2 in the vertical direction: when P2 > P7, the target pixel block is Block 7; otherwise, it is Block 2.
If the maximum gradient value is Gradient EN, the target pixel block is selected from pixel block set 3 in the northeast direction: when P3 > P6, the target pixel block is Block 6; otherwise, it is Block 3.
If the maximum gradient value is Gradient WN, the target pixel block is selected from pixel block set 1 in the northwest direction: when P1 > P8, the target pixel block is Block 8; otherwise, it is Block 1.
So far, the target pixel block corresponding to the pixel point to be corrected is successfully determined from all the pixel blocks.
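The gradient comparison and direction selection of step 3041 can be sketched as follows, given the eight block means P1..P8 already computed; the function name and the dict representation are illustrative.

```python
def select_target_block(block_means):
    """Pick the migration-source block from the eight block means
    P1..P8 (a dict keyed 1..8): take the direction pair with the
    largest gradient, then the block with the smaller mean in that
    pair, per step 3041's selection rules."""
    pairs = {                # direction -> (fallback block, other block)
        "WN": (1, 8),        # Gradient WN = |P1 - P8|
        "V":  (2, 7),        # Gradient V  = |P2 - P7|
        "EN": (3, 6),        # Gradient EN = |P3 - P6|
        "H":  (5, 4),        # Gradient H  = |P5 - P4|
    }
    a, b = max(pairs.values(),
               key=lambda ab: abs(block_means[ab[0]] - block_means[ab[1]]))
    # The block with the smaller mean is treated as most similar to the
    # (dark) dead-pixel region, matching "when Pa > Pb pick Block b".
    return a if block_means[a] <= block_means[b] else b
```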
And step 3042, pixel interpolation fusion. For example, when correcting the pixel value of the pixel to be corrected, the second image area may be obtained from the original image, and the pixel value of each pixel in the second image area may be corrected based on the pixel feature corresponding to the target pixel block, that is, the second image area may be understood as a bad pixel block, and the pixel value of each pixel in the bad pixel block may be corrected.
The second image area is used for determining the pixel points to be corrected, the first image area is used for determining the target pixel block (see step 3041 for determining the target pixel block based on the first image area), and the target pixel block is used for correcting the pixel points of the second image area, that is, correcting the pixel value of each pixel point in the second image area based on the pixel characteristics corresponding to the target pixel block.
The central pixel point of the second image area is the pixel point to be corrected, the size of the second image area is n×n, the value of n is configured empirically, and n may be smaller than m, which is not limited. The value of n may be between a minimum value and a maximum value, the minimum value may be empirically configured, such as 2, 3, etc., the maximum value may be empirically configured, such as 4, 5, etc., for example, taking the minimum value of 3 and the maximum value of 5 as examples, n is greater than or equal to 3, and n is less than or equal to 5, such as n may be 3, n may be 4, and n may be 5.
The size (n x n) of the second image region may be determined based on a target gain value corresponding to the original image. For example, assuming that n has K size values, all gain values are divided into K gain value intervals, gain value interval 1 corresponds to the 1 st size value of n, and so on. After obtaining a target gain value corresponding to the original image, determining a target gain value section to which the target gain value belongs, and taking a size value corresponding to the target gain value section as the size of the second image area.
The size (n x n) of the second image region may be determined based on a target gain value corresponding to the original image. Illustratively, assuming that n has K size values, all gain values are divided into K gain value intervals. The first mapping relationship may be preconfigured, where the first mapping relationship includes a correspondence between a gain value interval and an intensity control parameter. A third mapping relationship may be preconfigured, where the third mapping relationship includes a correspondence between the intensity control parameter and the second size, and when the intensity control parameter is larger, the second size corresponding to the intensity control parameter is larger. The third mapping relationship may be a mapping table, a mapping function, a mapping curve, which is not limited thereto. The method comprises the steps of determining a target gain value interval to which a target gain value belongs, obtaining a target intensity control parameter corresponding to the target gain value interval by inquiring a first mapping relation, and obtaining a second target size corresponding to the target intensity control parameter by inquiring a third mapping relation, wherein the second target size is the size of a second image area, and the second target size of the second image area is smaller than the first target size of the first image area.
For example, assuming that n has 3 size values (3, 4, and 5), all gain values are divided into gain value interval 1, gain value interval 2, and gain value interval 3. The first mapping relationship may include the correspondence between gain value interval 1 and intensity control parameter 1, between gain value interval 2 and intensity control parameter 2, and between gain value interval 3 and intensity control parameter 3; the third mapping relationship may include the correspondence between intensity control parameter 1 and size value 3 (i.e., the second size), between intensity control parameter 2 and size value 4, and between intensity control parameter 3 and size value 5. After the target gain value is obtained, if it belongs to gain value interval 1, intensity control parameter 1 is obtained by querying the first mapping relationship. Size value 3 is then obtained by querying the third mapping relationship and taken as the size of the second image area, that is, the second image area is a 3×3 image area.
For a pixel to be corrected in the original image, a second image area with the pixel to be corrected as a central pixel may be obtained from the original image based on a size (n×n) of the second image area.
For example, the pixel characteristic corresponding to the target pixel block may be determined; the pixel characteristic corresponding to the target pixel block may be an average value, a median value, a maximum value, or a minimum value of the pixel values of all the pixel points in the target pixel block. Of course, these are only a few examples, and the pixel characteristic is not limited thereto; the median value of all the pixel points in the target pixel block is taken as an example below. Assuming that the original image is an image in Bayer format, the target pixel block (denoted Block_i) may include pixel values of the B channel, the Gb channel, the Gr channel, and the R channel. Based on this, the median value B_val of the pixel values of all B-channel pixels within the target pixel block may be calculated as B_val = median(Block_i(B)); the median value Gb_val of the pixel values of all Gb-channel pixels may be calculated as Gb_val = median(Block_i(Gb)); the median value Gr_val of the pixel values of all Gr-channel pixels may be calculated as Gr_val = median(Block_i(Gr)); and the median value R_val of the pixel values of all R-channel pixels may be calculated as R_val = median(Block_i(R)).
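The per-channel median computation can be sketched as follows; the representation of the pixel block as a per-channel mapping of value lists is an assumption for illustration:

```python
import statistics

def channel_medians(block_channels):
    """Compute B_val, Gb_val, Gr_val, R_val for a Bayer-format target pixel block.

    block_channels maps a channel name ('B', 'Gb', 'Gr', 'R') to the list of
    pixel values of that channel inside the target pixel block, i.e. the
    Block_i(B), Block_i(Gb), etc. of the text above.
    """
    return {ch: statistics.median(vals) for ch, vals in block_channels.items()}
```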
For example, after obtaining the pixel feature corresponding to the target pixel block, for example, b_val, gb_val, gr_val, r_val, and the like, the pixel value of each pixel in the second image area may be corrected based on the pixel feature corresponding to the target pixel block, for example, for each pixel in the second image area, the corrected pixel value of the pixel is determined based on the pixel feature corresponding to the target pixel block and the pixel value of the pixel.
For example, after determining the pixel to be corrected, a second image area corresponding to the pixel to be corrected may be obtained from the original image, and the second image area may be used as a dead pixel block, and the pixel value of each pixel in the dead pixel block may be corrected, so that the dead pixel block is corrected by using an area correction method.
Assuming that the coordinates of the pixel to be corrected are (i, j) and the size of the second image area is n×n (n being odd), the coordinate range of the second image area may be from (i-(n-1)/2, j-(n-1)/2) to (i+(n-1)/2, j+(n-1)/2). Each pixel point within the coordinate range may be corrected, and these pixel points include the pixel point to be corrected (i, j).
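The coordinate range of the n×n second image area centred on the pixel to be corrected can be sketched as below; an odd n is assumed so that the centre pixel is exact, and the handling of even n is not reproduced here:

```python
def second_region_range(i, j, n):
    """Return the inclusive corner coordinates ((top, left), (bottom, right))
    of the n x n second image area centred on pixel (i, j). Assumes odd n."""
    half = (n - 1) // 2
    return (i - half, j - half), (i + half, j + half)
```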
For example, for each pixel point in the second image region, the corrected pixel value of that pixel point may be determined using a correction formula in which P' represents the corrected pixel value of the pixel point, P represents the pixel value of the pixel point before correction, that is, the pixel value corresponding to the pixel point in the original image, Val represents the pixel characteristic corresponding to the target pixel block (such as the median value of all the pixel points), and 255 represents the maximum pixel value, which may also be set to other values.
For example, for each pixel in the second image area: if the pixel is a B-channel pixel, its corrected pixel value may be determined using the correction formula with Val = B_val, where B_val represents the median value of the pixel values of all B-channel pixels within the target pixel block; if the pixel is a Gb-channel pixel, the formula is applied with Val = Gb_val, where Gb_val represents the median value of the pixel values of all Gb-channel pixels within the target pixel block; if the pixel is a Gr-channel pixel, the formula is applied with Val = Gr_val, where Gr_val represents the median value of the pixel values of all Gr-channel pixels within the target pixel block; and if the pixel is an R-channel pixel, the formula is applied with Val = R_val, where R_val represents the median value of the pixel values of all R-channel pixels within the target pixel block.
Thus, the correction of the pixel to be corrected is completed, that is, the pixel to be corrected can be used as the central pixel of the dead pixel block (i.e., the second image area), and the correction process of the dead pixel block is completed.
After the correction of the pixel to be corrected is completed, traversing the next pixel of the pixel to be corrected, executing step 303 for the next pixel, and repeating steps 303 and 304 until the traversal of the original image is completed.
Step 305, outputting the corrected image. For example, for each pixel in the original image, if the pixel is corrected, the corrected image includes the corrected pixel value of the pixel, and if the pixel is not corrected, the corrected image includes the original pixel value of the pixel. Referring to fig. 6A, an example of an original image before correction is shown, and referring to fig. 6B, an example of an image after correction is shown.
As can be seen from the above technical solutions, in the embodiments of the present application, the first image area is divided into K pixel blocks, a target pixel block is selected from the K pixel blocks based on the pixel characteristics corresponding to each pixel block, and the pixel value of the pixel point to be corrected is corrected based on the pixel characteristics corresponding to the target pixel block, so as to obtain a corrected image. By correcting the pixel value of the pixel point to be corrected (namely, the dead pixel), the image quality and definition can be improved; image details are not lost or weakened, the image integrity is relatively good, the problem of serious loss of detail information is avoided, and the detail restoration degree and integrity of the image are improved. The method adopts gain linkage to address the risk of dead-pixel-block area expansion caused by changes in sensor gain, and proposes pixel migration and interpolation fusion to address inaccurate interpolation data and the image layering caused by single-source data interpolation.
Based on the same application concept as the above method, the embodiment of the application provides an image correction method, which may include acquiring a first image area from an original image, where the first image area includes a pixel to be corrected and an overexposed image area, or the first image area includes a pixel to be corrected, and the first image area is connected to the overexposed image area. The process of acquiring the first image area may refer to step 301, step 302 and step 303, and will not be repeated here.
A target pixel block is extracted from the first image region, wherein the target pixel block is any pixel block in the first image region, and the target pixel block is related to a gradient value of the pixel block and a pixel value of the pixel block. For example, a block pixel value of each pixel block in the first image area is obtained, wherein the block pixel value is represented by any one of an average value, a median value, a maximum value and a minimum value, gradient values in a horizontal direction, a vertical direction and a diagonal direction are respectively obtained according to the block pixel values, a pixel block corresponding to the minimum block pixel value in the direction of the maximum gradient value is taken as a target pixel block, and the target pixel block is the pixel block with the greatest similarity with the pixel point to be corrected. The process of obtaining the target pixel block may refer to step 3041, and the detailed description is not repeated here.
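The selection rule above (take the direction with the maximum gradient, then the block with the smaller block pixel value in that direction) can be sketched as follows. The grouping of blocks into direction pairs is an assumption for illustration; in the text, the two blocks of a pair are those whose centres are collinear with the pixel to be corrected:

```python
def select_target_block(direction_pairs):
    """Pick the target pixel block: find the direction whose two block pixel
    values differ the most (maximum gradient), then return that direction and
    the smaller of its two block pixel values (the block most similar to the
    dead pixel's true background).

    direction_pairs maps a direction name, e.g. 'horizontal', 'vertical',
    'diagonal', to a pair of block pixel values."""
    best = max(direction_pairs,
               key=lambda d: abs(direction_pairs[d][0] - direction_pairs[d][1]))
    return best, min(direction_pairs[best])
```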
And correcting the pixel value of the pixel point to be corrected based on the pixel characteristics of the target pixel block to obtain a corrected pixel value. The method comprises the steps of obtaining a block correction size, determining a coordinate range of a region to be corrected based on the block correction size and pixel points to be corrected, correcting pixel values of all pixels in the coordinate range of the region to be corrected based on pixel characteristics of a target pixel block to obtain corrected pixel values, wherein the pixel characteristics of the target pixel block are represented by any one of average values, median values, maximum values and minimum values of all the pixel points in the target pixel block. The process of correcting the pixel value of the pixel point to be corrected can refer to step 3042, and the detailed description is not repeated here. The region coordinate range to be corrected may be the second image region of the above embodiment.
A corrected image is generated based on the corrected pixel values, the process may be seen in step 305.
Based on the same application concept as the above method, an image correction device is provided in an embodiment of the present application, and referring to fig. 7, the image correction device is a schematic structural diagram, and the device may include:
The acquisition module 71 is configured to acquire a pixel to be corrected in an original image, and acquire a first image area corresponding to the pixel to be corrected from the original image, wherein a center pixel of the first image area is the pixel to be corrected;
The processing module 72 is configured to select a target pixel block from the K pixel blocks based on a pixel feature corresponding to each pixel block, correct a pixel value of the pixel point to be corrected based on a pixel feature corresponding to the target pixel block, and obtain a corrected pixel value of the pixel point to be corrected, where the target pixel block is a pixel block with a maximum similarity to the pixel point to be corrected;
a generating module 73, configured to generate a corrected image based on the corrected pixel values of the pixel points to be corrected.
The processing module 72, when selecting a target pixel block from the K pixel blocks based on the pixel characteristic corresponding to each pixel block, is specifically configured to: determine a gradient value corresponding to a pixel block set based on the pixel characteristics corresponding to the two pixel blocks in the pixel block set, wherein the pixel characteristic corresponding to a pixel block is an average value, a median value, a maximum value, or a minimum value of the pixel values of all the pixel points in the pixel block; and, based on the gradient value corresponding to each pixel block set, select the pixel block with the smaller pixel characteristic in the pixel block set corresponding to the maximum gradient value as the target pixel block.
The processing module 72, when correcting the pixel value of the pixel to be corrected based on the pixel feature corresponding to the target pixel block to obtain the corrected pixel value of the pixel to be corrected, is specifically configured to: obtain a second image area from the original image, where the size of the second image area is smaller than the size of the first image area; the second image area is used to determine the pixel points to be corrected, the first image area is used to determine the target pixel block, and the target pixel block is used to correct the pixel points of the second image area; and, for each pixel in the second image area, correct the pixel value of the pixel based on the pixel feature corresponding to the target pixel block to obtain a corrected pixel value of the pixel, where the pixel feature corresponding to the target pixel block is an average value, a median value, a maximum value, or a minimum value of the pixel values of all the pixels in the target pixel block.
The size of the first image area is determined based on a target gain value corresponding to the original image, wherein the target gain value is a gain value adopted when the original image is acquired through a sensor, and the size of the second image area is determined based on the target gain value.
The obtaining module 71 is further configured to: determine a target gain value interval to which the target gain value belongs; obtain a target intensity control parameter corresponding to the target gain value interval by querying a first mapping relationship, where the first mapping relationship includes a correspondence between gain value intervals and intensity control parameters, and the larger the gain value interval, the larger the intensity control parameter corresponding to the gain value interval; obtain, by querying a second mapping relationship, a first target size corresponding to the target intensity control parameter, where the first target size is the size of the first image area; and obtain, by querying a third mapping relationship, a second target size corresponding to the target intensity control parameter, where the second target size is the size of the second image area and is smaller than the first target size. The second mapping relationship includes a correspondence between intensity control parameters and first sizes, where the larger the intensity control parameter, the larger the first size corresponding to the intensity control parameter; the third mapping relationship includes a correspondence between intensity control parameters and second sizes, where the larger the intensity control parameter, the larger the second size corresponding to the intensity control parameter.
The obtaining module 71 is specifically configured to: for a pixel in the original image, obtain a first image area corresponding to the pixel from the original image, select adjacent pixels corresponding to the pixel from the first image area, and determine an upper limit pixel value and a lower limit pixel value corresponding to the pixel based on the pixel values of the adjacent pixels; if the pixel value of the pixel lies between the upper limit pixel value and the lower limit pixel value, determine that the pixel is not a pixel to be corrected; and if the pixel value of the pixel does not lie between the upper limit pixel value and the lower limit pixel value, determine that the pixel is a pixel to be corrected, or determine, based on an auxiliary feature corresponding to the first image area, that the pixel is not a pixel to be corrected or is a pixel to be corrected.
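The upper/lower-limit test performed by the obtaining module can be sketched as below. Taking the limits as the minimum and maximum of the neighbouring pixel values is an assumption for illustration; the embodiment leaves the exact derivation of the limits open:

```python
def is_candidate_dead_pixel(pixel_value, neighbor_values):
    """Return True if the pixel value falls outside the [lower, upper] limits
    derived from its neighbours, i.e. it is a candidate pixel to be corrected.
    The limits here are assumed to be the neighbours' min and max."""
    lower, upper = min(neighbor_values), max(neighbor_values)
    return not (lower <= pixel_value <= upper)
```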
The obtaining module 71, when determining, based on the auxiliary feature corresponding to the first image area, that the pixel is not a pixel to be corrected or is a pixel to be corrected, where the auxiliary feature includes the pixel value of each pixel in the first image area, is specifically configured to: determine that the pixel is a pixel to be corrected if the pixel value of the pixel is smaller than a first threshold and an overexposure area exists in the first image area; and determine that the pixel is not a pixel to be corrected if the pixel value of the pixel is not smaller than the first threshold and/or no overexposure area exists in the first image area. If the pixel value of every pixel in the first image area is smaller than a second threshold, the second threshold being greater than the first threshold, no overexposure area exists in the first image area; if the pixel value of at least one pixel in the first image area is not smaller than the second threshold, an overexposure area exists in the first image area.
Based on the same application concept as the above method, an embodiment of the present application provides an image correction device, which includes an acquisition module, an extraction module, a correction module, and a generation module. The acquisition module is used for acquiring a first image area from an original image, where the first image area includes a pixel point to be corrected and an overexposed image area, or the first image area includes the pixel point to be corrected and is connected to the overexposed image area. The extraction module is used for extracting a target pixel block from the first image area, where the target pixel block is any pixel block in the first image area, and the target pixel block is related to a gradient value of a pixel block and a pixel value of the pixel block. The correction module is used for correcting the pixel value of the pixel point to be corrected based on the pixel characteristic of the target pixel block to obtain a corrected pixel value. The generation module is used for generating a corrected image based on the corrected pixel value.
The extraction module is specifically configured to obtain a block pixel value of each pixel block in the first image area when extracting a target pixel block from the first image area, where the block pixel value is represented by any one of an average value, a median value, a maximum value and a minimum value, obtain gradient values in a horizontal direction, a vertical direction and a diagonal direction according to the block pixel value, and use a pixel block corresponding to the minimum block pixel value in a direction of the maximum gradient value as the target pixel block.
The correction module, when correcting the pixel value of the pixel point to be corrected based on the pixel characteristic of the target pixel block to obtain the corrected pixel value, is specifically used for: obtaining a block correction size; determining a coordinate range of a region to be corrected based on the block correction size and the pixel point to be corrected; and correcting the pixel values of all pixels within the coordinate range of the region to be corrected based on the pixel characteristic of the target pixel block to obtain the corrected pixel values, wherein the pixel characteristic of the target pixel block is represented by any one of an average value, a median value, a maximum value, and a minimum value of the pixel points in the target pixel block.
Based on the same application concept as the above method, an electronic device is provided in an embodiment of the present application, and referring to fig. 8, the electronic device includes a processor 81 and a machine-readable storage medium 82, where the machine-readable storage medium 82 stores machine executable instructions that can be executed by the processor 81, and the processor 81 is configured to execute the machine executable instructions to implement the image correction method disclosed in the above example of the present application.
Based on the same application concept as the above method, the embodiment of the present application further provides a machine-readable storage medium, where a plurality of computer instructions are stored, where the computer instructions can implement the image correction method disclosed in the above example of the present application when the computer instructions are executed by a processor.
Wherein the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid state disk, any type of storage disk (e.g., an optical disk, a DVD, etc.), a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer entity or by an article of manufacture having some functionality. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (10)

1.一种图像校正方法,其特征在于,所述方法包括:1. An image correction method, characterized in that the method comprises: 获取原始图像中的待校正像素点,并从所述原始图像中获取待校正像素点对应的第一图像区域,所述第一图像区域的中心像素点为所述待校正像素点;其中,所述待校正像素点是属于坏点块的异常像素点;Obtaining a pixel to be corrected in an original image, and obtaining a first image region corresponding to the pixel to be corrected from the original image, wherein a central pixel of the first image region is the pixel to be corrected; wherein the pixel to be corrected is an abnormal pixel belonging to a bad pixel block; 将所述第一图像区域划分为K个像素块,K为大于1的正整数;Dividing the first image area into K pixel blocks, where K is a positive integer greater than 1; 基于每个像素块对应的像素特征从所述K个像素块中选取目标像素块;其中,所述目标像素块是与所述待校正像素点相似性最大的像素块;Selecting a target pixel block from the K pixel blocks based on pixel features corresponding to each pixel block; wherein the target pixel block is the pixel block having the greatest similarity to the pixel point to be corrected; 基于所述目标像素块对应的像素特征对所述待校正像素点的像素值进行校正,得到所述待校正像素点的校正后像素值;Correcting the pixel value of the pixel to be corrected based on the pixel feature corresponding to the target pixel block to obtain a corrected pixel value of the pixel to be corrected; 基于所述待校正像素点的校正后像素值生成校正后图像;generating a corrected image based on the corrected pixel values of the pixels to be corrected; 其中,K个像素块组成多个像素块集合,像素块集合内两个像素块的中心像素点连线经过所述待校正像素点;所述基于每个像素块对应的像素特征从所述K个像素块中选取目标像素块,包括:The K pixel blocks form a plurality of pixel block sets, and a line connecting central pixel points of two pixel blocks in the pixel block set passes through the pixel point to be corrected; and selecting a target pixel block from the K pixel blocks based on the pixel features corresponding to each pixel block includes: 基于像素块集合内的两个像素块分别对应的像素特征,确定该像素块集合对应的梯度值;其中,像素块对应的像素特征为该像素块内的所有像素点的像素值的平均值、或中值、或最大值,或最小值;Determine a gradient value corresponding to the pixel block set based on pixel features corresponding to two pixel blocks in the pixel block set; wherein the pixel feature corresponding to the pixel block is an 
average value, a median value, a maximum value, or a minimum value of pixel values of all pixels in the pixel block; 基于每个像素块集合对应的梯度值,将最大梯度值对应的像素块集合中的像素特征更小的像素块选取为所述目标像素块。Based on the gradient value corresponding to each pixel block set, a pixel block with a smaller pixel feature in the pixel block set corresponding to the maximum gradient value is selected as the target pixel block. 2.根据权利要求1所述的方法,其特征在于,2. The method according to claim 1, characterized in that 所述基于所述目标像素块对应的像素特征对所述待校正像素点的像素值进行校正,得到所述待校正像素点的校正后像素值,包括:Correcting the pixel value of the pixel to be corrected based on the pixel feature corresponding to the target pixel block to obtain the corrected pixel value of the pixel to be corrected includes: 从所述原始图像中获取第二图像区域,所述第二图像区域的中心像素点为所述待校正像素点,所述第二图像区域的尺寸小于所述第一图像区域的尺寸;其中,所述第二图像区域用于确定需要校正的像素点,所述第一图像区域用于确定目标像素块,且目标像素块用于对所述第二图像区域的像素点进行校正;Acquire a second image area from the original image, wherein a central pixel point of the second image area is the pixel point to be corrected, and a size of the second image area is smaller than a size of the first image area; wherein the second image area is used to determine the pixel points to be corrected, and the first image area is used to determine a target pixel block, and the target pixel block is used to correct the pixel points of the second image area; 针对所述第二图像区域内每个像素点,基于所述目标像素块对应的像素特征对该像素点的像素值进行校正,得到该像素点的校正后像素值;For each pixel point in the second image area, correcting the pixel value of the pixel point based on the pixel feature corresponding to the target pixel block to obtain a corrected pixel value of the pixel point; 其中,所述目标像素块对应的像素特征为所述目标像素块内的所有像素点的像素值的平均值、或中值、或最大值,或最小值。The pixel feature corresponding to the target pixel block is the average value, median value, maximum value, or minimum value of the pixel values of all pixels in the target pixel block. 3.根据权利要求2所述的方法,其特征在于,3. 
The method according to claim 2, characterized in that 所述第一图像区域的尺寸是基于所述原始图像对应的目标增益值确定;其中,所述目标增益值是通过传感器采集所述原始图像时采用的增益值;The size of the first image area is determined based on a target gain value corresponding to the original image; wherein the target gain value is a gain value used when the original image is acquired by a sensor; 所述第二图像区域的尺寸是基于所述目标增益值确定。The size of the second image area is determined based on the target gain value. 4.根据权利要求3所述的方法,其特征在于,所述方法还包括:4. The method according to claim 3, further comprising: 确定所述目标增益值所属的目标增益值区间;determining a target gain value interval to which the target gain value belongs; 通过查询第一映射关系,得到所述目标增益值区间对应的目标强度控制参数;其中,所述第一映射关系包括增益值区间与强度控制参数的对应关系,在增益值区间越大时,该增益值区间对应的强度控制参数越大;Obtaining a target intensity control parameter corresponding to the target gain value interval by querying a first mapping relationship; wherein the first mapping relationship includes a correspondence between the gain value interval and the intensity control parameter, and the larger the gain value interval, the larger the intensity control parameter corresponding to the gain value interval; 通过查询第二映射关系,得到所述目标强度控制参数对应的第一目标尺寸,所述第一目标尺寸为所述第一图像区域的尺寸;通过查询第三映射关系,得到所述目标强度控制参数对应的第二目标尺寸,所述第二目标尺寸为所述第二图像区域的尺寸,所述第二目标尺寸小于所述第一目标尺寸;By querying the second mapping relationship, a first target size corresponding to the target intensity control parameter is obtained, where the first target size is the size of the first image area; and by querying the third mapping relationship, a second target size corresponding to the target intensity control parameter is obtained, where the second target size is the size of the second image area, and the second target size is smaller than the first target size. 
其中,所述第二映射关系包括强度控制参数与第一尺寸的对应关系,在强度控制参数越大时,该强度控制参数对应的第一尺寸越大;The second mapping relationship includes a correspondence between an intensity control parameter and a first size, and the larger the intensity control parameter is, the larger the first size corresponding to the intensity control parameter is; 所述第三映射关系包括强度控制参数与第二尺寸的对应关系,在强度控制参数越大时,该强度控制参数对应的第二尺寸越大。The third mapping relationship includes a correspondence between the intensity control parameter and the second size. When the intensity control parameter is larger, the second size corresponding to the intensity control parameter is larger. 5.根据权利要求1-4任一项所述的方法,其特征在于,5. The method according to any one of claims 1 to 4, characterized in that 所述获取原始图像中的待校正像素点,包括:The step of obtaining the pixel points to be corrected in the original image includes: 针对所述原始图像中的像素点,从所述原始图像中获取该像素点对应的第一图像区域,从第一图像区域中选取该像素点对应的相邻像素点,基于所述相邻像素点的像素值确定该像素点对应的上限像素值和下限像素值;For a pixel point in the original image, obtain a first image region corresponding to the pixel point from the original image, select adjacent pixels corresponding to the pixel point from the first image region, and determine an upper limit pixel value and a lower limit pixel value corresponding to the pixel point based on pixel values of the adjacent pixels; 若该像素点的像素值位于所述上限像素值与所述下限像素值之间,则确定该像素点不为待校正像素点;若该像素点的像素值不位于所述上限像素值与所述下限像素值之间,则确定该像素点为待校正像素点,或者,基于第一图像区域对应的辅助特征确定该像素点不为待校正像素点或为待校正像素点。If the pixel value of the pixel point is between the upper limit pixel value and the lower limit pixel value, it is determined that the pixel point is not the pixel point to be corrected; if the pixel value of the pixel point is not between the upper limit pixel value and the lower limit pixel value, it is determined that the pixel point is the pixel point to be corrected, or, based on the auxiliary features corresponding to the first image area, it is determined that the pixel point is not the pixel point to be corrected or is the pixel point to be corrected. 
6.根据权利要求5所述的方法,其特征在于,若所述辅助特征包括第一图像区域内每个像素点的像素值,所述基于第一图像区域对应的辅助特征确定该像素点不为待校正像素点或为待校正像素点,包括:6. The method according to claim 5, wherein if the auxiliary feature includes a pixel value of each pixel in the first image area, determining whether the pixel is a pixel to be corrected or not based on the auxiliary feature corresponding to the first image area comprises: 若该像素点的像素值小于第一阈值,且第一图像区域内存在过曝区域,则确定该像素点为待校正像素点;若该像素点的像素值不小于第一阈值,和/或,第一图像区域内不存在过曝区域,则确定该像素点不为待校正像素点;If the pixel value of the pixel point is less than a first threshold value and there is an overexposed area in the first image area, the pixel point is determined to be a pixel point to be corrected; if the pixel value of the pixel point is not less than the first threshold value and/or there is no overexposed area in the first image area, the pixel point is determined not to be a pixel point to be corrected; 其中,若第一图像区域内每个像素点的像素值均小于第二阈值,第二阈值大于第一阈值,则第一图像区域内不存在过曝区域;若第一图像区域内的至少一个像素点的像素值不小于第二阈值,则第一图像区域内存在过曝区域。Among them, if the pixel value of each pixel point in the first image area is less than the second threshold, and the second threshold is greater than the first threshold, then there is no overexposed area in the first image area; if the pixel value of at least one pixel point in the first image area is not less than the second threshold, then there is an overexposed area in the first image area. 7.一种图像校正装置,其特征在于,所述装置包括:7. 
An image correction device, comprising:

an acquisition module, configured to obtain a pixel to be corrected in an original image, obtain from the original image a first image region whose center pixel is the pixel to be corrected, and divide the first image region into K pixel blocks, K being a positive integer greater than 1, wherein the pixel to be corrected is an abnormal pixel belonging to a bad-pixel block;

a processing module, configured to select a target pixel block from the K pixel blocks based on the pixel feature of each pixel block, and to correct the pixel value of the pixel to be corrected based on the pixel feature of the target pixel block to obtain a corrected pixel value, wherein the target pixel block is the pixel block most similar to the pixel to be corrected;

a generation module, configured to generate a corrected image based on the corrected pixel value of the pixel to be corrected;

wherein the K pixel blocks form a plurality of pixel-block sets, and the line connecting the center pixels of the two pixel blocks in a set passes through the pixel to be corrected; when selecting the target pixel block from the K pixel blocks based on the pixel feature of each pixel block, the processing module is specifically
used to: determine the gradient value of each pixel-block set based on the pixel features of its two pixel blocks, wherein the pixel feature of a pixel block is the average, median, maximum, or minimum of the pixel values of all pixels in the block; and, based on the gradient value of each set, select as the target pixel block the pixel block with the smaller pixel feature in the set having the largest gradient value.

8. An image correction method, comprising:

obtaining a first image region from an original image, wherein the first image region contains a pixel to be corrected and an overexposed image region, or the first image region contains a pixel to be corrected and adjoins the overexposed image region;

extracting a target pixel block from the first image region, wherein the target pixel block is any pixel block in the first image region and is determined from the gradient values and pixel values of the pixel blocks;

correcting the pixel value of the pixel to be corrected based on the pixel feature of the target pixel block to obtain a corrected pixel value;

generating a corrected image based on the corrected pixel value;

wherein extracting the target pixel block from the first image region comprises:

obtaining the block pixel value of each pixel block in the first image region, wherein the block pixel value is represented by any one of the average, median, maximum, and minimum;
obtaining gradient values in the horizontal, vertical, and diagonal directions from the block pixel values; and taking, as the target pixel block, the pixel block with the smallest block pixel value in the direction having the largest gradient value.

9. The method according to claim 8, wherein correcting the pixel value of the pixel to be corrected based on the pixel feature of the target pixel block to obtain the corrected pixel value comprises:

obtaining a block correction size, and determining the coordinate range of the area to be corrected based on the block correction size and the pixel to be corrected;

correcting the pixel values of all pixels within the coordinate range of the area to be corrected based on the pixel feature of the target pixel block to obtain corrected pixel values, wherein the pixel feature of the target pixel block is represented by any one of the average, median, maximum, and minimum of the pixels in the target pixel block.

10. An electronic device, comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being configured to execute the machine-executable instructions to implement the method of any one of claims 1-6, or to implement the method of any one of claims 8-9.
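The block-selection rule of claims 7-8 (largest gradient through the center, then the smaller block feature on that direction) can be sketched as below; the 3x3 grid of blocks and the mean as the block feature are assumptions, since the claims allow several variants. Claim 9 then overwrites the pixels in the to-be-corrected coordinate range with a feature of the selected block, e.g. its mean:

```python
import numpy as np

def select_target_block(region: np.ndarray, block: int = 3) -> np.ndarray:
    """Split a (3*block x 3*block) region into a 3x3 grid of pixel blocks,
    compute gradients between opposing blocks through the center, and return
    the block with the smaller mean on the largest-gradient direction.
    The grid layout and the mean feature are illustrative assumptions."""
    means = np.array([[region[by*block:(by+1)*block,
                             bx*block:(bx+1)*block].mean()
                       for bx in range(3)] for by in range(3)])
    # Opposing block pairs whose connecting line passes through the center
    # block: horizontal, vertical, and the two diagonals.
    pairs = [((1, 0), (1, 2)), ((0, 1), (2, 1)),
             ((0, 0), (2, 2)), ((0, 2), (2, 0))]
    grads = [abs(means[a] - means[b]) for a, b in pairs]
    a, b = pairs[int(np.argmax(grads))]
    by, bx = a if means[a] <= means[b] else b
    return region[by*block:(by+1)*block, bx*block:(bx+1)*block]
```

Choosing the smaller-valued block on the steepest edge biases the correction away from the overexposed side, which matches the stated goal of repairing dark artifacts adjoining overexposed areas.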
CN202310983118.XA 2023-08-04 2023-08-04 Image correction method, device and equipment Active CN117132489B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310983118.XA CN117132489B (en) 2023-08-04 2023-08-04 Image correction method, device and equipment


Publications (2)

Publication Number Publication Date
CN117132489A (en) 2023-11-28
CN117132489B (en) 2025-09-26

Family

ID=88851903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310983118.XA Active CN117132489B (en) 2023-08-04 2023-08-04 Image correction method, device and equipment

Country Status (1)

Country Link
CN (1) CN117132489B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119485052B (en) * 2025-01-10 2025-04-18 元途人工智能(杭州)有限公司 Image dead pixel correction method based on FPGA

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107845081A (en) * 2017-12-20 2018-03-27 成都信息工程大学 A kind of Magnetic Resonance Image Denoising
CN110650334A (en) * 2019-10-29 2020-01-03 昆山锐芯微电子有限公司 Dead pixel detection and correction method and device, storage medium and terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5060643B1 (en) * 2011-08-31 2012-10-31 株式会社東芝 Image processing apparatus and image processing method
CN106454289B (en) * 2016-11-29 2018-01-23 广东欧珀移动通信有限公司 Control method, control device and electronic installation
JP7248042B2 (en) * 2021-01-27 2023-03-29 カシオ計算機株式会社 Image processing device, image processing method and image processing program
CN116017182B (en) * 2022-12-27 2025-09-12 凌云光技术股份有限公司 Bad pixel correction method, device, terminal equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant