CN119402759B - Image signal processing method, system and cradle head equipment - Google Patents
- Publication number
- CN119402759B (application CN202411963649.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- weight
- sub
- processed
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N23/88—Camera processing pipelines for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
- H04N23/83—Camera processing pipelines for controlling camera response irrespective of the scene brightness, e.g. gamma correction, specially adapted for colour signals
- H04N23/84—Camera processing pipelines for processing colour signals
- H04N23/951—Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
- H04N9/69—Circuits for processing colour signals for modifying the colour signals by gamma correction
- H04N9/73—Colour balance circuits, e.g. white balance circuits or colour temperature control
Abstract
The invention discloses an image signal processing method, an image signal processing system and a cradle head device, and relates to the technical field of image processing. The method comprises the steps of obtaining a first image through white balance processing of an image to be processed, obtaining a second image through brightness enhancement of the first image, obtaining a third image and a fourth image through gamma correction and sharpening processing of the second image, and obtaining a final high-quality image through fusion of the third image and the fourth image. The color deviation is reduced through white balance processing, the contrast of the image is improved through brightness enhancement, the underexposure or overexposure phenomenon can be repaired through gamma correction, and the detail level and the definition of the image can be enhanced through sharpening processing. By fusing the gamma corrected and sharpened images, the negative effects of separate processing can be avoided, balancing brightness and detail.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image signal processing method, an image signal processing system, and a cradle head device.
Background
The low image quality at night is a common and complex problem in surveillance systems, caused by several factors. In the absence of natural light, various artificial light sources become the main source of image brightness at night, including street lamps, automotive headlights, billboards and shop windows. These sources are unevenly distributed in space and differ greatly in intensity, so the background illumination of a night image is highly uneven: some local areas are overexposed while others are too dark. The uneven lighting and large color temperature differences give the image an overall color cast and a distorted appearance, and fog or haze at night further lowers the quality of the monitored image, reducing its clarity and usability.
Patent CN118608441A discloses a night-time image enhancement method and device based on color shift correction, which performs image enhancement through a deep learning model. However, this approach requires significant computational resources for training and inference, and it focuses mainly on compensating brightness and color; it does not work effectively on other aspects of the image, such as texture, detail, or noise. Since noise and detail loss are common problems in night surveillance, correcting the color shift alone may not address all defects in the image, in particular noise control and detail enhancement.
Disclosure of Invention
The invention aims to solve the problem of low image quality at night in the background art, and provides an image signal processing method, an image signal processing system and a cradle head device.
In a first aspect of the present invention, there is provided an image signal processing method, the method comprising:
Acquiring an image to be processed, wherein the image to be processed is an RGB image;
Performing white balance processing on the image to be processed to obtain a first image;
performing brightness enhancement on the first image to obtain a second image;
Gamma correction and sharpening are carried out on the second image, so that a third image and a fourth image are obtained respectively;
And fusing the third image and the fourth image to obtain a final high-quality image.
Optionally, performing white balance processing on the image to be processed to obtain a first image includes:
Dividing the image to be processed into a plurality of sub-blocks;
converting each sub-block into a gray image, and carrying out normalization processing in the block to obtain a weight coefficient matrix of each sub-block;
according to the weight coefficient matrix of each sub-block, calculating to obtain the weighted average value and standard deviation of each RGB channel of each sub-block;
according to the weighted average value and standard deviation of the plurality of sub-blocks, calculating to obtain the weighted intensity of the RGB channel of the image to be processed:
V_C = Σ_{k=1}^{p} (s_k^C / Σ_{r=1}^{p} s_r^C) · u_k^C ;

Wherein C is the channel identifier; k and r are sub-block identifiers and p is the number of sub-blocks; u_k^C is the weighted average of channel C in the k-th sub-block and s_k^C is its standard deviation;
and obtaining a color correction coefficient according to the weighted intensity of each channel:
g_C = (V_R + V_G + V_B) / (3 · V_C) ;
and according to the color correction coefficient, adjusting RGB channel values of the image to be processed to obtain a first image.
Optionally, performing brightness enhancement on the first image to obtain a second image includes:
Converting the first image into HSV space to obtain an original tone component, a saturation component and a brightness component;
performing histogram equalization processing on the original brightness component to obtain an enhanced brightness component;
and combining the enhanced brightness component with the original tone component and the saturation component to obtain a new HSV image, and converting the new HSV image back to the RGB image to obtain a second image.
Optionally, performing gamma correction and sharpening on the second image to obtain a third image and a fourth image respectively includes:
Performing self-adaptive gamma correction on the second image to obtain a third image;
According to a preset Gaussian filter, blurring processing is carried out on the second image, and a blurred image is obtained;
Calculating the difference between the second image and the blurred image, and performing standardization processing to obtain a detail image;
And adding the detail image back to the second image to obtain a fourth image.
Optionally, fusing the third image and the fourth image to obtain a final high-quality image includes:
Carrying out Laplace weight calculation on a target image to obtain a first weight map corresponding to the target image, wherein the target image is either of the third image and the fourth image;
performing saliency weight calculation on the target image to obtain a corresponding second weight map;
obtaining a corresponding fusion weight map according to the first weight map and the second weight map corresponding to the third image and the fourth image:
T_i = W1_i + W2_i ,  W̄_i = T_i / Σ_{k=1}^{2} T_k ;
Wherein k and i are image indices, corresponding to the third image when equal to 1 and to the fourth image when equal to 2; W1_i is the first weight map; W2_i is the second weight map; T_i is a temporary weight map generated in the calculation process;
and carrying out weighted fusion on the third image and the fourth image according to the fusion weight maps to obtain a final high-quality image.
Optionally, performing Laplace weight calculation on the target image to obtain the first weight map corresponding to the target image includes:
Converting the target image into a gray scale image;
performing convolution operation on the gray level image according to a preset Laplacian operator to obtain a Laplacian image;
And taking an absolute value of the Laplacian image to obtain a first weight map.
Optionally, performing saliency weight calculation on the target image to obtain a second weight map corresponding to the target image includes:
Converting the target image into an LAB color space to obtain a fifth image, and calculating the average value of each channel of the fifth image;
According to the difference between each pixel in the fifth image and the full-image mean value, calculating to obtain a second weight image:
W2(x, y) = sqrt( (L(x, y) − L̄)² + (A(x, y) − Ā)² + (B(x, y) − B̄)² ) ;

Wherein (x, y) is the pixel position identifier; W2 is the second weight map; L, A and B are the three channels of the fifth image; L̄, Ā and B̄ are the corresponding channel means.
Optionally, performing weighted fusion on the third image and the fourth image according to the fusion weight maps to obtain a final high-quality image includes:
And according to the fusion weight graph, carrying out multi-scale weighted fusion on the third image and the fourth image by utilizing the Gaussian pyramid and the Laplacian pyramid, so as to obtain a final high-quality image.
In a second aspect of the present invention, there is provided an image signal processing system, the system comprising:
the data acquisition module is used for acquiring an image to be processed, wherein the image to be processed is an RGB image;
The color correction module is used for performing white balance processing on the image to be processed to obtain a first image;
the brightness enhancement module is used for enhancing the brightness of the first image to obtain a second image;
The detail strengthening module is used for carrying out gamma correction and sharpening on the second image to respectively obtain a third image and a fourth image, and fusing the third image and the fourth image to obtain a final target image.
In a third aspect of the present invention, a cradle head device is provided, on which a computer program is embedded, the program being capable of implementing any one of the image signal processing methods provided by the embodiments of the present invention when executed.
The invention has the beneficial effects that:
The invention provides an image signal processing method which comprises the steps of obtaining an image to be processed, wherein the image to be processed is an RGB image, performing white balance processing on the image to be processed to obtain a first image, performing brightness enhancement on the first image to obtain a second image, performing gamma correction and sharpening processing on the second image to respectively obtain a third image and a fourth image, and fusing the third image and the fourth image to obtain a final high-quality image.
The color deviation is reduced through white balance processing, the contrast of the image is improved through brightness enhancement, the underexposure or overexposure phenomenon can be repaired through gamma correction, and the detail level and the definition of the image can be enhanced through sharpening processing. By fusing the gamma corrected and sharpened images, the negative effects of separate processing can be avoided, balancing brightness and detail.
Drawings
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of an image signal processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an image signal processing system according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention provides an image signal processing method. Referring to fig. 1, fig. 1 is a flowchart of an image signal processing method according to an embodiment of the present invention. The method comprises the following steps:
S101, acquiring an image to be processed.
S102, performing white balance processing on the image to be processed to obtain a first image.
S103, performing brightness enhancement on the first image to obtain a second image.
S104, performing gamma correction and sharpening on the second image to obtain a third image and a fourth image, respectively.
S105, fusing the third image and the fourth image to obtain a final high-quality image.
Wherein, the image to be processed is an RGB image.
According to the image signal processing method provided by the embodiment of the invention, color deviation is reduced through white balance processing, the contrast of an image is improved through brightness enhancement, the underexposure or overexposure phenomenon can be repaired through gamma correction, and the detail level and definition of the image can be enhanced through sharpening processing. By fusing the gamma corrected and sharpened images, the negative effects of separate processing can be avoided, balancing brightness and detail.
In one implementation, gamma correction improves the brightness and contrast of the image but may lose some detail, while sharpening enhances those details to make the image clearer. Fusing the two results recovers the detail contributed by sharpening while retaining the benefits of the brightness adjustment.
In one embodiment, step S102 includes:
Step one, dividing the image to be processed into a plurality of sub-blocks.
Step two, converting each sub-block into a gray image, and carrying out normalization processing in the block to obtain a weight coefficient matrix of each sub-block:
w_k(i, j) = G_k(i, j) / Σ_{a=1}^{m} Σ_{b=1}^{n} G_k(a, b) ;

Wherein w_k is the weight coefficient matrix; k is the sub-block identifier; m and n are the size of the sub-block, indicating that the k-th sub-block has m rows and n columns of pixels; G_k is the gray-scale image of the sub-block; i and j are pixel position identifiers.
Step three, according to the weight coefficient matrix of each sub-block, calculating the weighted average value and standard deviation of each RGB channel of each sub-block:

u_k^C = Σ_{i=1}^{m} Σ_{j=1}^{n} w_k(i, j) · I_k^C(i, j) ,  s_k^C = sqrt( Σ_{i=1}^{m} Σ_{j=1}^{n} w_k(i, j) · (I_k^C(i, j) − u_k^C)² ) ;

Wherein C is the channel identifier; u_k^C is the weighted average of channel C of the k-th sub-block; s_k^C is the standard deviation of channel C of the k-th sub-block; I_k^C(i, j) is the channel-C value of the k-th sub-block at position (i, j).
Step four, according to the weighted average value and standard deviation of the plurality of sub-blocks, calculating to obtain the weighted intensity of the RGB channel of the image to be processed:
V_C = Σ_{k=1}^{p} (s_k^C / Σ_{r=1}^{p} s_r^C) · u_k^C ;

Wherein C is the channel identifier; k and r are sub-block identifiers and p is the number of sub-blocks; u_k^C is the weighted average and s_k^C is the standard deviation.
Step five, obtaining a color correction coefficient according to the weighted intensity of each channel:
g_C = (V_R + V_G + V_B) / (3 · V_C) ;
Step six, according to the color correction coefficient, adjusting the RGB channel values of the image to be processed to obtain the first image.
In one implementation, the specific way to convert each sub-block into a grayscale image may be Gray = 0.299·R + 0.587·G + 0.114·B, wherein R, G, B are the red, green, and blue channel values of the sub-block, respectively.
In one implementation, the image may be evenly divided into 12 sub-blocks. Calculating the weighted mean and standard deviation of each sub-block reflects the color and brightness distribution of each local area more accurately. Dividing the image into sub-blocks reduces the error accumulation caused by global processing, and computing the color correction coefficients adaptively from local image characteristics improves the robustness of the algorithm under different image conditions, so that images with complex illumination or tone variation can be handled. Under different shooting environments, the method effectively optimizes color so that image colors remain natural and consistent.
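The block-wise white balance described above can be sketched in numpy as follows. This is a simplified illustration, not the patented formulas: the grayscale weights follow the in-block normalization, but the per-channel weighted intensity is reduced to a plain average over the sub-block means, and a gray-world style gain stands in for the exact color correction coefficient.

```python
import numpy as np

def white_balance(img, grid=(3, 4)):
    """Block-wise white balance sketch for an RGB image in [0, 1].

    Assumptions (not the patented formulas): grayscale weights are
    normalized within each sub-block, the per-channel intensity is a
    plain average of the sub-block weighted means, and a gray-world
    style gain serves as the color correction coefficient.
    """
    h, w, _ = img.shape
    bh, bw = h // grid[0], w // grid[1]
    means = [[], [], []]
    for by in range(grid[0]):
        for bx in range(grid[1]):
            sub = img[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            gray = 0.299 * sub[..., 0] + 0.587 * sub[..., 1] + 0.114 * sub[..., 2]
            wgt = gray / (gray.sum() + 1e-8)        # in-block normalization
            for c in range(3):
                means[c].append((wgt * sub[..., c]).sum())
    v = np.array([np.mean(m) for m in means])       # per-channel intensity
    gains = v.mean() / (v + 1e-8)                   # gray-world style gains
    return np.clip(img * gains, 0.0, 1.0)
```

With grid=(3, 4) the image is evenly divided into the 12 sub-blocks suggested above; a color-cast input comes out with roughly equal channel means.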
In one embodiment, step S103 includes:
Step one, converting the first image into HSV space to obtain an original tone component, a saturation component and a brightness component.
And step two, carrying out histogram equalization processing on the original brightness component to obtain the enhanced brightness component.
And thirdly, combining the enhanced brightness component with the original tone component and the saturation component to obtain a new HSV image, and converting the new HSV image back to the RGB image to obtain a second image.
In one implementation, after converting the image to HSV color space, three components of hue, saturation, and brightness are separated so that the brightness enhancement process of the image can be independent of the adjustment of color. The method avoids color distortion possibly caused by brightness enhancement in RGB space, and ensures that the color is kept natural and accurate.
In one implementation, the luminance distribution can be stretched by histogram equalization of the luminance components, improving detail and visibility in the image.
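The equalization step can be illustrated with a minimal numpy sketch that operates directly on a luminance channel with values in [0, 1]; a full pipeline would first convert RGB to HSV (for example with an image library) and equalize only the V component.

```python
import numpy as np

def equalize_luminance(v, bins=256):
    """Histogram equalization of a luminance channel with values in [0, 1].

    Minimal numpy sketch of the step above; the bin count is an
    assumed setting.
    """
    hist, _ = np.histogram(v, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)  # map CDF to [0, 1]
    idx = np.clip((v * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]                                            # remapped luminance
```

A low-contrast input (values clustered in a narrow band) is stretched across nearly the full [0, 1] range, which is exactly the visibility improvement described above.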
In one embodiment, step S104 includes:
performing self-adaptive gamma correction on the second image to obtain a third image;
sharpening the second image by using a normalized unsharp mask to obtain a fourth image, which includes the following steps:
Step one, according to a preset Gaussian filter G_σ, performing blurring processing on the second image I2 to obtain a blurred image B = G_σ * I2.
Step two, calculating the difference DT = I2 − B between the second image and the blurred image, and performing standardization processing to obtain a detail image DN, wherein the standardization processing may normalize the difference image using a linear stretching process.
Step three, adding the detail image back to the second image to obtain a fourth image I4 = I2 + DN.
In one implementation, the adaptive gamma correction may compute gamma values from the local brightness distribution: lower gamma values are applied in low-brightness regions to increase brightness, and higher gamma values in high-brightness regions to reduce overexposure, optimizing detail in both dark and bright portions.
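As a hedged illustration of this idea (the exact adaptive rule is not specified above), one simple scheme maps the mean brightness of the image to a gamma value between assumed bounds:

```python
import numpy as np

def adaptive_gamma(img, low=0.6, high=1.4):
    """Illustrative adaptive gamma correction for values in [0, 1].

    The mapping below (interpolating gamma between assumed bounds from
    the global mean brightness) is a stand-in for the unspecified rule:
    dark images get gamma < 1 (brightening), bright images gamma > 1.
    """
    mean = float(img.mean())                 # global brightness estimate
    gamma = low + (high - low) * mean        # darker image -> smaller gamma
    return np.clip(img, 0.0, 1.0) ** gamma
```

A locally adaptive variant would compute the mean over sliding windows instead of the whole image, giving each region its own gamma.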
In one implementation, the standard deviation of the Gaussian filter may be set to 20, so that large-scale edge information in the image is captured effectively while fine details are smoothed as far as possible, preventing excessive noise or fine detail from degrading the sharpening effect.
In one implementation, the linear stretch calculation formula is:

DN(x, y) = (DT(x, y) − DT_min) / (DT_max − DT_min) · (b − a) + a ;

Wherein [a, b] is a preset output range, which may for example be set to a = 0 and b = 1. The standardization avoids excessive enhancement or excessive compression of the detail image, ensuring that the enhancement effect is more uniform and natural. DT_min and DT_max are the minimum and maximum pixel values in the detail map DT, and (x, y) is the pixel position identifier.
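The unsharp-masking steps with linear-stretch normalization might be sketched as below; the Gaussian blur is a separable numpy convolution, and the stretch range [lo, hi] is an assumed setting rather than a value from the text.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur of a 2-D array using reflect padding."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    pad = np.pad(img, radius, mode="reflect")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def unsharp(img, sigma=2.0, lo=0.0, hi=0.5):
    """Unsharp masking with linear-stretch normalization of the detail map.

    The stretch range [lo, hi] is an assumed setting; sigma would be 20
    for the large-scale variant described above.
    """
    blurred = gaussian_blur(img, sigma)
    dt = img - blurred                       # raw detail image DT
    span = dt.max() - dt.min()
    if span > 0:
        dn = (dt - dt.min()) / span * (hi - lo) + lo   # stretched detail DN
    else:
        dn = np.zeros_like(dt)               # flat image: nothing to enhance
    return np.clip(img + dn, 0.0, 1.0)
```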
In one embodiment, step S105 includes:
Step one, carrying out Laplace weight calculation on a target image to obtain a first weight map corresponding to the target image, wherein the target image is either of the third image and the fourth image.
Step two, performing saliency weight calculation on the target image to obtain a corresponding second weight map.
Step three, obtaining a corresponding fusion weight map according to the first weight map and the second weight map corresponding to the third image and the fourth image:

T_i = W1_i + W2_i ,  W̄_i = T_i / Σ_{k=1}^{2} T_k ;

Wherein k and i are image indices, corresponding to the third image when equal to 1 and to the fourth image when equal to 2; W1_i is the first weight map; W2_i is the second weight map; T_i is a temporary weight map generated in the calculation process.
Step four, carrying out weighted fusion on the third image and the fourth image according to the fusion weight maps to obtain a final high-quality image.
In one implementation, performing laplace weight calculation on the target image to obtain a first weight map corresponding to the target image includes:
step one, converting the target image into a gray scale image.
And step two, carrying out convolution operation on the gray level image according to a preset Laplacian operator to obtain a Laplacian image.
And thirdly, taking an absolute value of the Laplacian image to obtain a first weight map.
Wherein, the convolution kernel of the Laplace operator may be set as [[0, 1, 0], [1, -4, 1], [0, 1, 0]].
The Laplace operator helps extract high-frequency information in the image, emphasizing edges, details and contours. In the fusion process, the Laplace weight map helps preserve the structural information of the image and avoid detail loss, which is particularly important for portions of the image with important detail, such as object contours and edges.
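The first weight map can be computed directly with the 3x3 kernel given above; this sketch performs the convolution explicitly with reflect padding.

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian_weight(gray):
    """First weight map: absolute Laplacian response of a grayscale image,
    via an explicit 3x3 convolution with reflect padding."""
    h, w = gray.shape
    pad = np.pad(gray, 1, mode="reflect")
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):                   # accumulate shifted, scaled copies
            out += LAPLACIAN[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return np.abs(out)
```

Flat regions produce zero weight while edges produce large values, which is exactly why this map protects structural detail during fusion.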
In one implementation, performing saliency weight calculation on the target image to obtain a second weight map corresponding to the target image includes:
Step one, converting the target image into an LAB color space to obtain a fifth image, and calculating the average value of each channel of the fifth image.
Step two, according to the difference between each pixel in the fifth image and the full-image mean value, calculating to obtain a second weight image:
W2(x, y) = sqrt( (L(x, y) − L̄)² + (A(x, y) − Ā)² + (B(x, y) − B̄)² ) ;

Wherein (x, y) is the pixel position identifier; W2 is the second weight map; L, A and B are the three channels of the fifth image; L̄, Ā and B̄ are the corresponding channel means.
Saliency detection weights image regions by their visual saliency, helping to ensure that the visually salient parts of the image remain prominent after fusion rather than being drowned out by other regions.
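A sketch of the saliency weight follows, with one deliberate simplification: instead of a full sRGB-to-LAB conversion, a rough opponent-color transform stands in for the L, A, B channels, so the numbers differ from a true LAB computation even though the mean-distance structure is the same.

```python
import numpy as np

def saliency_weight(img):
    """Second weight map: distance of each pixel from the per-channel mean.

    Simplification: a rough opponent-color transform stands in for a
    proper sRGB-to-LAB conversion, so values differ from true LAB.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    l = (r + g + b) / 3.0            # lightness stand-in for L
    a = r - g                        # red-green opponent stand-in for A
    bb = (r + g) / 2.0 - b           # yellow-blue opponent stand-in for B
    chans = np.stack([l, a, bb], axis=-1)
    diff = chans - chans.reshape(-1, 3).mean(axis=0)   # full-image means
    return np.sqrt((diff ** 2).sum(axis=-1))
```

Pixels that stand out from the global color statistics (for example a red patch on a gray background) receive larger weights than typical background pixels.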
In one implementation, combining the Laplace weight and the saliency weight identifies and enhances important details and regions of the image more effectively, preserving more detail, clarity, and visual appeal in the fused result.
In one embodiment, performing weighted fusion on the third image and the fourth image according to the fusion weight map, to obtain a final high-quality image includes:
According to the fusion weight maps, carrying out multi-scale weighted fusion on the third image and the fourth image by utilizing a Gaussian pyramid and a Laplacian pyramid to obtain a final high-quality image:

F_l = Σ_{i=1}^{2} G_l(W̄_i) · L_l(I_i) ;

Wherein I_1 represents the third image and I_2 represents the fourth image; G_l(W̄_i) is the l-th level of the Gaussian pyramid decomposition of the fusion weight map; L_l(I_i) is the l-th level of the Laplacian pyramid decomposition of the input image; l denotes the pyramid level. The output image is obtained by upsampling the lowest-resolution level and adding it to the next higher level, repeating until the first level is reached and added in. The number of pyramid levels may be 3.
Through multi-level, multi-scale weighted fusion, the different detail levels of the image (from coarse to fine) are enhanced appropriately, so that each region of the image is processed finely. High-frequency details (such as edges and textures) and low-frequency information (such as illumination and color) are processed and fused separately, giving the image a stronger sense of depth and richer detail, light and shade.
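The multi-scale fusion can be sketched as below. The pyramid construction is deliberately naive (decimation and pixel repetition instead of proper filtered resampling), and the per-pixel weight normalization is an assumption consistent with the two-image case described above.

```python
import numpy as np

def _down(img):
    return img[::2, ::2]                          # naive decimation

def _up(img, shape):
    big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return big[:shape[0], :shape[1]]              # crop to target size

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(_down(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    g = gaussian_pyramid(img, levels)
    return [g[i] - _up(g[i + 1], g[i].shape) for i in range(levels - 1)] + [g[-1]]

def pyramid_fuse(imgs, weights, levels=3):
    """Blend Laplacian pyramids of two inputs with Gaussian pyramids of
    the per-pixel-normalized weight maps, then collapse coarse to fine."""
    total = weights[0] + weights[1]
    wn = [w / np.maximum(total, 1e-8) for w in weights]
    wp = [gaussian_pyramid(w, levels) for w in wn]
    lp = [laplacian_pyramid(im, levels) for im in imgs]
    fused = [sum(wp[i][l] * lp[i][l] for i in range(len(imgs)))
             for l in range(levels)]
    out = fused[-1]
    for l in range(levels - 2, -1, -1):           # collapse coarse -> fine
        out = _up(out, fused[l].shape) + fused[l]
    return out
```

For two constant images, the fused result reduces to the per-pixel weighted average, which is a quick sanity check on the pyramid arithmetic.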
The embodiment of the invention provides an image signal processing system. Referring to fig. 2, fig. 2 is a schematic diagram of an image signal processing system according to an embodiment of the present invention. The system comprises:
and the data acquisition module is used for acquiring the image to be processed.
And the color correction module is used for carrying out white balance processing on the image to be processed to obtain a first image.
And the brightness enhancement module is used for enhancing the brightness of the first image to obtain a second image.
The detail strengthening module is used for carrying out gamma correction and sharpening on the second image to respectively obtain a third image and a fourth image, and fusing the third image and the fourth image to obtain a final target image.
Wherein the image to be processed is an RGB image.
The image signal processing system provided by the embodiment of the invention reduces color deviation through white balance processing, improves the contrast of an image through brightness enhancement, can repair underexposure or overexposure phenomena through gamma correction, and can enhance the detail level and definition of the image through sharpening processing. By fusing the gamma corrected and sharpened images, the negative effects of separate processing can be avoided, balancing brightness and detail.
The embodiment of the invention provides a cradle head device, on which a computer program is embedded, and the program can realize any one of the image signal processing methods provided by the embodiment of the invention when being executed.
The foregoing describes one embodiment of the present invention in detail, but the description is only a preferred embodiment of the present invention and should not be construed as limiting the scope of the invention. All equivalent changes and modifications within the scope of the present invention are intended to be covered by the present invention.
Claims (9)
1. An image signal processing method, characterized in that the method comprises:
Acquiring an image to be processed, wherein the image to be processed is an RGB image;
Performing white balance processing on the image to be processed to obtain a first image;
performing brightness enhancement on the first image to obtain a second image;
Gamma correction and sharpening are carried out on the second image, so that a third image and a fourth image are obtained respectively;
fusing the third image and the fourth image to obtain a final high-quality image;
Performing white balance processing on the image to be processed to obtain a first image, wherein the step of obtaining the first image comprises the following steps of:
Dividing the image to be processed into a plurality of sub-blocks;
Each sub-block is converted into a grayscale image:
Gray = 0.299·R + 0.587·G + 0.114·B ;

wherein R, G, B are respectively the red, green, and blue channel values of the sub-block;
carrying out normalization processing in the block to obtain a weight coefficient matrix of each sub-block;
w_k(i, j) = G_k(i, j) / Σ_{a=1}^{m} Σ_{b=1}^{n} G_k(a, b) ;

Wherein w_k is the weight coefficient matrix; k is the sub-block identifier; m and n are the size of the sub-block, indicating that the k-th sub-block has m rows and n columns of pixels; G_k is the gray-scale image; i and j are pixel position identifiers;
according to the weight coefficient matrix of each sub-block, calculating the weighted average and standard deviation of each RGB channel of each sub-block:
u_{k,C} = Σ_{i=1..m} Σ_{j=1..n} W_k(i,j)·I_{k,C}(i,j);
σ_{k,C} = sqrt( Σ_{i=1..m} Σ_{j=1..n} W_k(i,j)·(I_{k,C}(i,j) − u_{k,C})² );
wherein C ∈ {R, G, B} is the channel index; u_{k,C} is the weighted average of channel C of the kth sub-block; σ_{k,C} is the standard deviation of channel C of the kth sub-block; and I_{k,C}(i,j) is the value of channel C of the kth sub-block at position (i,j);
according to the weighted averages and standard deviations of the plurality of sub-blocks, calculating the weighted intensity of each RGB channel of the image to be processed:
V_C = Σ_{k=1..p} ( σ_{k,C} / Σ_{r=1..p} σ_{r,C} )·u_{k,C};
wherein C is the channel index; V_C is the weighted intensity; k and r are sub-block indices; p is the number of sub-blocks; u is the weighted average; and σ is the standard deviation;
obtaining a color correction coefficient for each channel according to the weighted intensities:
g_C = (V_R + V_G + V_B) / (3·V_C);
and adjusting the RGB channel values of the image to be processed according to the color correction coefficients to obtain the first image.
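As a rough illustration (not the claimed implementation), the block-weighted white balance of claim 1 can be sketched in NumPy as follows; the block size, the luminance coefficients and the std-normalized weighting of the sub-block means are assumptions where the claim's formula images are not reproduced:

```python
import numpy as np

def white_balance(img, block=32):
    """Block-weighted gray-world white balance (sketch of claim 1).

    img: float array (H, W, 3) with RGB values in [0, 1].
    `block` is an assumed sub-block size.
    """
    H, W, _ = img.shape
    means, stds = [], []
    for y in range(0, H, block):
        for x in range(0, W, block):
            sub = img[y:y + block, x:x + block]
            # Grayscale conversion (standard luminance weights, assumed).
            gray = 0.299 * sub[..., 0] + 0.587 * sub[..., 1] + 0.114 * sub[..., 2]
            w = gray / (gray.sum() + 1e-8)              # in-block weight matrix
            u = (w[..., None] * sub).sum(axis=(0, 1))   # weighted channel means
            s = np.sqrt((w[..., None] * (sub - u) ** 2).sum(axis=(0, 1)))
            means.append(u)
            stds.append(s)
    means, stds = np.array(means), np.array(stds)
    # Weighted channel intensity: sub-block means weighted by normalized std
    # (assumed aggregation rule).
    V = (stds / (stds.sum(axis=0) + 1e-8) * means).sum(axis=0)
    gain = V.mean() / (V + 1e-8)   # gray-world-style correction gains
    return np.clip(img * gain, 0.0, 1.0)
```

On an image with a red cast, the red gain falls below 1 and the red channel is attenuated toward the other two.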
2. The method of claim 1, wherein said performing brightness enhancement on said first image to obtain a second image comprises:
converting the first image into HSV space to obtain an original hue component, a saturation component and a brightness component;
performing histogram equalization on the original brightness component to obtain an enhanced brightness component;
and combining the enhanced brightness component with the original hue and saturation components to obtain a new HSV image, and converting the new HSV image back to RGB to obtain the second image.
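The core of claim 2, histogram equalization of the brightness component, can be sketched as follows (a minimal sketch assuming an 8-bit V channel already extracted from HSV; the RGB↔HSV conversions are omitted):

```python
import numpy as np

def equalize_value_channel(v):
    """Histogram-equalize an 8-bit brightness (V) channel (sketch of claim 2).

    v: uint8 array. Hue and saturation are left untouched by claim 2.
    """
    hist = np.bincount(v.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()   # first nonzero CDF value
    # Map each gray level through the normalized CDF.
    lut = np.clip(
        np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255),
        0, 255).astype(np.uint8)
    return lut[v]
```

A narrow-range brightness channel is stretched so that its darkest occupied level maps to 0 and its brightest to 255.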
3. The method of claim 1, wherein performing gamma correction and sharpening on the second image to obtain a third image and a fourth image, respectively, comprises:
performing adaptive gamma correction on the second image to obtain the third image;
blurring the second image with a preset Gaussian filter to obtain a blurred image;
calculating the difference between the second image and the blurred image and normalizing it to obtain a detail image;
and adding the detail image back to the second image to obtain the fourth image.
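The sharpening branch of claim 3 is classic unsharp masking; a minimal single-channel sketch (the sigma and amount parameters are assumptions, and the normalization step is folded into the `amount` factor):

```python
import numpy as np

def unsharp_mask(img, sigma=1.5, amount=1.0):
    """Sharpen by adding back the Gaussian high-pass detail (sketch of claim 3).

    img: float array (H, W) with values in [0, 1].
    """
    # Build a 1-D Gaussian kernel and blur separably (rows, then columns).
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blur)
    detail = img - blur   # high-frequency detail image
    return np.clip(img + amount * detail, 0.0, 1.0)
```

At an intensity step edge, the bright side overshoots and the dark side undershoots, which is the expected halo of unsharp masking.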
4. The method of claim 1, wherein fusing the third image and the fourth image to obtain a final high-quality image comprises:
performing Laplacian weight calculation on a target image to obtain a corresponding first weight map, wherein the target image is either the third image or the fourth image;
performing saliency weight calculation on the target image to obtain a corresponding second weight map;
obtaining a corresponding fusion weight map according to the first weight map and the second weight map of each of the third image and the fourth image:
T_i(x,y) = LW_i(x,y) + SW_i(x,y);
FW_i(x,y) = T_i(x,y) / Σ_{K=1..2} T_K(x,y);
wherein K and i are image indices, which correspond to the third image when equal to 1 and to the fourth image when equal to 2; LW is the first weight map; SW is the second weight map; T is a temporary weight map generated during the calculation; and FW is the normalized fusion weight map;
and performing weighted fusion on the third image and the fourth image according to the fusion weight map to obtain the final high-quality image.
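The weight combination and per-pixel normalization of claim 4 reduce to a few lines (the additive combination of the Laplacian and saliency weights is an assumption where the claim's formula image is not reproduced here):

```python
import numpy as np

def fusion_weights(lw1, sw1, lw2, sw2, eps=1e-8):
    """Combine Laplacian and saliency weights and normalize across the two
    images so the fusion weights sum to 1 per pixel (sketch of claim 4)."""
    t1 = lw1 + sw1   # temporary weight map for the third image
    t2 = lw2 + sw2   # temporary weight map for the fourth image
    total = t1 + t2 + eps
    return t1 / total, t2 / total

def fuse(img1, img2, fw1, fw2):
    """Per-pixel weighted fusion of the two images."""
    return fw1 * img1 + fw2 * img2
```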
5. The image signal processing method according to claim 4, wherein the performing Laplacian weight calculation on the target image to obtain the corresponding first weight map comprises:
converting the target image into a grayscale image;
performing a convolution operation on the grayscale image with a preset Laplacian operator to obtain a Laplacian image;
and taking the absolute value of the Laplacian image to obtain the first weight map.
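Claim 5 can be sketched as follows (the 4-neighbour kernel is an assumption; the claim only requires "a preset Laplacian operator"):

```python
import numpy as np

def laplacian_weight(gray):
    """First weight map: |Laplacian| of the grayscale image (sketch of claim 5).

    gray: float array (H, W). Applies the common 4-neighbour Laplacian
    kernel [[0,1,0],[1,-4,1],[0,1,0]] with edge-replicated borders.
    """
    p = np.pad(gray, 1, mode='edge')
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return np.abs(lap)
```

Flat regions get zero weight; edges and isolated details get large weight, which is what makes this a sharpness measure for fusion.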
6. The image signal processing method according to claim 4, wherein the performing saliency weight calculation on the target image to obtain the corresponding second weight map comprises:
converting the target image into the LAB color space to obtain a fifth image, and calculating the mean of each channel of the fifth image;
according to the difference between each pixel of the fifth image and the whole-image channel means, calculating the second weight map:
SW(x,y) = sqrt( (L(x,y) − mean_L)² + (A(x,y) − mean_A)² + (B(x,y) − mean_B)² );
wherein (x,y) is the pixel position; SW is the second weight map; L, A and B are the channels of the fifth image; and mean_L, mean_A and mean_B are the corresponding whole-image channel means.
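Claim 6 can be sketched as follows (assumes the conversion into LAB space has already been done; only the distance-from-mean step is shown):

```python
import numpy as np

def saliency_weight(lab):
    """Second weight map: per-pixel Euclidean distance from the whole-image
    channel means (sketch of claim 6).

    lab: float array (H, W, 3), assumed to already be in LAB space.
    """
    mean = lab.mean(axis=(0, 1))   # whole-image mean per channel
    return np.sqrt(((lab - mean) ** 2).sum(axis=-1))
```

Pixels whose color stands out from the global average receive the largest weights, so salient regions dominate the later fusion.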
7. The image signal processing method according to claim 4, wherein the performing weighted fusion on the third image and the fourth image according to the fusion weight map to obtain the final high-quality image comprises:
performing multi-scale weighted fusion on the third image and the fourth image with Gaussian and Laplacian pyramids according to the fusion weight map, so as to obtain the final high-quality image.
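The multi-scale fusion of claim 7 can be sketched with a toy Gaussian/Laplacian pyramid; the pyramid depth, the 2×2 average downsample and the nearest-neighbour upsample are simplified stand-ins for the usual 5-tap pyramid filters:

```python
import numpy as np

def _down(img):
    """2x2 average downsample (stand-in for Gaussian blur + decimate)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def _up(img):
    """Nearest-neighbour 2x upsample (stand-in for pyramid expansion)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def pyramid_fuse(img1, img2, w1, w2, levels=3):
    """Multi-scale weighted fusion (sketch of claim 7): blend the Laplacian
    pyramids of the images using Gaussian pyramids of the fusion weights.
    Image sides must be divisible by 2**levels."""
    fused = []
    for _ in range(levels):
        n1, n2 = _down(img1), _down(img2)
        # Laplacian band = image minus its expanded coarse version.
        lap1, lap2 = img1 - _up(n1), img2 - _up(n2)
        fused.append(w1 * lap1 + w2 * lap2)
        img1, img2, w1, w2 = n1, n2, _down(w1), _down(w2)
    base = w1 * img1 + w2 * img2   # fused coarsest level
    # Collapse the pyramid from coarse to fine.
    out = base
    for lap in reversed(fused):
        out = _up(out) + lap
    return out
```

When both inputs are identical and the weights sum to 1 per pixel, the collapse reconstructs the input exactly, which is a quick sanity check on the decomposition.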
8. An image signal processing system for implementing an image signal processing method as claimed in claim 1, said system comprising:
the data acquisition module is used for acquiring an image to be processed, wherein the image to be processed is an RGB image;
the color correction module is used for performing white balance processing on the image to be processed to obtain a first image;
the brightness enhancement module is used for enhancing the brightness of the first image to obtain a second image;
and the detail enhancement module is used for performing gamma correction and sharpening on the second image to obtain a third image and a fourth image, respectively, and fusing the third image and the fourth image to obtain a final high-quality image.
9. A cradle head device having a computer program embedded thereon, wherein the program, when executed, implements an image signal processing method as claimed in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411963649.3A CN119402759B (en) | 2024-12-30 | 2024-12-30 | Image signal processing method, system and cradle head equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN119402759A CN119402759A (en) | 2025-02-07 |
CN119402759B true CN119402759B (en) | 2025-04-08 |
Family
ID=94431377
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411963649.3A Active CN119402759B (en) | 2024-12-30 | 2024-12-30 | Image signal processing method, system and cradle head equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119402759B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN120013805B (en) * | 2025-04-16 | 2025-07-25 | 先临三维科技股份有限公司 | Noise reduction method and device for dark areas of images |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110889812A (en) * | 2019-10-11 | 2020-03-17 | 大连海事大学 | An underwater image enhancement method based on multi-scale fusion of image feature information |
CN116630198A (en) * | 2023-06-06 | 2023-08-22 | 山东交通学院 | A multi-scale fusion underwater image enhancement method combined with adaptive gamma correction |
CN118261810A (en) * | 2024-04-02 | 2024-06-28 | 安徽工业大学 | A method for underwater image enhancement of nuclear power plants based on multi-scale pixel fusion |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6042030B2 (en) * | 2014-03-28 | 2016-12-14 | 富士フイルム株式会社 | Image processing apparatus, photographing apparatus, image processing method, and program |
US10424054B2 (en) * | 2015-06-26 | 2019-09-24 | Peking University Shenzhen Graduate School | Low-illumination image processing method and device |
CN105872510B (en) * | 2016-04-25 | 2017-11-21 | 浙江大华技术股份有限公司 | An image white balance processing method and device |
CN109191390A (en) * | 2018-08-03 | 2019-01-11 | 湘潭大学 | An image enhancement algorithm based on multi-algorithm fusion in different color spaces |
CN109903250B (en) * | 2019-02-25 | 2022-10-04 | 大连海事大学 | Underwater image sharpening processing method based on multi-scale gradient domain contrast stretching |
CN110766616B (en) * | 2019-09-12 | 2023-05-09 | 中国海洋大学 | Underwater image dodging algorithm based on single-scale Retinex method |
CN110706172B (en) * | 2019-09-27 | 2023-03-24 | 郑州轻工业学院 | Low-illumination color image enhancement method based on adaptive chaotic particle swarm optimization |
WO2021204202A1 (en) * | 2020-04-10 | 2021-10-14 | 华为技术有限公司 | Image auto white balance method and apparatus |
CN114612314B (en) * | 2021-12-21 | 2025-03-04 | 大连海事大学 | An underwater image enhancement method based on double guided filtering |
WO2023124201A1 (en) * | 2021-12-29 | 2023-07-06 | 荣耀终端有限公司 | Image processing method and electronic device |
US20230388667A1 (en) * | 2022-05-25 | 2023-11-30 | Samsung Electronics Co., Ltd. | Rgb-nir processing and calibration |
CN115423726B (en) * | 2022-11-07 | 2023-03-24 | 复亚智能科技(太仓)有限公司 | Strong light inhibition and dark light enhancement method without multi-exposure fusion |
CN118429195A (en) * | 2024-05-11 | 2024-08-02 | 杭州电子科技大学 | A novel grayscale image enhancement system and method |
CN119091379A (en) * | 2024-08-30 | 2024-12-06 | 上海天齐智能建筑股份有限公司 | A machine vision unit for smart community security management system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114331873B (en) | Non-uniform illumination color image correction method based on region division | |
CN105761227B (en) | Underwater image enhancement method based on dark channel prior and white balance | |
CN119402759B (en) | Image signal processing method, system and cradle head equipment | |
CN111429370B (en) | Underground coal mine image enhancement method, system and computer storage medium | |
CN108765342A (en) | An underwater image restoration method based on an improved dark channel prior | |
CN102789635A (en) | Image enhancement method and image enhancement device | |
CN105809643B (en) | A kind of image enchancing method based on adaptive block channel extrusion | |
CN115578297A (en) | A Generalized Attenuation Image Enhancement Method with Adaptive Color Compensation and Detail Optimization | |
CN111462022B (en) | Underwater image sharpness enhancement method | |
CN118195980B (en) | A dark detail enhancement method based on grayscale transformation | |
KR20160002517A (en) | Method and apparatus for enhancing digital image, and apparatus for image processing using the same | |
CN115456905B (en) | Single image defogging method based on bright-dark region segmentation | |
CN104021531A (en) | Improved method for enhancing dark environment images on basis of single-scale Retinex | |
CN118571161B (en) | Display control method, device and equipment of LED display screen and storage medium | |
CN107067375A (en) | A kind of image defogging method based on dark channel prior and marginal information | |
CN116703789A (en) | Image enhancement method and system | |
CN104091307A (en) | Foggy day image rapid restoration method based on feedback mean value filtering | |
Huang et al. | Efficient contrast enhancement with truncated adaptive gamma correction | |
CN107025641A (en) | Image interfusion method based on Analysis of Contrast | |
Li et al. | Soft binary segmentation-based backlit image enhancement | |
CN116309133A (en) | Low-light image enhancement method for local and global self-adaptive contrast correction | |
CN112435184A (en) | Haze sky image identification method based on Retinex and quaternion | |
Bataineh | Brightness and contrast enhancement method for color images via pairing adaptive gamma correction and histogram equalization | |
CN118071634B (en) | Self-adaptive enhancement method for low-illumination color cast image | |
Abbaspour et al. | A new fast method for foggy image enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||