Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, an object of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and an image processing circuit, which can accurately identify the noise characteristics of each image feature area in an image, perform targeted noise reduction in combination with the image features, and improve the noise reduction effect.
In order to achieve the above object, an embodiment of the first aspect of the present application provides an image processing method, including: acquiring an original image; dividing the original image into a plurality of image areas according to preset features, wherein the image areas differ in feature value of the preset features; and performing noise reduction on each image area based on artificial intelligence to obtain a target noise-reduced image.
According to the image processing method provided by the embodiment of the first aspect of the present application, the original image is acquired and divided into a plurality of image areas according to the preset features, the image areas differing in feature value of the preset features, and noise reduction is performed on each image area based on artificial intelligence to obtain a target noise-reduced image. In this way, the noise characteristics of each image feature area in the image can be accurately identified, targeted noise reduction is performed in combination with the image features, and the noise reduction effect is improved.
In order to achieve the above object, an embodiment of the second aspect of the present application provides an image processing apparatus, including: an acquisition module configured to acquire an original image; a dividing module configured to divide the original image into a plurality of image areas according to preset features, wherein the image areas differ in feature value of the preset features; and a noise reduction module configured to perform noise reduction on each image area based on artificial intelligence to obtain a target noise-reduced image.
According to the image processing apparatus provided by the embodiment of the second aspect of the present application, the original image is acquired and divided into a plurality of image areas according to the preset features, the image areas differing in feature value of the preset features, and noise reduction is performed on each image area based on artificial intelligence to obtain a target noise-reduced image. In this way, the noise characteristics of each image feature area in the image can be accurately identified, targeted noise reduction is performed in combination with the image features, and the noise reduction effect is improved.
In order to achieve the above object, an embodiment of the third aspect of the present application provides an electronic device, including: an image sensor, a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the image sensor is electrically connected to the processor, and the processor, when executing the program, implements the image processing method provided by the embodiment of the first aspect of the present application.
According to the electronic device provided by the embodiment of the third aspect of the present application, the original image is acquired and divided into a plurality of image areas according to the preset features, the image areas differing in feature value of the preset features, and noise reduction is performed on each image area based on artificial intelligence to obtain a target noise-reduced image. In this way, the noise characteristics of each image feature area in the image can be accurately identified, targeted noise reduction is performed in combination with the image features, and the noise reduction effect is improved.
In order to achieve the above object, an embodiment of the fourth aspect of the present application provides an image processing circuit, including an image signal processing (ISP) processor and a graphics processing unit (GPU). The ISP processor is electrically connected to the image sensor and is configured to control the image sensor to acquire an original image. The GPU is electrically connected to the ISP processor and is configured to divide the original image into a plurality of image areas according to preset features, the image areas differing in feature value of the preset features, and to perform noise reduction on each image area based on artificial intelligence to obtain a target noise-reduced image.
According to the image processing circuit provided by the embodiment of the fourth aspect of the present application, the original image is acquired and divided into a plurality of image areas according to the preset features, the image areas differing in feature value of the preset features, and noise reduction is performed on each image area based on artificial intelligence to obtain a target noise-reduced image. In this way, the noise characteristics of each image feature area in the image can be accurately identified, targeted noise reduction is performed in combination with the image features, and the noise reduction effect is improved.
To achieve the above object, a computer-readable storage medium is provided in an embodiment of the fifth aspect of the present application, on which a computer program is stored, which when executed by a processor implements the image processing method as provided in the embodiment of the first aspect of the present application.
According to the computer-readable storage medium provided by the embodiment of the fifth aspect of the present application, the original image is acquired and divided into a plurality of image areas according to the preset features, the image areas differing in feature value of the preset features, and noise reduction is performed on each image area based on artificial intelligence to obtain a target noise-reduced image. In this way, the noise characteristics of each image feature area in the image can be accurately identified, targeted noise reduction is performed in combination with the image features, and the noise reduction effect is improved.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or to elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and are intended only to explain the present application; they are not to be construed as limiting the present application. On the contrary, the embodiments of the present application include all changes, modifications, and equivalents coming within the spirit and scope of the appended claims.
In order to solve the technical problems in the related art that the noise characteristics of each image feature area in an image cannot be accurately identified, noise reduction is not targeted, and the noise reduction effect is poor, an embodiment of the present application provides an image processing method.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
The image processing method is applied to an electronic device, which may be a hardware device having an operating system and an imaging device, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
Referring to fig. 1, the method includes:
S101: An original image is acquired.
The original image may be, for example, an unprocessed RAW-format image acquired by an image sensor of the electronic device, which is not limited herein.
A RAW-format image is an original image obtained by the image sensor converting the captured light signal into a digital signal. A RAW-format image records the raw information of the camera sensor, as well as some metadata generated at shooting time, such as the sensitivity, shutter speed, aperture value, and white balance settings.
Whether the current shooting scene is a night scene may be determined by acquiring a preview image of the scene. Because the ambient brightness values of different scenes differ, the content of the preview image also differs. After it is determined, from the picture content of the preview image and the ambient brightness values of its regions, that the current shooting scene is a night scene, a night-scene shooting mode may be enabled to acquire the original image.
For example, if the picture content of the preview image includes a night sky or night-scene light sources, or if the ambient brightness values in the regions of the preview image conform to the brightness distribution characteristic of images captured in a night-scene environment, it can be determined that the current shooting scene is a night scene.
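By way of illustration only, the brightness-distribution check described above might be sketched in Python as follows. The 4x4 grid, the dark-level value of 50, and the 0.6 dark-cell ratio are assumptions made for this sketch and are not values specified by the present application.

```python
import numpy as np

def looks_like_night_scene(preview_gray: np.ndarray,
                           dark_ratio_threshold: float = 0.6,
                           dark_level: int = 50) -> bool:
    """Rough night-scene check on a grayscale preview image (uint8).

    The preview is split into a coarse grid; if most cells have a low mean
    brightness, the brightness distribution is treated as night-like.
    """
    h, w = preview_gray.shape
    rows, cols = 4, 4
    cell_means = [
        preview_gray[i * h // rows:(i + 1) * h // rows,
                     j * w // cols:(j + 1) * w // cols].mean()
        for i in range(rows) for j in range(cols)
    ]
    dark_cells = sum(m < dark_level for m in cell_means)
    return dark_cells / len(cell_means) >= dark_ratio_threshold
```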
S102: and combining the preset features, dividing the original image to obtain a plurality of image areas, wherein the feature values of the image areas are different based on the preset features.
Alternatively, the preset feature may be, for example, an image feature such as image contrast, sensitivity, white balance, or the like.
And the preset features are image brightness, and the targeted noise reduction is realized by combining the image brightness, and the general noise characteristic difference of image areas with different brightness features is larger, so that the noise reduction effect can be obviously improved, and the user experience is improved.
Optionally, the feature value of the image region based on the preset feature is a brightness mean value of all pixel points in the image region, or alternatively, the brightness variance value of the pixel points in the image region calculated based on a common statistical algorithm may also be used.
In a specific implementation, the brightness value of each pixel in the original image may be counted, and the brightness value of each pixel may be compared with a preset first brightness threshold and a preset second brightness threshold, where the first brightness threshold is smaller than the second brightness threshold. According to the comparison result, the original image is divided into a bright area, a bright-dark transition area, and a dark area; see fig. 2, which is a schematic diagram of image division in the embodiment of the present application. The brightness values of all pixels in the bright area are greater than or equal to the second brightness threshold, the brightness values of all pixels in the bright-dark transition area are less than the second brightness threshold and greater than or equal to the first brightness threshold, and the brightness values of all pixels in the dark area are less than the first brightness threshold.
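By way of illustration only, the two-threshold division described above, together with the per-area mean-brightness feature value mentioned earlier, might be sketched as follows. The threshold values of 60 and 180 and the use of a grayscale uint8 array are assumptions made for this sketch.

```python
import numpy as np

def divide_by_brightness(raw_gray: np.ndarray,
                         first_threshold: int = 60,
                         second_threshold: int = 180) -> dict:
    """Split an image into dark / bright-dark transition / bright areas.

    Pixels below the first threshold form the dark area, pixels at or above
    the second threshold form the bright area, and the rest form the
    bright-dark transition area.
    """
    dark = raw_gray < first_threshold
    bright = raw_gray >= second_threshold
    transition = ~dark & ~bright
    masks = {"dark": dark, "transition": transition, "bright": bright}
    # Per-area feature value: mean brightness of the pixels in each area.
    features = {name: float(raw_gray[m].mean()) if m.any() else 0.0
                for name, m in masks.items()}
    return {"masks": masks, "features": features}
```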
In other embodiments, the image may also be divided into a plurality of image areas by other image division algorithms, as long as the image areas differ in feature value of the preset feature, which is not limited herein.
S103: and denoising each image region based on artificial intelligence respectively to obtain a target denoising image.
Because the image sensor in the electronic device is subjected to different degrees of photo-electromagnetic interference between peripheral circuits and pixels of the image sensor in the electronic device during shooting, noise inevitably exists in the shot original image, and the definition of the shot image is different due to different interference degrees. Therefore, the acquired original image also has noise, and artificial intelligence noise reduction can be further performed on each image area to obtain a target noise reduction image.
Optionally, in some embodiments, referring to fig. 3, the noise reduction based on artificial intelligence for each image region includes:
s301: carrying out noise characteristic identification on each image area by adopting a neural network model and combining the characteristic values corresponding to each image area; the neural network model learns the mapping relation between the characteristic value and the noise characteristic of each image area.
Optionally, the neural network model is trained by using the sample images of the feature values until the noise characteristics identified by the neural network model are matched with the noise characteristics labeled in the corresponding sample images, and the training of the neural network model is completed.
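The application does not specify the network architecture, its inputs, or its training data, so the following PyTorch sketch shows only one hypothetical way such a model could be defined and trained: a small network maps an image patch plus its brightness feature value to a noise characteristic (here, a noise variance), and training continues until its predictions match the labeled values. The architecture, the mean-squared-error loss, and the data loader format are all assumptions.

```python
import torch
import torch.nn as nn

class RegionNoiseNet(nn.Module):
    """Maps an image patch plus its brightness feature value to a noise variance."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32 + 1, 16), nn.ReLU(),
                                  nn.Linear(16, 1))

    def forward(self, patch, feature_value):
        # patch: (N, 1, H, W); feature_value: (N, 1) mean brightness of the area.
        x = self.backbone(patch)                  # (N, 32)
        x = torch.cat([x, feature_value], dim=1)  # append the brightness feature
        return self.head(x)                       # predicted noise variance, (N, 1)

def train_noise_model(model, loader, epochs=10, lr=1e-3):
    """Train until the predicted noise characteristics match the labels.

    `loader` is assumed to yield (patch, feature_value, noise_label) batches,
    with noise_label shaped (N, 1).
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for patch, feature_value, noise_label in loader:
            opt.zero_grad()
            loss = loss_fn(model(patch, feature_value), noise_label)
            loss.backward()
            opt.step()
    return model
```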
As a possible implementation, since the neural network model has learned the mapping relationship between the feature value and the noise characteristic of each image area, each image area may be input into the neural network model separately, so that the noise characteristic corresponding to each image area is identified. This achieves targeted noise reduction, improves the noise reduction effect, and reduces the power consumption of noise reduction as much as possible.
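Continuing the sketch, per-area identification with the model defined above might look like the following; it reuses the output of divide_by_brightness() from the earlier sketch, and feeding each whole masked area to the model as a single patch is a simplification made here for illustration.

```python
import numpy as np
import torch

def identify_noise_per_area(model, raw_gray, division):
    """Predict a noise variance for each image area.

    `division` is the dict returned by divide_by_brightness(); each area is
    fed to the model together with its mean-brightness feature value.
    """
    model.eval()
    noise_by_area = {}
    with torch.no_grad():
        for name, mask in division["masks"].items():
            if not mask.any():
                continue
            # Zero out pixels outside the area and scale to [0, 1].
            patch = np.where(mask, raw_gray, 0).astype(np.float32) / 255.0
            patch_t = torch.from_numpy(patch)[None, None]  # (1, 1, H, W)
            feat_t = torch.tensor([[division["features"][name] / 255.0]])
            noise_by_area[name] = float(model(patch_t, feat_t))
    return noise_by_area
```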
Of course, the neural network model is only one possible way to implement noise reduction based on artificial intelligence; in practice, artificial-intelligence-based noise reduction may be implemented in any other feasible manner, for example, using conventional programming techniques (such as simulation and engineering methods), or using a genetic algorithm combined with an artificial neural network.
S302: Noise reduction is performed on each image area according to the identified noise characteristics.
In the embodiment of the present application, the noise characteristic may be a statistical characteristic of the random noise caused by the image sensor. This noise mainly includes thermal noise and shot noise, where thermal noise follows a Gaussian distribution and shot noise follows a Poisson distribution. The statistical characteristic in the embodiment of the present application may refer to the variance of the noise, or to other possible quantities, which is not limited herein.
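The AI denoiser itself is not spelled out here, so in the sketch below a classical non-local means filter from OpenCV stands in for it, with its strength derived from the noise variance identified for each area; the mapping from variance to the filter's h parameter (a factor of 3 on the noise standard deviation) is an illustrative assumption.

```python
import cv2
import numpy as np

def denoise_areas(raw_gray, division, noise_by_area):
    """Denoise each image area with a strength tied to its identified noise variance.

    `raw_gray` is assumed to be a uint8 grayscale image; non-local means
    stands in for the AI denoiser in this sketch.
    """
    denoised_areas = {}
    for name, mask in division["masks"].items():
        variance = noise_by_area.get(name, 0.0)
        h = 3.0 * float(np.sqrt(max(variance, 0.0))) + 1e-3  # std-like strength
        filtered = cv2.fastNlMeansDenoising(raw_gray, None, h, 7, 21)
        # Keep only this area's pixels as its "target noise reduction area".
        denoised_areas[name] = np.where(mask, filtered, 0).astype(raw_gray.dtype)
    return denoised_areas
```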
In the embodiment of the present application, after each image area is denoised according to the identified noise characteristics, each denoised image area may further be taken as a target noise reduction area, and the target noise reduction areas may be synthesized to obtain the target noise-reduced image.
The areas obtained by denoising each image area may be referred to as target noise reduction areas; the number of target noise reduction areas is the same as the number of image areas obtained by dividing the original image.
In a specific implementation, the target noise reduction areas may be synthesized based on a stitching algorithm to obtain the target noise-reduced image, which is not limited herein.
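Because the brightness-based areas in the earlier sketch partition the original image, the synthesis step can be as simple as writing each denoised area back through its mask; the following is a minimal sketch of that step and is not the stitching algorithm of the application itself.

```python
import numpy as np

def merge_target_areas(denoised_areas, division):
    """Synthesize the target noise-reduced image from the per-area results.

    Each pixel belongs to exactly one area, so the masked results can simply
    be written into one output image.
    """
    first = next(iter(denoised_areas.values()))
    target = np.zeros_like(first)
    for name, area in denoised_areas.items():
        mask = division["masks"][name]
        target[mask] = area[mask]
    return target
```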
In this embodiment, the original image is acquired and divided into a plurality of image areas according to the preset features, the image areas differing in feature value of the preset features, and noise reduction is performed on each image area based on artificial intelligence to obtain a target noise-reduced image. In this way, the noise characteristics of each image feature area in the image can be accurately identified, targeted noise reduction is performed in combination with the image features, and the noise reduction effect is improved.
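Tying the sketches above together, one hypothetical end-to-end pass over a single grayscale frame could read as follows; it simply chains the illustrative functions defined earlier.

```python
import numpy as np

def process_raw_frame(raw_gray: np.ndarray, model) -> np.ndarray:
    """End-to-end sketch: divide by brightness, identify noise, denoise, merge."""
    division = divide_by_brightness(raw_gray)
    noise_by_area = identify_noise_per_area(model, raw_gray, division)
    denoised_areas = denoise_areas(raw_gray, division, noise_by_area)
    return merge_target_areas(denoised_areas, division)

# Hypothetical usage with a random frame and an untrained model:
# frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
# result = process_raw_frame(frame, RegionNoiseNet())
```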
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Referring to fig. 4, the apparatus 400 includes:
an obtaining module 401, configured to obtain an original image;
a dividing module 402, configured to divide the original image into a plurality of image areas according to preset features, where the image areas differ in feature value of the preset features;
and a noise reduction module 403, configured to perform noise reduction on each image region based on artificial intelligence, respectively, to obtain a target noise-reduced image.
Optionally, in some embodiments, the noise reduction module 403 is specifically configured to:
performing noise characteristic identification on each image area using a neural network model in combination with the feature value corresponding to each image area, wherein the neural network model has learned the mapping relationship between the feature value and the noise characteristic of each image area;
and performing noise reduction on each image area according to the identified noise characteristics.
Optionally, in some embodiments, the neural network model is trained with sample images of each feature value; training of the neural network model is complete when the noise characteristics identified by the neural network model match the noise characteristics labeled in the corresponding sample images.
Optionally, in some embodiments, referring to fig. 5, the apparatus further includes:
a synthesizing module 404, configured to take the denoised image areas as target noise reduction areas and synthesize the target noise reduction areas to obtain the target noise-reduced image.
Optionally, in some embodiments, the preset feature is image brightness.
Optionally, in some embodiments, the feature value of an image area with respect to the preset feature is the mean brightness of all pixels in the image area.
It should be noted that the foregoing explanation of the embodiment of the image processing method is also applicable to the image processing apparatus 400 of this embodiment, and is not repeated here.
In this embodiment, the original image is acquired and divided into a plurality of image areas according to the preset features, the image areas differing in feature value of the preset features, and noise reduction is performed on each image area based on artificial intelligence to obtain a target noise-reduced image. In this way, the noise characteristics of each image feature area in the image can be accurately identified, targeted noise reduction is performed in combination with the image features, and the noise reduction effect is improved.
In order to implement the foregoing embodiments, the present application further provides an electronic device 200. Referring to fig. 6, which is a schematic structural diagram of an electronic device provided in an embodiment of the present application, the electronic device includes: an image sensor 210, a memory 230, a processor 220, and a computer program stored on the memory 230 and executable on the processor 220, wherein the image sensor 210 is electrically connected to the processor 220, and the processor 220 implements the image processing method when executing the program.
As one possible implementation, the processor 220 may include an image signal processing (ISP) processor.
The ISP processor is used for controlling the image sensor to acquire an original image.
As another possible implementation, the processor 220 may further include a graphics processing unit (GPU) connected to the ISP processor.
The GPU is configured to divide the original image into a plurality of image areas according to preset features, the image areas differing in feature value of the preset features, and to perform noise reduction on each image area based on artificial intelligence to obtain a target noise-reduced image.
Referring to fig. 7 as an example, on the basis of the electronic device in fig. 6, fig. 7 is a schematic diagram of an electronic device according to an embodiment of the present application. The memory 230 of the electronic device 200 includes a non-volatile memory 60 and an internal memory 62. The memory 230 stores computer-readable instructions which, when executed by the processor 220, cause the processor 220 to perform the image processing method of any of the above embodiments.
As shown in fig. 7, the electronic apparatus 200 includes a processor 220, a nonvolatile memory 60, an internal memory 62, a display screen 63, and an input device 64 connected via a system bus 61. The non-volatile memory 60 of the electronic device 200 stores, among other things, an operating system and computer readable instructions. The computer readable instructions can be executed by the processor 220 to implement the image processing method of the embodiment of the present application. The processor 220 is used to provide computing and control capabilities that support the operation of the overall electronic device 200. The internal memory 62 of the electronic device 200 provides an environment for the execution of computer-readable instructions in the non-volatile memory 60. The display 63 of the electronic device 200 may be a liquid crystal display or an electronic ink display, and the input device 64 may be a touch layer covered on the display 63, a button, a trackball or a touch pad arranged on a housing of the electronic device 200, or an external keyboard, a touch pad or a mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, smart glasses), etc.
It will be understood by those skilled in the art that the structure shown in fig. 7 is only a schematic diagram of a part of the structure related to the present application, and does not constitute a limitation to the electronic device 200 to which the present application is applied, and a specific electronic device 200 may include more or less components than those shown in the drawings, or combine some components, or have a different arrangement of components.
To implement the foregoing embodiments, the present application further provides an image processing circuit. Referring to fig. 8, which is a schematic diagram of an image processing circuit according to an embodiment of the present application, the image processing circuit 70 includes an image signal processing (ISP) processor 71 (the ISP processor 71 serves as the processor 220) and a graphics processing unit (GPU).
The ISP processor is electrically connected to the image sensor and is configured to control the image sensor to acquire an original image;
the GPU is electrically connected to the ISP processor and is configured to divide the original image into a plurality of image areas according to preset features, the image areas differing in feature value of the preset features, and to perform noise reduction on each image area based on artificial intelligence to obtain a target noise-reduced image.
The image data captured by the camera 73 is first processed by the ISP processor 71, which analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the camera 73. The camera module 310 may include one or more lenses 732 and an image sensor 734. The image sensor 734 may include a color filter array (e.g., a Bayer filter); the image sensor 734 may acquire the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 71. The sensor 74 (e.g., a gyroscope) may provide image processing parameters (e.g., anti-shake parameters) to the ISP processor 71 based on the interface type of the sensor 74. The sensor 74 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
Further, the image sensor 734 may also send raw image data to the sensor 74, the sensor 74 may provide the raw image data to the ISP processor 71 based on the sensor 74 interface type, or the sensor 74 may store the raw image data in the image memory 75.
The ISP processor 71 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 71 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 71 may also receive image data from an image memory 75. For example, the sensor 74 interface sends raw image data to the image memory 75, and the raw image data in the image memory 75 is then provided to the ISP processor 71 for processing. The image memory 75 may be the memory 330, a portion of the memory 330, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 734 interface, from the sensor 74 interface, or from the image memory 75, the ISP processor 71 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 75 for additional processing before being displayed. The ISP processor 71 receives the processed data from the image memory 75 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 71 may be output to the display 77 (the display 77 may include the display screen 63) for viewing by a user and/or further processing by a graphics engine or GPU. Further, the output of the ISP processor 71 may also be sent to the image memory 75, and the display 77 may read image data from the image memory 75. In one embodiment, the image memory 75 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 71 may be sent to an encoder/decoder 76 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 77. The encoder/decoder 76 may be implemented by a CPU, a GPU, or a coprocessor.
The statistical data determined by the ISP processor 71 may be sent to the control logic 72 unit. For example, the statistical data may include image sensor 734 statistical information such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 732 shading correction, and the like. Control logic 72 may include a processing element and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters for camera 73 and control parameters for ISP processor 71 based on the received statistical data. For example, the control parameters of camera 73 may include sensor 74 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 732 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 732 shading correction parameters.
The image processing method is implemented using the image processing circuit of fig. 8 as follows: the ISP processor controls the image sensor to acquire an original image; the GPU divides the original image into a plurality of image areas according to the preset features, the image areas differing in feature value of the preset features; and noise reduction is performed on each image area based on artificial intelligence to obtain a target noise-reduced image.
To implement the foregoing embodiments, the present application further provides a storage medium. When instructions in the storage medium are executed by a processor, the processor performs the following steps: acquiring an original image; dividing the original image into a plurality of image areas according to preset features, wherein the image areas differ in feature value of the preset features; and performing noise reduction on each image area based on artificial intelligence to obtain a target noise-reduced image.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, and the program can be stored in a non-volatile computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but this should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.