Disclosure of Invention
An embodiment of the present application aims to provide an image processing method, an image processing apparatus, an electronic device and a readable storage medium, so as to solve the problem in the prior art that a synthesized image is unclear because the detection precision of motion areas is low.
In a first aspect, an embodiment of the present application provides an image processing method, including:
obtaining a reference image under a reference exposure compensation value and fusion images under N contrast exposure compensation values, wherein the reference image is obtained by fusing at least two frames of images acquired under the reference exposure compensation value;
determining a motion mask image between the at least two frames of images acquired under each contrast exposure compensation value, wherein the pixel values of pixel points in the motion mask image represent the motion amplitude of the corresponding pixel points in the at least two frames of images acquired under that contrast exposure compensation value;
registering the fusion image under each contrast exposure compensation value and the motion mask image corresponding to each contrast exposure compensation value with the reference image respectively, to obtain a registration image and a registration motion mask image corresponding to each contrast exposure compensation value;
determining a ghost area in the registration image under each contrast exposure compensation value based on the registration image under each contrast exposure compensation value, the reference image and the registration motion mask image corresponding to each contrast exposure compensation value;
and fusing the registration images corresponding to the N contrast exposure compensation values with the reference image based on the ghost areas to obtain a target image.
In the above implementation, the motion mask image between the at least two frames of images acquired under each contrast exposure compensation value is obtained and then registered, so that the ghost area in the registration image under each contrast exposure compensation value can be accurately detected in combination with the registered motion mask image. This improves the detection precision of the ghost area, so that when the images are fused based on the ghost area, ghosts can be well eliminated and an image with higher definition is obtained.
Optionally, the fusing the registration images corresponding to the N contrast exposure compensation values with the reference image based on the ghost area to obtain a target image includes:
selecting, as images to be fused, two images from among the N+1 images formed by the reference image and the N registration images, and the intermediate fusion images formed by fusing two of the N+1 images;
fusing one image to be fused with the other image to be fused according to the highlight area and the non-ghost area in the ghost area of the one image to be fused, the target image being obtained after all of the N+1 images have participated in fusion once;
wherein, when an image to be fused is an intermediate fusion image, its ghost area is the area corresponding to the ghost area of the base image among the two images that participated in its fusion, and its highlight area is the area corresponding to the highlight area in that base image.
In the above implementation, the highlight areas within the ghost areas of the images are fused separately, so that pixels of highlight areas in images of different brightness can be fused better and the fused image has higher definition.
Optionally, the fusing the one image to be fused with the other image to be fused according to the highlight area and the non-ghost area in the ghost area of the one image to be fused includes:
taking the pixel points of the highlight region in the one image to be fused, together with the pixel points in the other image to be fused of the region corresponding to the non-highlight region within the ghost region of the one image to be fused, as the pixel points of the corresponding regions in the image obtained after the two images to be fused are fused;
and fusing the pixel points of the areas other than the ghost area in the one image to be fused with the pixel points of the corresponding areas in the other image to be fused;
wherein the exposure compensation value corresponding to the ghost area in the one image to be fused is smaller than that of the other image to be fused.
In the above implementation, this fusion method allows the pixels in the highlight areas of images of different brightness to be fused well, so that the fused image has higher definition.
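The highlight-aware handling of ghost areas described above can be sketched as follows, assuming boolean masks and the convention that `base` is the image to be fused whose ghost area corresponds to the smaller exposure compensation value (the function and variable names are illustrative, not taken from the application):

```python
import numpy as np

def fuse_ghost_regions(base, other, ghost_mask, highlight_mask):
    """Resolve ghost areas between two images to be fused.

    Inside the ghost area, highlight pixels are taken from `base`
    (the image with the smaller exposure compensation value) and
    non-highlight ghost pixels from `other`. Areas outside the
    ghost area are left for the weighted fusion step.
    """
    out = base.copy()
    # non-highlight part of the ghost area comes from the other image
    take_other = ghost_mask & ~highlight_mask
    out[take_other] = other[take_other]
    return out
```

A quick call with a 2x2 toy pair shows the selection rule: highlight ghost pixels keep the base value, non-highlight ghost pixels take the other image's value.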
Optionally, the fusing the pixel points of the areas other than the ghost area in the one image to be fused with the pixel points of the corresponding areas in the other image to be fused includes:
selecting one of the one image to be fused and the other image to be fused as a base image, the other being a non-base image;
determining the pixel value magnitude of each corresponding pixel point in the areas of the base image other than the ghost area;
determining, based on the pixel value magnitudes, the fusion weights of the corresponding pixel points in those areas of the base image and the fusion weights of the corresponding pixel points in the corresponding areas of the non-base image;
and fusing the corresponding pixel points of those areas of the base image with those of the corresponding areas of the non-base image based on the two sets of fusion weights.
In the above implementation, performing pixel fusion based on fusion weights allows the highlight pixels in the image to be extracted effectively and fused well with the low-light pixels.
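A minimal sketch of the weighted fusion outside the ghost area follows. The Gaussian "well-exposedness" weight function and the [0, 1] value range are assumptions, since the application does not fix a particular weight model:

```python
import numpy as np

def fuse_non_ghost(base, non_base, ghost_mask, sigma=0.2):
    """Fuse areas outside the ghost area using per-pixel weights
    derived from the pixel value magnitudes of the base image.

    Assumed weight model: mid-range (well-exposed) base pixels get
    high weight; over- or under-exposed ones defer to the non-base
    image. Images are assumed normalised to [0, 1].
    """
    w = np.exp(-((base - 0.5) ** 2) / (2 * sigma ** 2))  # base-image weight
    fused = w * base + (1.0 - w) * non_base
    out = base.copy()
    keep = ~ghost_mask            # only fuse outside the ghost area
    out[keep] = fused[keep]
    return out
```

A well-exposed base pixel (0.5) is kept as-is, while an underexposed base pixel (0.0) is pulled strongly toward the non-base value.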
Optionally, the N+1 images are ordered by their corresponding exposure compensation values from large to small or from small to large, the i-th image being the i-th in that order, and the selecting, as images to be fused, two images from among the N+1 images formed by the reference image and the N registration images, and the intermediate fusion images formed by fusing two of the N+1 images, includes:
fusing the 1st image and the 2nd image to form an intermediate fusion image;
and taking i as 3 to N+1 in turn, selecting the i-th image in the order and the intermediate fusion image formed by fusing all images before it as the images to be fused.
In the above implementation, the images are fused in order of the magnitudes of their exposure compensation values, so that images of different brightness can be fused better.
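The sequential scheme above is a left fold over the images sorted by exposure compensation value. A sketch, with a placeholder averaging `fuse_pair` standing in for the ghost-aware pairwise fusion described earlier:

```python
from functools import reduce

def fuse_pair(a, b):
    # placeholder for the ghost-aware pairwise fusion; here a plain average
    return 0.5 * (a + b)

def sequential_fuse(images):
    """Fuse image 1 with image 2, then fold each subsequent image
    (i = 3 .. N+1, sorted by exposure compensation value) into the
    running intermediate fusion image."""
    return reduce(fuse_pair, images)
```

Each image participates in fusion exactly once, and after the last step the accumulated intermediate fusion image is the target image.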
Optionally, the N+1 images are ordered by their corresponding exposure compensation values from large to small or from small to large, the i-th image being the i-th in that order, and the selecting, as images to be fused, two images from among the N+1 images formed by the reference image and the N registration images, and the intermediate fusion images formed by fusing two of the N+1 images, includes:
taking i as each odd number from 1 to N+1 in turn, fusing the adjacent i-th and (i+1)-th images in the order to form intermediate fusion images;
and selecting two adjacent intermediate fusion images in turn, in the order in which the intermediate fusion images were obtained, as the images to be fused.
In the above implementation, adjacent images of different brightness are fused, which reduces brightness errors during fusion and gives a better fusion effect.
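The adjacent-pair scheme can be sketched as a bottom-up reduction. Carrying an unpaired trailing image up to the next level is an assumption, since the text only describes i taken as odd numbers from 1 to N+1:

```python
def pairwise_fuse(images, fuse_pair):
    """Fuse adjacent images (1&2, 3&4, ...) into intermediate fusion
    images, then fuse adjacent intermediates in the order they were
    obtained, repeating until a single image remains."""
    level = list(images)
    while len(level) > 1:
        nxt = [fuse_pair(level[j], level[j + 1])
               for j in range(0, len(level) - 1, 2)]
        if len(level) % 2:       # odd count: carry the last image up (assumed)
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

Because only neighbours in the exposure ordering are ever fused directly, the brightness gap at each fusion step stays small.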
Optionally, when an image to be fused is one of the N+1 images, the ghost area in that image to be fused is determined by:
performing binarization processing on the initial ghost area in the corresponding image to be fused to obtain a binarized ghost area;
and dividing the binarized ghost area to obtain a plurality of non-adjacent ghost areas.
In the above implementation, binarizing the ghost area effectively filters out noise in the ghost area.
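The binarize-then-split step can be sketched with a simple flood fill. The threshold value and 4-connectivity are assumptions; the application does not prescribe either:

```python
import numpy as np
from collections import deque

def split_ghost_regions(ghost_map, thresh):
    """Binarise an initial ghost map, then split it into non-adjacent
    (4-connected) ghost regions. Returns one boolean mask per region."""
    binary = ghost_map > thresh
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                mask = np.zeros_like(binary)   # one connected region
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    mask[y, x] = True
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                regions.append(mask)
    return regions
```

The resulting per-region masks are what the highlight classification in the next optional step operates on.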
Optionally, the highlight region within the ghost region of the corresponding image to be fused is determined by:
determining the average pixel value of the pixel points in each ghost area of the corresponding image to be fused;
and if the average pixel value of a ghost area is larger than a preset value, determining that that ghost area in the corresponding image to be fused is a highlight area.
In the above implementation, dividing the ghost area into separate regions allows the highlight areas within it to be found more accurately.
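The per-region highlight test is a one-liner over the region masks; the preset value of 180 (for an 8-bit range) is an illustrative assumption:

```python
import numpy as np

def highlight_regions(image, region_masks, preset=180):
    """Classify each ghost region: it is a highlight region if the
    average pixel value of its pixels in the image exceeds the
    preset value (180 here is an assumed 8-bit threshold)."""
    return [image[m].mean() > preset for m in region_masks]
```

Bright ghost regions (e.g. lamps, the moon) are thereby routed to the highlight branch of the fusion rule, and dark ones to the ordinary branch.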
Optionally, when the image to be fused is a registration image, the initial ghost area in the corresponding registration image is determined by:
determining the initial ghost area in the registration image under each contrast exposure compensation value based on the pixel deviation between the registration image and the reference image under that contrast exposure compensation value and the pixel values in the registration motion mask image corresponding to that contrast exposure compensation value.
Optionally, the determining the initial ghost area in the registration image under each contrast exposure compensation value based on the pixel deviation between the registration image and the reference image under each contrast exposure compensation value and the pixel values in the registration motion mask image corresponding to each contrast exposure compensation value includes:
determining the pixel difference values of the corresponding pixel points between the registration image and the reference image under each contrast exposure compensation value;
judging whether the pixel difference value of a corresponding pixel point is larger than a first threshold value, and whether the pixel value of the corresponding pixel point in the registration motion mask image corresponding to each contrast exposure compensation value is larger than a second threshold value;
and if the pixel difference value of the corresponding pixel point is larger than the first threshold value and the pixel value of the corresponding pixel point in the registration motion mask image is larger than the second threshold value, determining that the position of that pixel point in the registration image under the corresponding contrast exposure compensation value belongs to the initial ghost area.
In the above implementation, ghost detection in the registration image is performed in combination with the pixel values of the pixel points in the registration motion mask image. This avoids the prior-art problem of a large number of highlight areas being falsely detected as ghost areas when ghost detection relies only on pixel differences between two original images, and thus effectively improves the accuracy of ghost detection.
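The two-threshold test can be written directly; the concrete threshold values are illustrative assumptions:

```python
import numpy as np

def initial_ghost_area(registered, reference, registered_motion_mask,
                       t1=25, t2=10):
    """A pixel belongs to the initial ghost area only if BOTH its
    pixel difference against the reference image exceeds a first
    threshold AND its registered motion mask value exceeds a second
    threshold. Threshold values t1, t2 are assumed examples."""
    diff = np.abs(registered.astype(np.int32) - reference.astype(np.int32))
    return (diff > t1) & (registered_motion_mask > t2)
```

The second condition is what suppresses the false positives: a bright pixel that differs strongly from the reference but shows no motion in the mask (e.g. a static highlight) is not flagged as ghost.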
Optionally, the registering the fusion image under each contrast exposure compensation value and the motion mask image corresponding to each contrast exposure compensation value with the reference image respectively, to obtain a registration image and a registration motion mask image corresponding to each contrast exposure compensation value, includes:
adjusting the image brightness of the fusion image under each contrast exposure compensation value to the image brightness of the reference image;
determining registration reference data for registering the brightness-adjusted fusion image under each contrast exposure compensation value with the reference image;
and registering the fusion image under each contrast exposure compensation value based on the registration reference data to obtain the registration image corresponding to each contrast exposure compensation value.
In the above implementation, registering the fusion image corresponding to each contrast exposure compensation value to the reference image facilitates the subsequent ghost detection and image fusion.
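The brightness-adjustment step might look like this simple gain model (an assumption; the application does not prescribe the adjustment method). The registration reference data, e.g. a homography, would then be estimated on the brightness-matched pair and applied to the fusion image:

```python
import numpy as np

def match_brightness(fused, reference):
    """Adjust the image brightness of a fusion image to that of the
    reference image by scaling with the ratio of mean intensities
    (a simple gain model -- an assumption; a tone curve or histogram
    mapping could equally be used). Values are clipped to 8-bit range."""
    gain = reference.mean() / max(fused.mean(), 1e-6)
    return np.clip(fused * gain, 0, 255)
```

Matching brightness first matters because feature matching and pixel-difference metrics used to derive the registration reference data are unreliable across large exposure gaps.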
Optionally, the registering the fusion image under each contrast exposure compensation value and the motion mask image corresponding to each contrast exposure compensation value with the reference image respectively, to obtain a registration image and a registration motion mask image corresponding to each contrast exposure compensation value, further includes:
registering the motion mask image corresponding to each contrast exposure compensation value based on the registration reference data to obtain the registration motion mask image corresponding to each contrast exposure compensation value.
In the above implementation, registering the motion mask image based on the registration reference data makes it convenient, during ghost detection, to effectively combine the pixel values at the corresponding positions in the registration motion mask image.
Optionally, after the obtaining the target image, the method further includes:
adjusting the target image with a contrast limited adaptive histogram equalization (CLAHE) algorithm, so that an image with better contrast and definition can be obtained.
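For reference, plain global histogram equalization on an 8-bit image looks as follows; CLAHE differs in that it clips each local tile's histogram before building the mapping, which limits contrast amplification (this sketch is illustrative, not the claimed algorithm):

```python
import numpy as np

def equalize(img):
    """Global histogram equalisation of an 8-bit image: map each grey
    level through the normalised cumulative histogram. A simplified
    stand-in for CLAHE, which works per tile with a clipped histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                      # normalise to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]
```

The per-tile clipping in CLAHE is what keeps noise in flat regions from being over-amplified, which is why the claim uses the contrast-limited variant rather than this global form.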
Optionally, the obtaining the reference image under the reference exposure compensation value and the fusion images under the N contrast exposure compensation values includes:
acquiring, for the same scene, at least two frames of images under each of the N contrast exposure compensation values and at least two frames of images under the reference exposure compensation value;
and performing fusion processing on the at least two frames of images captured under each contrast exposure compensation value to obtain the fusion images under the N contrast exposure compensation values, and performing fusion processing on the at least two frames of images acquired under the reference exposure compensation value to obtain the reference image under the reference exposure compensation value.
In the above implementation, fusing the acquired images effectively filters out noise in them and achieves an image denoising effect.
Optionally, the fusion processing is performed on at least two frames of images by:
determining the image with the highest definition among the at least two frames of images as a reference frame;
and fusing the other frame images among the at least two frames of images to the reference frame to obtain the corresponding fusion image.
In the above implementation, the image with the highest definition is selected as the reference frame, so that after fusion a well-denoised fusion image is obtained.
Optionally, the fusing the other frame images among the at least two frames of images to the reference frame to obtain the corresponding fusion image includes:
determining the pixel difference values of the corresponding pixel points between each other frame image and the reference frame;
determining the fusion weight of each corresponding pixel point in each other frame image based on the pixel difference values;
and fusing each corresponding pixel point of the other frame images with that of the reference frame based on the fusion weights, to obtain the corresponding fusion image.
In the above implementation, performing pixel fusion based on fusion weights allows the highlight pixels in the image to be extracted effectively and fused well with the low-light pixels.
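A sketch of the difference-driven weighting for the burst-denoising fusion; the Gaussian weight model (large pixel difference implies motion or an outlier, so small weight) is an assumption, as the application only says the weights are derived from the pixel differences:

```python
import numpy as np

def fuse_to_reference(reference, frames, sigma=20.0):
    """Fuse the other frames into the reference frame. Each pixel of
    an other frame gets a weight that shrinks with its difference to
    the reference frame, so moving or noisy pixels contribute little
    (Gaussian weight model is an assumed choice)."""
    acc = reference.astype(np.float64)
    wsum = np.ones_like(acc)                 # reference frame has weight 1
    for f in frames:
        f = f.astype(np.float64)
        w = np.exp(-((f - reference) ** 2) / (2 * sigma ** 2))
        acc += w * f
        wsum += w
    return acc / wsum
```

Pixels that agree across the burst are averaged (noise reduction), while a frame's pixel that deviates wildly is effectively ignored.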
Optionally, the at least two frames of images acquired under each contrast exposure compensation value include a reference frame, and the determining a motion mask image between the at least two frames of images acquired under each contrast exposure compensation value includes:
acquiring the pixel difference values of the corresponding pixel points between each other frame image among the at least two frames of images acquired under each contrast exposure compensation value and the reference frame;
taking the pixel difference values of the corresponding pixel points as an initial motion mask image between each other frame image and the reference frame, to obtain a plurality of initial motion mask images;
and superposing the plurality of initial motion mask images to obtain the motion mask image between the at least two frames of images acquired under each contrast exposure compensation value.
In the above implementation, determining the corresponding motion mask image from the pixel difference values between the at least two frames acquired under each contrast exposure compensation value allows the motion areas in the images to be determined effectively.
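The mask construction above can be sketched in a few lines; summation is an assumed form of "superposing" the initial masks (taking a per-pixel maximum would be an equally plausible reading):

```python
import numpy as np

def motion_mask(frames, ref_index=0):
    """Build the motion mask image for a burst captured under one
    contrast exposure compensation value: each other frame's absolute
    pixel difference against the reference frame forms an initial
    motion mask, and the initial masks are superposed (summed here)
    into the final motion mask image."""
    ref = frames[ref_index].astype(np.int64)
    mask = np.zeros_like(ref)
    for i, f in enumerate(frames):
        if i != ref_index:
            mask += np.abs(f.astype(np.int64) - ref)
    return mask
```

Static pixels accumulate a value near zero, while pixels covered by a moving object in any frame accumulate a large value, i.e. a large motion amplitude.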
Optionally, the number of images acquired under each contrast exposure compensation value and the number of images acquired under the reference exposure compensation value are determined according to the sensitivities corresponding to the different exposure compensation values.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an image acquisition module, configured to obtain a reference image under a reference exposure compensation value and fusion images under N contrast exposure compensation values, wherein the reference image is obtained by fusing at least two frames of images acquired under the reference exposure compensation value;
a motion mask image acquisition module, configured to determine a motion mask image between the at least two frames of images acquired under each contrast exposure compensation value, wherein the pixel values of pixel points in the motion mask image represent the motion amplitude of the corresponding pixel points in the at least two frames of images acquired under that contrast exposure compensation value;
an image registration module, configured to register the fusion image under each contrast exposure compensation value and the motion mask image corresponding to each contrast exposure compensation value with the reference image respectively, to obtain a registration image and a registration motion mask image corresponding to each contrast exposure compensation value;
a ghost area detection module, configured to determine the ghost area in the registration image under each contrast exposure compensation value based on the registration image under each contrast exposure compensation value, the reference image and the registration motion mask image corresponding to each contrast exposure compensation value;
and an image fusion module, configured to fuse the registration images corresponding to the N contrast exposure compensation values with the reference image based on the ghost areas to obtain a target image.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing computer readable instructions which, when executed by the processor, perform the steps of the method as provided in the first aspect above.
In a fourth aspect, an embodiment of the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method provided in the first aspect above.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In the image processing method provided by the embodiment of the present application, the motion mask image between the at least two frames of images acquired under each contrast exposure compensation value is obtained and then registered, so that the ghost area in the registration image under each contrast exposure compensation value can be accurately detected in combination with the registered motion mask image. This improves the detection precision of the ghost area, so that ghosts can be well eliminated when the images are fused based on the ghost area, and an image with higher definition is obtained.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device for performing an image processing method according to an embodiment of the present application. The electronic device may include at least one processor 110, such as a CPU, at least one communication interface 120, at least one memory 130, and at least one communication bus 140, where the communication bus 140 is used to enable direct connection communication among these components. The communication interface 120 of the device in the embodiment of the present application is used for signaling or data communication with other node devices. The memory 130 may be a high-speed RAM or a non-volatile memory, such as at least one disk memory, and may optionally also be at least one storage device located remotely from the aforementioned processor. The memory 130 stores computer readable instructions which, when executed by the processor 110, cause the electronic device to perform the method shown in fig. 2 described below. For example, the memory 130 may store the images corresponding to the respective exposure compensation values, and during image processing the processor 110 may obtain the corresponding images from the memory 130 and process them accordingly to obtain the fused target image.
In the embodiment of the present application, the electronic device may be a terminal device. In that case the terminal device may further include a camera, which can be started by an application program instruction to realize a photographing or video-recording function and which sends the acquired images to the processor in the terminal device for corresponding processing. The terminal device may be a hardware device equipped with various operating systems and imaging devices, such as a smart phone, a tablet computer, a personal digital assistant, or a wearable device.
It will be appreciated that the configuration shown in fig. 1 is merely illustrative, and that the electronic device may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 2, fig. 2 is a flowchart of an image processing method according to an embodiment of the application, where the method includes the following steps:
step S110, obtaining a reference image under the reference exposure compensation value and fusion images under N contrast exposure compensation values.
Exposure compensation is a way of controlling exposure: the terminal device meters the light of the subject to obtain a shutter and aperture combination, and the metered shutter speed is then manually changed through exposure compensation.
The exposure compensation value generally ranges from about -3 EV to +3 EV; for example, it can be EV+2, EV+1, EV0, EV-1 or EV-2. If the ambient light source is dark, the exposure value can be increased to improve the clarity of the picture. Of course, in practical applications, different terminal devices may support more or fewer exposure compensation values, and when capturing an image the appropriate value can be selected as required.
Because of factors such as the light intensity of the shooting scene and the degree of shake of the terminal device, a captured image may be unclear or contain noise. Therefore, multiple frames of original images are generally captured at the corresponding exposure compensation value, from which clear original images can be selected for fusion and noise reduction.
In the embodiment of the present application, in order to obtain an image with better definition, multiple exposure compensation values can be used for shooting, namely a reference exposure compensation value and N contrast exposure compensation values, where N can be an integer greater than or equal to 1. The reference exposure compensation value can be understood as the normal exposure compensation value, and the contrast exposure compensation values are variations of it that increase or decrease the exposure amount. For example, if the reference exposure compensation value is EV0, the N contrast exposure compensation values may include EV+2, EV+1, EV-1 and EV-2, where each +1 EV step doubles the exposure amount relative to EV0 and each -1 EV step halves it: EV+1 doubles the exposure amount once, EV+2 doubles it twice, EV-1 halves it once, and EV-2 halves it twice.
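The EV semantics above amount to a power-of-two exposure multiplier relative to EV0:

```python
def exposure_factor(ev):
    """Exposure multiplier relative to EV0: each +1 EV step doubles
    the exposure amount, each -1 EV step halves it."""
    return 2.0 ** ev
```

So EV+2 corresponds to four times the EV0 exposure amount and EV-2 to one quarter of it.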
In the embodiment of the present application, for convenience, the following description takes the reference exposure compensation value as EV0 and the N contrast exposure compensation values as EV+2, EV+1, EV-1 and EV-2 as an example.
In order to better denoise the images, multiple frames can be acquired under each exposure compensation value and then fused, yielding the corresponding fusion image. One or at least two frames can be acquired under the reference exposure compensation value; when at least two frames are acquired, the reference image is obtained by fusing them, for example, the reference image corresponding to EV0 is obtained by fusing at least two acquired EV0 frames. The fusion image under each contrast exposure compensation value is likewise obtained by fusing at least two frames acquired under that contrast exposure compensation value; for example, the fusion image corresponding to EV+1 is obtained by fusing at least two acquired EV+1 frames. The reference image and fusion images obtained in this way are well denoised, which helps improve the accuracy of the subsequent ghost detection.
If only one frame of image is acquired under the reference exposure compensation value, the reference image is the acquired frame of image. If at least two frames of images are acquired under the reference exposure compensation value, the reference image is a fused image obtained by fusing the at least two frames of images.
Step S120, determining a motion mask image between the at least two frames of images acquired under each contrast exposure compensation value.
The pixel values of the pixel points in the motion mask image can be used for representing the motion amplitude of the corresponding pixel points in at least two frames of images acquired by each contrast exposure compensation value. It can be understood that, due to shake during shooting, there is a motion shift of an object in at least two frames of images obtained by shooting, so if the motion shift is large, the pixel value of the pixel point in the motion mask image is large, that is, the motion amplitude of the corresponding pixel point in at least two frames of images is large.
Step S130, registering the fusion image under each contrast exposure compensation value and the motion mask image corresponding to each contrast exposure compensation value with the reference image respectively, to obtain a registration image and a registration motion mask image corresponding to each contrast exposure compensation value.
Since the images acquired under the respective exposure compensation values may differ, the fusion images and the motion mask images also need to be registered to facilitate subsequent image fusion. Taking the reference image as the benchmark, the fusion image under each contrast exposure compensation value is registered with the reference image so that each pixel point in each fusion image is aligned with the corresponding pixel point in the reference image, yielding the registration image corresponding to each contrast exposure compensation value. Each motion mask image is likewise registered to the reference image so that each of its pixel points is aligned with the corresponding pixel point in the reference image, yielding the registration motion mask image corresponding to each contrast exposure compensation value, which facilitates the subsequent detection of ghost areas in the registration images.
Step S140, determining a ghost area in the registration image under each contrast exposure compensation value based on the registration image under each contrast exposure compensation value, the reference image and the registration motion mask image corresponding to each contrast exposure compensation value.
In the shooting process, some areas in the obtained image may become blurred or semi-transparent because of moving objects in the scene or shake during shooting; this phenomenon is called ghosting, and the affected areas are called ghost areas. To obtain a clear image, the ghost areas in the image must be detected during image fusion so that they can be eliminated, giving the fused image higher definition.
Because the highlight areas in images obtained with different exposure compensation values differ markedly, using only the pixel differences between those images as the basis for ghost detection would falsely detect a large number of highlight areas as ghost areas, lowering the detection precision of the ghost areas, so that the finally obtained fusion image would still be unclear. Therefore, in the embodiment of the present application, in order to improve the detection accuracy, the ghost area in the registration image is detected in combination with the registration motion mask image.
Step S150, fusing the registration images corresponding to the N contrast exposure compensation values with the reference image based on the ghost areas to obtain a target image.
After the ghost areas in each registration image are detected, the N registration images and the reference image, N+1 images in total, can be fused based on the ghost areas, so that the ghost areas are eliminated during fusion and a target image with higher definition is obtained.
In the above implementation, the motion mask image between the at least two frames of images acquired under each contrast exposure compensation value is obtained and then registered, so that the ghost area in the registration image under each contrast exposure compensation value can be accurately detected in combination with the registered motion mask image. This improves the detection precision of the ghost area, so that when the images are fused based on the ghost area, ghosts can be well eliminated and an image with higher definition is obtained.
As an embodiment, in order to perform noise reduction processing on the images acquired at each exposure compensation value, the images acquired at each exposure compensation value may first be subjected to fusion processing. During the fusion processing, at least two frames of images acquired at each of the N contrast exposure compensation values and at least two frames of images acquired at the reference exposure compensation value for the same scene may be obtained first. The at least two frames of images captured at each contrast exposure compensation value are then fused to obtain the N fusion images at the contrast exposure compensation values, and the at least two frames of images acquired at the reference exposure compensation value are fused to obtain the reference image at the reference exposure compensation value.
The same scene may refer to image acquisition of the same object in the scene under each exposure compensation value, for example, capturing the moon, capturing a person or an animal, and the like. When image acquisition is carried out, the N contrast exposure compensation values and the reference exposure compensation value can be stored in the terminal device in advance, that is, specific values of the exposure compensation values EV+2, EV+1, EV0, EV-1 and EV-2 can be preset in the terminal device, so that after receiving a photographing instruction triggered by a user, the terminal device can automatically set the photographing parameters to the corresponding exposure compensation values for image acquisition. For example, when the exposure compensation value is set to EV+2, the terminal device acquires at least two corresponding frames of images for the scene, adjusts the exposure compensation value to EV+1 after those images are acquired, and continues to acquire images, so that the images under each exposure compensation value can be acquired.
The at least two frames of images collected by the terminal device under each exposure compensation value can be obtained by continuously shooting the same scene within a short time (such as 1 s), and the collected images can form a group of image frame sequences according to the order of their acquisition times.
In addition, when the image is acquired, the image can be acquired by one camera for the same scene, or the image can be acquired by two or more cameras for the same scene, and then the image acquired by the cameras can be taken as the image acquired under the corresponding exposure compensation value.
In the implementation process, the collected images are fused, so that noise in the collected images can be effectively filtered, and the effect of denoising the images is achieved.
In the above embodiment, when capturing images under different exposure compensation values, in order to facilitate image capturing, the conditions under which the terminal device captures images may be set in advance. That is, before capturing images, it may first be determined whether the brightness of the environment around the shooting scene is less than a preset brightness threshold; if so, the at least two frames of images captured for the same scene under each of the N contrast exposure compensation values and the at least two frames of images captured under the reference exposure compensation value are obtained.
An ambient light sensor can be provided in the terminal device, so that the ambient brightness of the shooting scene can be obtained through the ambient light sensor, and the preset brightness threshold can be flexibly set according to actual requirements. In this implementation manner, if the terminal device receives a photographing instruction, image acquisition is performed under the different exposure compensation values only when the ambient brightness is less than the preset brightness threshold, which indicates that the environment is dark at this time; in order to obtain an image with a better effect, image acquisition is performed under the different exposure compensation values, and the acquired images are then processed correspondingly to obtain the image with a better effect.
As another embodiment, before images are acquired, a photographing instruction may be received first, and it is then determined whether the photographing mode corresponding to the photographing instruction is a night scene photographing mode; if so, the at least two frames of images acquired for the same scene under each of the N contrast exposure compensation values and the at least two frames of images acquired under the reference exposure compensation value are obtained. That is, in this case, image acquisition is performed under different exposure compensation values only in the night scene shooting mode, which indicates that a night scene is being captured; in order to obtain an image with a better effect, image acquisition is performed under different exposure compensation values, and the acquired images are then processed correspondingly to obtain an image with a better effect.
Of course, the two modes can be combined, that is, before image acquisition, whether the ambient brightness of the shooting scene is smaller than a preset brightness threshold value or not and whether the shooting scene is in a night scene shooting mode or not can be judged, and when the ambient brightness is smaller than the preset brightness threshold value and the shooting scene is in the night scene shooting mode, different exposure compensation values are adopted to acquire the image.
It can be understood that in the practical application process, the judging conditions for image acquisition can be flexibly set according to the actual requirements, and are not limited to the above embodiments, and other judging conditions can be set, for example, whether the terminal device is in a night scene shooting mode or not and whether the terminal device is in a stable state can be judged after a shooting instruction is received, and when the terminal device is in the night scene shooting mode and the terminal device is in the stable state, different exposure compensation values are adopted for image acquisition, and other judging conditions are not listed here.
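The judging conditions discussed above can be sketched as a simple predicate. The function and parameter names below are illustrative only and are not part of the application:

```python
def should_bracket(ambient_brightness, brightness_threshold, night_mode, device_stable):
    """Illustrative sketch: decide whether to capture images under the
    different exposure compensation values, combining the example
    conditions discussed above (all names are hypothetical)."""
    return (ambient_brightness < brightness_threshold
            and night_mode
            and device_stable)
```

As the text notes, any subset or other combination of such conditions can be used instead.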
As an embodiment, in order to facilitate the terminal device automatically collecting images under different exposure compensation values, the number of images photographed under each exposure compensation value may be preset, for example, the number of images photographed under EV+1 is set to 5 frames, the number of images photographed under EV0 is set to 4 frames, and so on, so that the terminal device can photograph the corresponding number of images under the different exposure compensation values; for example, after photographing 4 frames of images under EV0, it switches to EV+1 and then photographs 5 frames of images under EV+1.
Alternatively, the number of images acquired at each contrast exposure compensation value and the number of images acquired at the reference exposure compensation value may be determined according to the sensitivities corresponding to the different exposure compensation values.
The sensitivity may be calculated based on the exposure compensation value, and the number of images to be shot corresponding to different sensitivities may be preset in the terminal device, so that when images are shot under different exposure compensation values, the sensitivity corresponding to the exposure compensation value may be obtained first, and the corresponding number of images to be shot is then looked up. Generally, from EV+2 to EV-2 the corresponding sensitivities decrease in turn, so the number of corresponding shot images may also decrease in turn; for example, for a certain scene, the numbers of images shot from EV+2 to EV-2 may be 6, 5, 4, 3 and 2.
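The preset frame-count lookup can be sketched as a small table keyed by compensation value; the table values and names below are illustrative, mirroring the example counts above:

```python
# Hypothetical preset table: frames to capture per exposure compensation value,
# mirroring the example counts 6, 5, 4, 3, 2 from EV+2 down to EV-2.
FRAMES_PER_EV = {"EV+2": 6, "EV+1": 5, "EV0": 4, "EV-1": 3, "EV-2": 2}

def frames_to_capture(ev_label):
    # fall back to a single frame for an unlisted compensation value
    return FRAMES_PER_EV.get(ev_label, 1)
```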
In the above embodiment, the images under each exposure compensation value may be fused by determining the image with the highest definition among the at least two frames of images as a reference image, and then fusing the other frames of the at least two frames of images onto that reference image to obtain the corresponding fusion image.
The at least two frames of images under each contrast exposure compensation value and the at least two frames of images under the reference exposure compensation value can both be fused in the above manner, namely, one frame of image is selected as a reference image, the other frames of images are then fused to the reference image, and finally one fused image is obtained.
The image with the highest definition can be determined by adopting a corresponding gradient algorithm, such as image definition evaluation based on the maximum value of the gradient edge, or image definition evaluation based on the point sharpness and the square gradient, or image definition evaluation based on a Brenner gradient function, and the like, so that the definition evaluation can be performed on each frame of image, and then the image with the highest definition is used as a reference image, so that after fusion, a fused image with good denoising can be obtained. The implementation process for performing sharpness evaluation specifically may refer to the related implementation process in the prior art, and will not be described in detail herein.
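As one concrete possibility, the Brenner gradient evaluation mentioned above can be sketched as follows; this is a minimal illustration, not the application's prescribed implementation:

```python
import numpy as np

def brenner_sharpness(img):
    """Brenner gradient: sum of squared differences between pixels two
    columns apart; larger values indicate a sharper image."""
    img = np.asarray(img, dtype=np.float64)
    d = img[:, 2:] - img[:, :-2]
    return float(np.sum(d * d))

def pick_reference(frames):
    # choose the sharpest frame as the per-EV reference for fusion
    return max(frames, key=brenner_sharpness)
```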
In the above fusion, the images may be fused by fusing the pixel value of each pixel point in the other frames of images with the pixel value of the corresponding pixel point in the reference image, for example, taking the average value of the pixel values of two corresponding pixel points as the pixel value of that pixel point in the fused image; in this manner, the multiple frames of images can be fused into one image.
In the embodiment of the application, in order to perform better fusion denoising processing on the images, the fusion mode can be that the pixel difference value of each pixel point in other each frame of images and the reference image is determined, the fusion weight of each corresponding pixel point in other each frame of images is determined based on the pixel difference value, and then the fusion weight of each corresponding pixel point in other each frame of images is used for fusing the other frame of images and each corresponding pixel point in the reference image, so as to obtain the corresponding fusion image.
For example, as shown in fig. 3, fig. 3 is a three-frame image acquired at ev+1, and pixel values of respective pixels in each image are as shown in fig. 3, with an image labeled b as a reference image. It should be noted that, before the pixel fusion, because there may be a deviation between each frame of image, the other two frames of images may also be registered to the reference image first, and the specific registration process may be implemented by using an existing related registration method, which is not described in detail herein.
Assuming that the three frames of images shown in fig. 3 are the three registered frames, each pixel point of the three frames is aligned; for example, pixel point 1 in image a, pixel point 2 in image b and pixel point 3 in image c are aligned. When the three pixel points are fused, the pixel difference value between pixel point 1 and pixel point 2 and the pixel difference value between pixel point 3 and pixel point 2 can be calculated first, and the fusion weight can be calculated as (255 - pixel difference value)/255; in practical application, the fusion weight can also be calculated in other ways, such as pixel difference value/255, which can be flexibly set according to practical requirements. Under the former calculation, the closer a pixel is to the reference pixel at the same position, the larger its fusion weight. Based on the above manner, the fusion weights of pixel point 1 and pixel point 3 can be obtained. For example, if the pixel value of pixel point 1 is 30, the pixel value of pixel point 2 is 20, and the pixel value of pixel point 3 is 40, the fusion weight of pixel point 1 is (255-10)/255 ≈ 0.96, and the fusion weight of pixel point 3 is (255-20)/255 ≈ 0.92. When the three pixel points are fused, the fused pixel value is calculated as (pixel value of pixel point 1 × its fusion weight + pixel value of pixel point 2 × 1 + pixel value of pixel point 3 × its fusion weight)/(sum of the fusion weights), where the fusion weight of pixel point 2 is 1 because image b is the reference image. According to this formula, the fused pixel value of the three pixel points is (30×0.96 + 20×1 + 40×0.92)/(0.96 + 1 + 0.92) ≈ 29.72.
It should be noted that the above fusion calculation formula for pixel values is only an example; in practical application, pixels may be fused in other ways, such as multiplying each pixel value by its fusion weight and summing the products without normalization, and the specific fusion manner based on the fusion weights can be flexibly set according to practical requirements. Of course, in practical application, other fusion manners, such as taking the pixel mean value as the fused pixel value, can also be adopted. By adopting the weight-based fusion manner, the effective pixel points in each frame of image can be effectively extracted, so that a better denoising effect is achieved and a well-denoised fusion image is obtained.
According to the above fusion method, the fusion can be performed for each corresponding pixel, so that the pixel value of each fused pixel can be obtained and used as the pixel value of each pixel in the fused image, and the image d in fig. 3 is the fused image after fusion.
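The weight-based fusion walked through above can be sketched in a few lines. This is a sketch assuming the frames are already registered to the reference frame; with exact (unrounded) weights the worked example gives ≈29.73, while the text's 29.72 comes from first rounding the weights to 0.96 and 0.92:

```python
import numpy as np

def fuse_frames(reference, others):
    """Weighted fusion: each non-reference pixel gets weight
    (255 - |difference from reference|) / 255, the reference pixel gets
    weight 1, and the result is the weight-normalised average."""
    reference = np.asarray(reference, dtype=np.float64)
    total = reference.copy()                 # reference contributes with weight 1
    weight_sum = np.ones_like(reference)
    for frame in others:
        frame = np.asarray(frame, dtype=np.float64)
        w = (255.0 - np.abs(frame - reference)) / 255.0
        total += w * frame
        weight_sum += w
    return total / weight_sum

# the document's worked example: pixel values 30, 20 (reference) and 40
fused = fuse_frames(np.array([20.0]), [np.array([30.0]), np.array([40.0])])
```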
The fusion of the images at each exposure compensation value may adopt the above manner; that is, the fusion of the images at the reference exposure compensation value to obtain the reference image may also adopt this manner, which is not described in detail here.
In the implementation process, the pixel fusion is performed based on the fusion weight, so that the highlight pixels in the image can be effectively extracted, and the highlight pixels and the low-light pixels are well fused.
In the above process of fusing the at least two frames of images under each contrast exposure compensation value, the motion mask image between the at least two frames of images acquired at each contrast exposure compensation value may also be obtained. The implementation process is as follows: acquiring the pixel difference values of corresponding pixel points between each other frame image among the at least two frames of images acquired at each contrast exposure compensation value and the reference image, taking the pixel difference values of the corresponding pixel points as the initial motion mask image between that other frame image and the reference image to obtain a plurality of initial motion mask images, and then superimposing the plurality of initial motion mask images to obtain the motion mask image between the at least two frames of images acquired at each contrast exposure compensation value.
For example, taking the three frames of images shown in fig. 3 as an example, the pixel difference value of each corresponding pixel point between image a and image b (the reference image) and the pixel difference value of each corresponding pixel point between image c and image b are first obtained. If the pixel difference value between pixel point 1 and pixel point 2 is 10, and the pixel difference value between pixel point 3 and pixel point 2 is 20, then the pixel value of the corresponding pixel point in the initial motion mask image between image a and image b is 10, and the pixel value of the corresponding pixel point in the initial motion mask image between image c and image b is 20; at this time, two initial motion mask images are obtained. As shown in fig. 4, the two initial motion mask images are then superimposed, where superimposition can be understood as adding the pixel values of corresponding pixel points: the pixel value 10 and the pixel value 20 add up to 30, which is taken as the pixel value of the corresponding pixel point in the motion mask image between the original three frames of images. Each corresponding pixel point can be processed in the same manner, and the pixel value of each pixel point in the finally obtained motion mask image is shown in fig. 4.
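The initial-mask and superposition steps above can be sketched as follows, assuming the frames are aligned arrays (illustrative only):

```python
import numpy as np

def motion_mask(frames, ref_index):
    """Motion mask for one contrast exposure compensation value: the
    absolute pixel difference of each non-reference frame against the
    reference frame (one initial mask per frame), summed together."""
    ref = np.asarray(frames[ref_index], dtype=np.int64)
    mask = np.zeros_like(ref)
    for i, frame in enumerate(frames):
        if i == ref_index:
            continue
        # one initial motion mask per non-reference frame, superimposed
        mask += np.abs(np.asarray(frame, dtype=np.int64) - ref)
    return mask
```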
In the implementation process, the corresponding motion mask image is determined by acquiring the pixel difference value between at least two frames acquired by each contrast exposure compensation value, so that the motion area in the image can be effectively determined.
In the above embodiment, in order to finally fuse the images, the fused image under each contrast exposure compensation value is registered with the reference image, where the registration method may include that the image brightness of the fused image under each contrast exposure compensation value is adjusted to the image brightness of the reference image, then the registration reference data for registering the fused image under each contrast exposure compensation value after the image brightness adjustment with the reference image is determined, and then the registration reference data is used to register the fused image under each contrast exposure compensation value, so as to obtain the registration image corresponding to each contrast exposure compensation value.
The brightness of the image collected under each exposure compensation value is inconsistent, so that during registration, the brightness can be adjusted to the image brightness of the reference image, the specific method can be to process the pixel values of the pixel points, and the pixel value difference between the adjacent exposure compensation values is generally about 2 times, for example, the pixel value of each pixel point in the fused image corresponding to EV+1 can be divided by 2, so that the overall brightness of the fused image corresponding to EV+1 is close to the brightness of the reference image corresponding to EV 0. Or a histogram matching algorithm may be further adopted to process the fusion image corresponding to ev+1, and the manner of processing by using the histogram matching algorithm may refer to the related implementation process in the prior art, which is not described in detail herein.
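The simple brightness-matching rule above (pixel values between adjacent exposure compensation values differ by roughly a factor of two) can be sketched as:

```python
import numpy as np

def match_brightness(img, ev_delta):
    """Bring an image captured at EV0 + ev_delta toward the brightness of
    the EV0 reference by dividing by 2**ev_delta, clipped to 8-bit range.
    As noted above, a histogram-matching algorithm could be used instead."""
    scaled = np.asarray(img, dtype=np.float64) / (2.0 ** ev_delta)
    return np.clip(scaled, 0.0, 255.0)
```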
It should be understood that image registration is actually to find a spatial transformation to map one image to another image, so that pixels corresponding to the same position in space in two images are in one-to-one correspondence, thereby achieving the purpose of information fusion, that is, registration is to find a mapping relationship between a fused image and a reference image, and registration reference data is a mapping relationship, so that two images can be registered based on the mapping relationship.
There are a plurality of image registration methods, such as region-based matching methods, image feature-based registration methods and model-based matching methods, and the registration reference data obtained may differ for different registration methods. For example, an image feature-based registration method extracts features that remain unchanged in the image, such as edge points, line features, surface features and matrix features, and these features can be used as the registration reference data for registering the two images; registration of the two images can then be performed based on the registration reference data. The specific registration process may refer to existing implementations and is not described in detail here.
The above registration process may be exemplified as follows: when the N contrast exposure compensation values include EV+2, EV+1, EV-1 and EV-2, the fusion image corresponding to EV+2 is registered with the reference image corresponding to EV0 based on registration reference data 1 (from the above description, registration reference data 1 means the characteristic data between the fusion image corresponding to EV+2 and the reference image corresponding to EV0), the fusion image corresponding to EV+1 is registered with the reference image corresponding to EV0 based on registration reference data 2, the fusion image corresponding to EV-1 is registered with the reference image corresponding to EV0 based on registration reference data 3, and the fusion image corresponding to EV-2 is registered with the reference image corresponding to EV0 based on registration reference data 4.
In other embodiments, because of the large differences between the images acquired at the respective exposure compensation values, if the fusion image under a much larger or smaller contrast exposure compensation value is directly registered to the reference image, a large registration error may be caused, and registration is not easy. Therefore, when the fusion image under each contrast exposure compensation value is registered with the reference image, the fusion images can be registered sequentially; for example, the fusion image under a larger contrast exposure compensation value is first registered to the fusion image under a smaller contrast exposure compensation value, and the result is then registered to the reference image. The specific implementation process can be as follows:
According to the size order of the N contrast exposure compensation values, the reference image is taken as the i-th image in the sequence, with j fusion images under contrast exposure compensation values before it and k fusion images under contrast exposure compensation values after it. For the j images before the reference image, the 1st fusion image is first registered to the 2nd fusion image to obtain the 1st intermediate registration data and the 1st registered image; then, taking x from 1 to j-2 in turn, the x-th registered image is registered to the (x+2)-th fusion image to obtain the (x+1)-th intermediate registration data and the (x+1)-th registered image; finally, the (j-1)-th registered image is registered to the reference image based on the (j-1)-th intermediate registration data. The k fusion images after the reference image are processed symmetrically: the k-th fusion image is first registered to the (k-1)-th fusion image to obtain the corresponding intermediate registration data and registered image, each registered result is then registered to the next fusion image toward the reference image in turn, and the last registered image is registered to the reference image based on the corresponding intermediate registration data.
For example, if the N contrast exposure compensation values include EV+2, EV+1, EV-1 and EV-2, and the reference exposure compensation value is EV0, the fusion image corresponding to EV+2 is first registered to the fusion image corresponding to EV+1 based on registration reference data 1, at which time the registered image corresponding to EV+2 is obtained; that registered image is then registered to the reference image corresponding to EV0 based on registration reference data 2, the same data used to register the fusion image corresponding to EV+1 to the reference image. Similarly, the fusion image corresponding to EV-1 is registered to the reference image corresponding to EV0 based on registration reference data 3; the fusion image corresponding to EV-2 is first registered to the fusion image corresponding to EV-1 based on registration reference data 4, and the resulting registered image corresponding to EV-2 is then registered to the reference image corresponding to EV0 based on registration reference data 3.
The fused images are sequentially registered to the reference image according to the order of the exposure compensation values, so that the difference between the fused images corresponding to the exposure compensation values is reduced, and the registration accuracy is higher.
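The neighbour-chain ordering can be modelled as follows. `register_pair` and `apply_data` are hypothetical stand-ins for a real registration routine; the sketch only captures the order in which the pairwise registration data are computed and reused:

```python
def chain_register(fused_images, ref_index, register_pair, apply_data):
    """Register each fused image to the reference by stepping through its
    neighbours, reusing the pairwise registration data along the chain."""
    n = len(fused_images)
    toward_ref = {}
    # pairwise registration data between neighbours, pointing at the reference
    for i in range(ref_index):
        toward_ref[i] = register_pair(fused_images[i], fused_images[i + 1])
    for i in range(n - 1, ref_index, -1):
        toward_ref[i] = register_pair(fused_images[i], fused_images[i - 1])

    registered = {}
    for i in range(n):
        if i == ref_index:
            continue
        img = fused_images[i]
        step = 1 if i < ref_index else -1
        for j in range(i, ref_index, step):
            img = apply_data(img, toward_ref[j])   # one neighbour at a time
        registered[i] = img
    return registered
```

With translation-only registration data (offset = destination - source), every chained image lands exactly on the reference, which is a convenient way to check the ordering.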
In the implementation process, the fusion image corresponding to each contrast exposure compensation value is registered to the reference image, so that the subsequent ghost detection and image fusion can be facilitated.
Because the moving mask image corresponding to each contrast exposure compensation value cannot be directly registered with the reference image, registration reference data obtained in the registration process of the fusion image and the reference image can be multiplexed, and the moving mask image corresponding to each contrast exposure compensation value is registered based on the registration reference data, so that the registration moving mask image corresponding to each contrast exposure compensation value is obtained.
It can be understood that, when the fusion image corresponding to the contrast exposure compensation value is aligned with the reference image, the obtained registration reference data can be understood as a mapping relationship between the two images, so that the moving mask image can be registered to the reference image based on the same mapping relationship, and thus, the alignment of each corresponding pixel point in the moving mask image and the reference image can be realized, and each corresponding pixel point in the moving mask image and the fusion image can be aligned, so that the ghost region in the registration image can be conveniently found by combining the registration moving mask image.
In the implementation process, the moving mask image is registered based on the registration reference data, so that the detection can be conveniently carried out by effectively combining the pixel values of the corresponding positions in the registration moving mask image during ghost detection.
After the images are registered in the above embodiment, the ghost areas in the registered images are detected, and after the ghost areas in the registered images are detected, the images are fused into the target image. As an implementation manner, when fusing the reference image and the N registered images, two images may be fused first, the new fused image formed by that fusion is then fused with a not-yet-fused image, and so on until the target image is obtained. The specific fusion process is as follows:
Selecting two images to be fused from among the N+1 images formed by the reference image and the N registered images, and the intermediate fusion images formed by fusing two of those N+1 images; fusing one image to be fused with the other image to be fused according to the highlight area in the ghost area and the non-ghost area of the one image to be fused; and repeating until all of the N+1 images have been fused once, so as to obtain the target image.
For example, the n+1 images include 5 images, i.e., image 1, image 2, image 3, image 4, and image 5, where the image to be fused may refer to one image of the 5 images, an intermediate fused image obtained by fusing two images of the 5 images, or an image obtained by fusing two images of the 5 images and the remaining unfused image of the 5 images. For example, if the image 1 and the image 2 are selected to be fused, the image obtained after the image 1 and the image 2 are fused is called an intermediate fusion image, at this time, the image 1 and the image 2 are taken as images to be fused, then if the intermediate fusion image is continuously fused with the image 3, at this time, the intermediate fusion image and the image 3 are also called images to be fused, if the image obtained after the intermediate fusion image is fused with the image 3 is continuously fused for the next time, at this time, the image obtained after the intermediate fusion image and the image 3 are also called intermediate fusion image, and when the image participates in the next fusion, the intermediate fusion image is taken as the images to be fused. That is, the image to be fused may refer to an image in n+1 images, or may refer to an intermediate fused image, where the intermediate fused image may be understood to refer to an image obtained by fusing two images in n+1 images, or may refer to an image obtained by fusing two images in n+1 images with another image to be fused.
The above 5 images include a registration image and a reference image, and in the step S140, the ghost area in each registration image is obtained, and in the case of fusion, if the ghost area in the reference image needs to be obtained, the ghost area in the reference image may be obtained according to the same detection method of the ghost area, and the detailed implementation process of the ghost area in the detected image is described in detail in the following embodiments, which will not be described in detail here.
After the ghost areas in each image are determined, the highlight areas can be determined from the ghost areas, where a highlight area refers to an area whose pixel values are larger than a threshold value, i.e. an area of the image with higher brightness. Because the exposure compensation value corresponding to each image is different, i.e. the brightness differs, in order to obtain an image with a good fusion effect, the pixels of darker areas in the images with large exposure compensation values and the pixels of brighter areas in the images with small exposure compensation values should be preserved as much as possible during fusion. Therefore, at the time of fusion, the two images can be fused based on the highlight area in the ghost area and the non-ghost area of one of the images.
Taking the fusion of the image 1 and the image 2 as an example, in the fusion, one image may be selected as a substrate image, for example, if the image 1 is used as a substrate image, a highlight region and a non-ghost region in a ghost region in the image 1 may be obtained, and other regions except for a region corresponding to the highlight region in the image 1 in the image 2 are obtained, and fusion is performed based on pixels of these regions in the two images.
When one of the images to be fused is an intermediate fusion image, its ghost area is the area corresponding to the ghost area of the substrate image among the two images that participated in the fusion, and its highlight area is the area corresponding to the highlight area in that substrate image. For example, when image 1 and image 2 are fused, image 1 is selected as the substrate image and image 2 is the non-substrate image; the substrate image can be understood as the reference for the fusion, that is, when image 1 and image 2 are fused, the fusion is performed based on the highlight area of image 1, its non-ghost area, and the other areas in image 2. When intermediate fusion image a and image 3 are then fused to obtain intermediate fusion image b, if intermediate fusion image a is selected as the substrate image, the ghost area in intermediate fusion image b is the area corresponding to the ghost area in intermediate fusion image a, and the highlight area is the area corresponding to the highlight area in intermediate fusion image a.
In the implementation process, the high-light areas in the ghost areas in the images are fused, so that the pixels of the high-light areas in the images with different brightness can be fused better, and the definition of the fused images is higher.
As an embodiment, the manner of fusing the two images to be fused according to the highlight area in the ghost area and the non-ghost area of one image to be fused may be:
Taking the pixels of the highlight region in the one image to be fused, and the pixels of the region in the other image to be fused that corresponds to the non-highlight region within the ghost region of the one image to be fused, as the pixels of the corresponding regions in the image obtained after the two images to be fused are fused; and fusing the pixels of the regions other than the ghost region in the one image to be fused with the pixels of the corresponding regions in the other image to be fused.
For example, when two images to be fused (such as image 1 and image 2) are fused, if image 1 is selected as the substrate image, image 2 is the non-substrate image. During the fusion, the pixels of the highlight region in image 1 are retained, that is, the pixels of the corresponding region in the fused image are the pixels of the highlight region in image 1. Likewise, the pixels of the region in image 2 corresponding to the non-highlight region within the ghost region of image 1 are retained, that is, the pixels of that region in the fused image are taken from image 2. For the remaining non-ghost areas, the pixels of the non-ghost areas in image 1 are fused with the pixels of the corresponding areas in image 2, so that a fused image, namely an intermediate fused image, is obtained. The intermediate fused image may then continue to be fused with other unfused images, such as image 3, to obtain an updated intermediate fused image; the process is similar to that of images 1 and 2 described above and will not be repeated here.
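The region-by-region fusion just described can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: the function name, the boolean-mask representation of the regions, and the weight rule used for the non-ghost blend are all assumptions made for the example.

```python
import numpy as np

def fuse_pair(base, other, ghost_mask, highlight_mask):
    """Fuse two aligned images region by region (illustrative sketch).

    base, other    : float arrays of the same shape (substrate and
                     non-substrate image)
    ghost_mask     : boolean mask of the ghost region in the base image
    highlight_mask : boolean mask of the highlight part of that ghost region
    """
    fused = np.empty_like(base)
    non_highlight_ghost = ghost_mask & ~highlight_mask
    non_ghost = ~ghost_mask

    # Highlight region: keep the substrate image's pixels.
    fused[highlight_mask] = base[highlight_mask]
    # Non-highlight part of the ghost region: take the other image's pixels.
    fused[non_highlight_ghost] = other[non_highlight_ghost]
    # Non-ghost region: blend using weights derived from the substrate
    # pixel value (one possible weighting rule; the text allows others).
    w_base = base[non_ghost] / 255.0
    fused[non_ghost] = base[non_ghost] * w_base + other[non_ghost] * (1.0 - w_base)
    return fused
```

In use, `ghost_mask` and `highlight_mask` would come from the ghost-detection and highlight-determination steps described later in this section.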
In this way, two images can be selected at random for fusion to obtain an intermediate fused image, the intermediate fused image is then fused with the other images in turn, and the target image is finally obtained. Of course, in order to obtain a better fusion effect, the two images to be fused may be two adjacent images, such as image 2 and image 3, or image 3 and image 4, and the resulting intermediate fused image may then be fused with its adjacent images. For example, image 2 and image 3 are fused to obtain an intermediate fused image, which is then fused with image 1 or image 4.
In order to fuse images across exposure compensation values and obtain an image with a better fusion effect, the fusion criterion may be to retain, as far as possible, the pixels of the bright areas in the image with the small exposure compensation value and the pixels of the dark areas in the image with the large exposure compensation value. Accordingly, the exposure compensation value corresponding to the ghost area in the one image to be fused is smaller than the exposure compensation value corresponding to the ghost area in the other image to be fused.
It can be understood that, when the image to be fused is one of the N+1 images, the ghost area is the ghost area of that image itself. When the image to be fused is an intermediate fused image, such as image a obtained by fusing image 1 and image 2, the ghost area of image a is the area corresponding to the ghost area of image 1 (the substrate image), and the exposure compensation value of image a is the exposure compensation value corresponding to image 1; for example, if the exposure compensation value of image 1 is EV1, the exposure compensation value corresponding to image a is EV1.
In the implementation process, the pixels in the highlight areas in the images with different brightness can be well fused through the fusion method, so that the definition of the fused image is higher.
The pixels of the regions other than the ghost region in the one image to be fused may be fused with the pixels of the corresponding regions in the other image to be fused as follows. One of the two images to be fused is selected as the substrate image and the other as the non-substrate image. The pixel value of each pixel in the regions other than the ghost region in the substrate image is then determined. Based on these pixel values, the fusion weight of each pixel in those regions of the substrate image and the fusion weight of each corresponding pixel in the corresponding regions of the non-substrate image are determined. Finally, the corresponding pixels of the two images are fused according to these fusion weights.
For example, one image to be fused is image 1 and the other is image 2. During fusion, one of the two images is selected as the substrate image. Any one of them may be selected, or the substrate image may be selected according to a certain rule; for example, throughout the fusion of all the images, the image with the smaller exposure compensation value may always be selected as the substrate image, or the image with the larger exposure compensation value may always be selected. For instance, image 1 and image 2 are fused to obtain image a, with image 1 selected as the substrate image (the exposure compensation value of image 1 being smaller than that of image 2); if image a is then fused with image 3, image a is selected as the substrate image (because the exposure compensation value of image a is smaller than that of image 3). After the substrate image is selected, the other image serves as the non-substrate image.
After the substrate image and the non-substrate image are determined, fusion can be performed according to the fusion weight of each pixel, and the fusion weight can be determined according to the pixel value of each pixel in the substrate image. For example, suppose two corresponding pixels in image 1 (here selected as the substrate image) and image 2 are pixel 1 (in image 1) and pixel 2 (in image 2). Since image 2 is brighter than image 1, and the pixels of the brighter image should be retained as much as possible when fusing the non-ghost areas, the fusion weight of the corresponding region of image 2 should be larger. If the pixel value of pixel 1 is 20 and the pixel value of pixel 2 is 30, the fusion weights may be calculated as: fusion weight of pixel 1 = 20/255 ≈ 0.08, fusion weight of pixel 2 = (255-20)/255 ≈ 0.92, so the pixel value of the corresponding pixel in the fused image is 20×0.08+30×0.92 = 29.2. By performing this calculation for every pixel in the non-ghost areas, the fused pixels of the non-ghost areas can be obtained.
In the above exemplary fusion process, if image 2 is instead selected as the substrate image, the fusion weights may be calculated as: fusion weight corresponding to pixel 1 = 30/255 ≈ 0.12, fusion weight corresponding to pixel 2 = (255-30)/255 ≈ 0.88, and the pixel value of the corresponding pixel in the fused image is 20×0.12+30×0.88 = 28.8. That is, when different images are selected as the substrate image, the corresponding fusion weights are calculated in different manners. Of course, the criterion for determining the fusion weights may simply be that the dark regions in the image with the large exposure compensation value receive large fusion weights and the dark regions in the image with the small exposure compensation value receive small fusion weights; according to this criterion, the fusion weights may also be customized. For example, the fusion weight may be set to 0.8 for the pixels of the region in image 2 corresponding to the non-ghost region of image 1, and to 0.1 for the pixels of the non-ghost region in image 1.
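The two worked weight examples above follow one pattern: pixel 1 is weighted by the substrate pixel value divided by 255, and pixel 2 by the complement. A small sketch reproducing both numbers (the function name and signature are illustrative, not from the patent):

```python
def fused_pixel(v1, v2, substrate_value, max_value=255.0):
    """Blend two corresponding non-ghost pixels.

    v1, v2          : pixel values from image 1 and image 2
    substrate_value : pixel value of the chosen substrate image at this point
    Pixel 1 is weighted by substrate_value/max_value, pixel 2 by the
    complement, matching the worked examples in the text.
    """
    w1 = substrate_value / max_value
    w2 = (max_value - substrate_value) / max_value
    return v1 * w1 + v2 * w2

# Image 1 (pixel value 20) as substrate image:
print(round(fused_pixel(20, 30, substrate_value=20), 1))  # 29.2
# Image 2 (pixel value 30) as substrate image:
print(round(fused_pixel(20, 30, substrate_value=30), 1))  # 28.8
```

This illustrates why "different images selected as the substrate image" yield different fused values for the same pixel pair.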
Of course, fusion weights are not strictly required during fusion; when fusing pixels, the average of the pixel values of the corresponding pixels may be taken as the pixel value of the fused pixel, and other fusion manners may also be used.
If the intermediate fused image obtained from image 1 and image 2 above is fused with image 3, the fusion process is similar to that described above: if the intermediate fused image serves as the substrate image during fusion, the pixels of the highlight region in the intermediate fused image are retained, and the pixels of the other regions of the intermediate fused image are fused with the pixels of the corresponding regions of image 3 in the same manner as described above, which will not be detailed here.
In the implementation process, the pixel fusion is performed based on the fusion weight, so that the highlight pixels in the image can be effectively extracted, and the highlight pixels and the low-light pixels are well fused.
However, in order to obtain a better fusion effect, the fusion may be performed in a certain order. As an embodiment, the N+1 images may be sorted according to their corresponding exposure compensation values from large to small or from small to large, the image at position i in the sorted order being the i-th image. For example, if the registration images include four registration images corresponding to the exposure compensation values EV-2, EV-1, EV+1, and EV+2, and the reference image corresponds to EV0, the N+1 images sorted from small to large are image 1 (EV-2), image 2 (EV-1), image 3 (EV0), image 4 (EV+1), and image 5 (EV+2); sorted from large to small they are image 5 (EV+2), image 4 (EV+1), image 3 (EV0), image 2 (EV-1), and image 1 (EV-2).
When fusing, the two images selected as the images to be fused may be chosen as follows: the 1st image and the 2nd image are fused to form an intermediate fused image; then, taking i from 3 to N+1 in turn, the intermediate fused image formed by fusing all the images preceding the i-th image in the sorted order and the i-th image are selected as the images to be fused. The fusion order of this mode can be understood as: image 1 and image 2 are fused to obtain image a, image a and image 3 are fused to obtain image b, image b and image 4 are fused to obtain image c, and finally image c and image 5 are fused to obtain the target image; or conversely, image 5 and image 4 are fused to obtain image a, image a and image 3 are fused to obtain image b, image b and image 2 are fused to obtain image c, and image c and image 1 are fused to obtain the target image.
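This sequential order (always fuse the running intermediate with the next image in the sorted list) is simply a left fold over the sorted images. In the sketch below, `fuse` is a placeholder that only records the order; the real step would be the region- and weight-based fusion described above.

```python
from functools import reduce

def fuse(a, b):
    """Placeholder pairwise fusion step: records the fusion order as a string."""
    return f"fuse({a},{b})"

# Images sorted by exposure compensation value, e.g. EV-2 .. EV+2.
images = ["img1", "img2", "img3", "img4", "img5"]

# Left fold: ((((img1 ⊕ img2) ⊕ img3) ⊕ img4) ⊕ img5)
target = reduce(fuse, images)
print(target)  # fuse(fuse(fuse(fuse(img1,img2),img3),img4),img5)
```

Reversing `images` before the fold gives the converse order (image 5 with image 4 first).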
The image a, the image b and the image c are intermediate fusion images.
The above fusion process is described in detail below in the order of image 1 (EV-2), image 2 (EV-1), image 3 (EV0), image 4 (EV+1), and image 5 (EV+2). In the fusion process above these images are referred to as images to be fused; in the following description they are referred to by their own names.
For example, image 1 and image 2 are selected and fused (image 1 and image 2 here being the images to be fused of the above embodiment), with image 1 taken as the substrate image of the above embodiment. During fusion, the highlight region within the ghost region of image 1 is first determined. Then the pixels of the highlight region in image 1 are taken as the pixels of the corresponding region in the fused image a; the pixels of the region in image 2 corresponding to the non-highlight region within the ghost region of image 1 are taken as the pixels of the corresponding region in image a; and the pixels of the non-ghost region in image 1 are fused with the pixels of the corresponding region in image 2 according to the fusion weights described in the above embodiment.
In the above manner, the intermediate fused image (i.e., the above image a, which is also referred to as an image to be fused in the above embodiment) is then fused with image 3, the resulting intermediate fused image is fused with image 4 and then image 5, and the completely fused target image is finally obtained.
It can be understood that, if the N+1 images to be fused are sorted from large to small according to the exposure compensation value, the fusion process is: image 5 is fused with image 4 to obtain an intermediate fused image, the intermediate fused image is fused with image 3 to obtain another intermediate fused image, which is further fused with image 2 and then with image 1 to obtain the final fused target image. For the specific fusion process, reference may be made to the fusion process described above, which is not repeated here.
In the implementation process, the images are fused in sequence according to the magnitude sequence of the exposure compensation values, so that the images with different brightness can be fused better.
In another embodiment, the two images selected as the images to be fused during fusion may be chosen as follows: taking i as each odd number from 1 to N+1 in turn, the adjacent i-th image and (i+1)-th image among the N+1 images are fused in the sorted order to form intermediate fused images; then two adjacent intermediate fused images are selected in turn, in the order in which they were obtained, as the images to be fused.
For example, the fusion order can be understood as follows: image 1 and image 2 are fused to obtain image a, image 3 and image 4 are fused to obtain image b, image a and image b are fused to obtain image c, and finally image c and image 5 are fused to obtain the target image. Or conversely, image 5 is first fused with image 4 to obtain image a, image 3 is fused with image 2 to obtain image b, image a is fused with image b to obtain image c, and finally image c is fused with image 1 to obtain the target image.
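This pairwise order fuses adjacent pairs first and then fuses the resulting intermediates, carrying an unpaired last image into the final step. A small sketch of the ordering only (the `fuse` placeholder and `tree_fuse` helper are illustrative names, not from the patent):

```python
def fuse(a, b):
    """Placeholder pairwise fusion step: records the order as a string."""
    return f"({a}+{b})"

def tree_fuse(images):
    """Repeatedly fuse adjacent pairs; with an odd count, the last image
    waits for the next round (illustrates the order, not the pixel math)."""
    while len(images) > 1:
        paired = [fuse(images[i], images[i + 1])
                  for i in range(0, len(images) - 1, 2)]
        if len(images) % 2:          # odd count: carry the last image over
            paired.append(images[-1])
        images = paired
    return images[0]

print(tree_fuse(["img1", "img2", "img3", "img4", "img5"]))
# (((img1+img2)+(img3+img4))+img5)
```

This reproduces the order in the example: a = 1+2, b = 3+4, c = a+b, target = c+5.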
The fusion process may refer to the corresponding process of the above embodiment, and will not be repeated here. It should be noted that, when the image a is fused with the image b, if the image a is selected as the substrate image, the highlight region of the image a may be a region corresponding to the highlight region of the image 1 (the image 1 is the substrate image when the image 1 is fused with the image 2), the non-highlight region is a region corresponding to the non-highlight region in the image 1, and the fusion of the two images may be performed according to the fusion method described in the above embodiment.
In the implementation process, adjacent images with different brightness are fused, so that brightness errors are reduced during fusion, and the fusion effect is better.
In addition, although the above fusion processes proceed in order of the exposure compensation values from small to large or from large to small, in practice the fusion may also proceed from the two sides toward the reference image, or from the reference image toward the two sides, with the reference image as the center. For example, EV-2 is fused with EV-1 to obtain an intermediate fused image, which is then fused with EV0; EV+2 is fused with EV+1 to obtain another intermediate fused image, which is also fused with EV0; and the two resulting intermediate fused images are finally fused to obtain the final target image. Alternatively, EV0 is fused with EV-1 to obtain an intermediate fused image, which is then fused with EV-2; EV0 is fused with EV+1 to obtain another intermediate fused image, which is then fused with EV+2; and the two resulting intermediate fused images are finally fused to obtain the final target image.
In addition, in the above fusion process, when the substrate image is selected, an image with a small exposure compensation value can be selected as the substrate image each time, or an image with a large exposure compensation value can be selected as the substrate image, so that in the fusion process, the substrate image can be selected according to a uniform standard, and images with different exposure compensation values can be fused better.
Through the method, the pixels of the highlight areas in the images with different brightness can be well fused, so that the definition of the fused image is higher.
In the above embodiments, since the pixel values of the pixels in the corresponding ghost area of the image to be fused may be 0 or may be greater than 1, in order to eliminate noise interference and accurately detect the ghost areas, the ghost areas in an image may be determined as follows: binarization processing is performed on the initial ghost area in the corresponding image to be fused to obtain a binarized ghost area, and the binarized ghost area is divided to obtain a plurality of non-adjacent ghost areas.
The binarization processing can be understood as processing the pixel values in the initial ghost area: for example, a pixel value in the initial ghost area that is greater than a preset threshold is set to 1, and a pixel value less than or equal to the preset threshold is set to 0. The initial ghost area in the image to be fused can then be divided into a plurality of non-adjacent ghost areas based on the areas whose pixel value is 1; the division may adopt an existing image segmentation algorithm, which is not described in detail here.
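The binarize-then-divide step can be sketched as thresholding followed by connected-component labeling. The patent leaves the segmentation algorithm open; the flood-fill labeling below is one minimal stand-in, and the function name, threshold default, and 4-connectivity are assumptions of this example.

```python
import numpy as np
from collections import deque

def split_ghost_regions(initial_ghost, threshold=0.5):
    """Binarize an initial ghost map and split it into non-adjacent regions.

    Returns (binary map, label map, number of regions). Uses a simple
    4-connected flood fill as a placeholder segmentation algorithm.
    """
    binary = (initial_ghost > threshold).astype(np.uint8)
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not labels[sy, sx]:
                count += 1                      # new non-adjacent region
                labels[sy, sx] = count
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return binary, labels, count
```

In practice a library routine (e.g. a standard connected-components implementation) would replace the hand-rolled flood fill.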
In the implementation process, the binarization processing is carried out on the ghost area, so that noise in the ghost area can be effectively filtered.
In one embodiment, when the image to be fused is a registration image, the initial ghost area in the registration image may be acquired as follows: the initial ghost area in the registration image under each contrast exposure compensation value is determined based on the pixel deviation between the registration image under each contrast exposure compensation value and the reference image, and on the pixel values in the registration motion mask image corresponding to each contrast exposure compensation value.
Specifically, the pixel difference of each pair of corresponding pixels between the registration image under each contrast exposure compensation value and the reference image is determined. It is then judged whether the pixel difference of the corresponding pixels is greater than a first threshold, and whether the pixel value of the corresponding pixel in the registration motion mask image corresponding to that contrast exposure compensation value is greater than a second threshold. If the pixel difference is greater than the first threshold and the pixel value of the corresponding pixel in the registration motion mask image is greater than the second threshold, the position of the corresponding pixel in the registration image under that contrast exposure compensation value is determined to belong to the initial ghost area.
For example, take the contrast exposure compensation value EV+1 and the reference exposure compensation value EV0. Since the pixels of the registration image corresponding to EV+1, the reference image, and the registration motion mask image corresponding to EV+1 are already aligned by the above registration process, consider three aligned pixels: pixel 1 in the registration image corresponding to EV+1, pixel 2 in the reference image, and pixel 3 in the registration motion mask image corresponding to EV+1. If the pixel value of pixel 1 is 30, the pixel value of pixel 2 is 45, and the pixel value of pixel 3 is 22, the pixel difference between pixel 1 and pixel 2 is 15. With a first threshold of 10 and a second threshold of 12, the comparison finds that the pixel difference between pixel 1 and pixel 2 is greater than the first threshold and the pixel value of pixel 3 is greater than the second threshold, so the position of pixel 1 in the registration image corresponding to EV+1 is determined to belong to the initial ghost area. By detecting the other pixels in the same way, it can be determined which pixels in the registration image are ghost points, i.e., which areas form the initial ghost area.
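The two-threshold test above is a pixelwise AND of two conditions, and vectorizes directly. A minimal NumPy sketch (the function name and the default thresholds 10 and 12, taken from the worked example, are illustrative):

```python
import numpy as np

def initial_ghost_mask(registered, reference, motion_mask, t1=10, t2=12):
    """A pixel belongs to the initial ghost area when BOTH conditions hold:
    |registered - reference| > t1  AND  registration-motion-mask value > t2."""
    diff = np.abs(registered.astype(int) - reference.astype(int))
    return (diff > t1) & (motion_mask > t2)

# Worked example from the text: pixel values 30 (registration image),
# 45 (reference image), 22 (registration motion mask); thresholds 10 and 12.
mask = initial_ghost_mask(np.array([30]), np.array([45]), np.array([22]))
print(bool(mask[0]))  # True
```

Applied to whole image arrays, the result is the initial ghost mask that the binarization and division steps then refine.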
It should be noted that, when the pixel difference value is less than or equal to the first threshold value and/or the pixel value of the corresponding pixel point in the registration motion mask image is less than or equal to the second threshold value, it is determined that the position where the corresponding pixel point in the registration image is located is not the initial ghost area.
The specific values of the first threshold and the second threshold can be flexibly set according to actual requirements, and in some cases, the values of the first threshold and the second threshold can be the same or different.
According to the above manner, each registration image may be compared with the reference image by pixels, and the corresponding registration motion mask image is combined to determine the initial ghost area in each registration image, and for the detection manner of the initial ghost area in the other registration images, the detection manner of the initial ghost area in the registration image corresponding to ev+1 may be referred to above, which is not described in detail herein for brevity of description.
In addition, if fusion needs to be performed based on the ghost area of the reference image during the fusion process, the initial ghost area of the reference image may be detected by judging whether the pixel value of the corresponding pixel in the reference image is greater than the first threshold, and whether the pixel value of the corresponding pixel in the motion mask image corresponding to the reference image is greater than the second threshold. If the pixel value of the corresponding pixel in the reference image is greater than the first threshold and the pixel value of the corresponding pixel in the motion mask image corresponding to the reference image is greater than the second threshold, the position of the corresponding pixel in the reference image is determined to belong to the initial ghost area.
In the implementation process, ghost detection in the registration image is performed in combination with the pixel values of the pixels in the registration motion mask image. This avoids the problem in the prior art that, when ghost detection relies only on the pixel differences between two original images, a large number of highlight areas are misdetected as ghost areas, and thus effectively improves the accuracy of ghost detection.
After the initial ghost areas in the respective images are determined, the above-described processing may be performed to divide them into a plurality of ghost areas. During image fusion, the highlight areas may then be determined from the plurality of ghost areas, and fusion may be performed based on the highlight areas, the non-highlight areas, and the non-ghost areas.
The highlight regions within the ghost regions can be determined as follows: the average pixel value of the pixels in each ghost region of the corresponding image to be fused is determined, and if the average pixel value of a ghost region is greater than a preset value, that ghost region in the corresponding image to be fused is determined to be a highlight region.
That is, after an image is divided into a plurality of ghost regions, the average pixel value of the pixels in each ghost region of the image to be fused is obtained; if the average pixel value is greater than the preset value, the corresponding ghost region is determined to be a highlight region. When the two images are fused, the pixels of the highlight regions are retained, the pixels of the non-highlight regions are not retained, and the pixels of the non-ghost regions are fused with the pixels of the corresponding regions in the other image.
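The per-region average test can be sketched on top of a label map such as the one produced by the division step. A minimal illustration; the function name and the preset value of 128 are assumptions of this example, not values from the patent:

```python
import numpy as np

def highlight_mask(image, labels, n_regions, preset=128.0):
    """Mark each labeled ghost region whose mean pixel value exceeds a
    preset value as a highlight region (preset chosen arbitrarily here).

    labels : integer label map, 0 = non-ghost, 1..n_regions = ghost regions
    """
    mask = np.zeros(labels.shape, dtype=bool)
    for r in range(1, n_regions + 1):
        region = labels == r
        if image[region].mean() > preset:   # average pixel value test
            mask |= region
    return mask
```

The resulting boolean mask is exactly the `highlight_mask` input expected by the region-fusion step described earlier.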
In the above implementation, by dividing the ghost areas, the highlight areas in the ghost areas can be more accurately found.
As an implementation manner, after the target image is obtained, in order to realize personalized adjustment of the target image, the target image may be adjusted by a contrast limited adaptive histogram equalization (CLAHE) algorithm, so as to adjust the brightness and contrast of the target image and finally obtain an image with good overall brightness, contrast, noise, and sharpness. For the specific implementation of adjusting the target image, reference may be made to the existing process of adjusting an image by the CLAHE algorithm, which is not described in detail here.
Referring to fig. 6, fig. 6 is a block diagram illustrating an image processing apparatus 200 according to an embodiment of the present application, where the apparatus 200 may be a module, a program segment, or a code on an electronic device. It should be understood that the apparatus 200 corresponds to the above embodiment of the method of fig. 2, and is capable of executing the steps involved in the embodiment of the method of fig. 2, and specific functions of the apparatus 200 may be referred to in the above description, and detailed descriptions thereof are omitted herein as appropriate to avoid redundancy.
Optionally, the apparatus 200 includes:
the image acquisition module 210 is configured to obtain a reference image under a reference exposure compensation value and fusion images under N contrast exposure compensation values, where the reference image is obtained by fusing at least two frames of images acquired under the reference exposure compensation value;
a motion mask image obtaining module 220, configured to determine a motion mask image between at least two frames of images acquired under each contrast exposure compensation value, where the pixel values of the pixels in the motion mask image are used to characterize the motion amplitudes of the corresponding pixels in the at least two frames of images acquired under each contrast exposure compensation value;
an image registration module 230, configured to register the fusion image under each contrast exposure compensation value and the motion mask image corresponding to each contrast exposure compensation value with the reference image, respectively, to obtain a registration image and a registration motion mask image corresponding to each contrast exposure compensation value;
a ghost area detection module 240, configured to determine a ghost area in the registration image under each contrast exposure compensation value based on the registration image under each contrast exposure compensation value, the reference image, and the registration motion mask image corresponding to each contrast exposure compensation value;
and an image fusion module 250, configured to fuse the registration images corresponding to the N contrast exposure compensation values and the reference image based on the ghost areas, to obtain a target image.
Optionally, the image fusion module 250 is configured to:
selecting two images, from among the N+1 images formed by the reference image and the N registration images and the intermediate fused images formed by fusing two of the N+1 images, as the images to be fused;
according to the highlight area and the non-ghost area in the ghost area of one image to be fused, fusing the one image to be fused with the other image to be fused, and obtaining the target image after all the images in the N+1 images participate in one-time fusion;
When the image to be fused is an intermediate fused image, the ghost area is an area corresponding to the ghost area serving as a substrate image in the two images participating in fusion, and the highlight area is an area corresponding to the highlight area in the substrate image.
Optionally, the image fusion module 250 is configured to:
Taking the pixel points of the highlight region in the one image to be fused and the pixel points of the region corresponding to the non-highlight region in the ghost region of the one image to be fused in the other image to be fused as the pixel points of the corresponding region in the image obtained after the two images to be fused are fused;
Fusing the pixel points of other areas except the ghost area in the one image to be fused with the pixel points of the areas corresponding to the other areas in the other image to be fused;
The exposure compensation value corresponding to the ghost area in the one image to be fused is smaller than that of the other image to be fused.
Optionally, the image fusion module 250 is configured to:
selecting one image of the image to be fused and the other image to be fused as a substrate image, and the other image as a non-substrate image;
determining the pixel size of each corresponding pixel point in other areas except for the ghost area in the substrate image;
Determining fusion weights of corresponding pixel points in the other areas in the substrate image and fusion weights of corresponding pixel points in areas corresponding to the other areas in the non-substrate image based on the pixel sizes;
And fusing the other areas in the substrate image with the corresponding pixels in the areas corresponding to the other areas in the non-substrate image based on the fusion weights of the corresponding pixels in the other areas in the substrate image and the fusion weights of the corresponding pixels in the areas corresponding to the other areas in the non-substrate image.
Optionally, the N+1 images are sorted according to the corresponding exposure compensation values from large to small or from small to large, the image at position i in the sorted order being the i-th image, and the image fusion module 250 is configured to:
fusing the 1 st image and the 2 nd image to form an intermediate fused image;
And sequentially taking i as 3 to N+1, and selecting an intermediate fusion image formed by fusing all images before the ith image in the sequence and the ith image as images to be fused.
Optionally, the N+1 images are sorted according to the corresponding exposure compensation values from large to small or from small to large, the image at position i in the sorted order being the i-th image, and the image fusion module 250 is configured to:
Sequentially taking i as an odd number from 1 to N+1, and sequentially fusing the i-th image and the i+1-th image which are adjacent in the N+1 images according to the sequencing order to form an intermediate fused image;
and sequentially selecting two adjacent intermediate fusion images as images to be fused according to the sequence of obtaining the intermediate fusion images.
Optionally, when the image to be fused is one of the n+1 images, determining a ghost area in the corresponding image to be fused by:
performing binarization processing on the initial ghost areas in the corresponding images to be fused to obtain binarized ghost areas;
And dividing the binarized ghost areas to obtain a plurality of non-adjacent ghost areas.
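A minimal sketch of the binarization and region-splitting step, assuming 4-connectivity and breadth-first labeling (the embodiment fixes neither the connectivity nor the splitting algorithm):

```python
import numpy as np
from collections import deque

def split_ghost_regions(diff_map, threshold):
    """Binarize an initial ghost map, then split it into disjoint
    connected regions (4-connectivity) via breadth-first labeling."""
    binary = diff_map > threshold
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                count += 1                     # start a new region
                queue = deque([(y, x)])
                labels[y, x] = count
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count
```

A production implementation would typically use a library routine such as `cv2.connectedComponents` or `scipy.ndimage.label` instead of the explicit BFS.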
Optionally, the highlight region among the ghost areas of the corresponding image to be fused is determined by:
determining an average pixel value of pixel points in each ghost area in the corresponding image to be fused;
And if the average pixel value of a ghost area is greater than a preset value, determining the corresponding ghost area in the corresponding image to be fused as a highlight region.
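The highlight-region test above can be sketched as follows; the threshold value of 200 is an assumed placeholder for the preset value, and the function names are hypothetical:

```python
import numpy as np

def find_highlight_regions(image, labels, n_regions, highlight_thresh=200):
    """Mark a ghost region as a highlight region when the mean pixel
    value of its pixels exceeds a preset threshold (assumed value here).
    `labels` assigns each ghost pixel a region id in 1..n_regions."""
    highlight = []
    for region_id in range(1, n_regions + 1):
        mask = labels == region_id
        if image[mask].mean() > highlight_thresh:
            highlight.append(region_id)
    return highlight
```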
Optionally, when the image to be fused is the registered image, determining an initial ghost area in the corresponding registered image by:
And determining an initial ghost area in the registered image under each contrast exposure compensation value based on the pixel deviation between the registered image and the reference image under each contrast exposure compensation value and the pixel value in the registered motion mask image corresponding to each contrast exposure compensation value.
Optionally, the determining the initial ghost area in the registered image under each contrast exposure compensation value based on the pixel deviation between the registered image and the reference image under each contrast exposure compensation value and the pixel value in the registration motion mask image corresponding to each contrast exposure compensation value includes:
determining a pixel value difference between corresponding pixel points of the registration image and the reference image under each contrast exposure compensation value;
Judging whether the pixel value difference of the corresponding pixel point is larger than a first threshold value, and whether the pixel value of the corresponding pixel point in the registration motion mask image corresponding to each contrast exposure compensation value is larger than a second threshold value;
And if the pixel value difference value of the corresponding pixel point is larger than the first threshold value and the pixel value of the corresponding pixel point in the registration motion mask image is larger than the second threshold value, determining the position of the corresponding pixel point in the registration image under the corresponding contrast exposure compensation value as an initial ghost area.
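The two-threshold test for the initial ghost area reduces to a per-pixel logical AND, sketched here with hypothetical names:

```python
import numpy as np

def initial_ghost_mask(registered, reference, motion_mask, t1, t2):
    """A pixel belongs to the initial ghost area when BOTH conditions
    hold: |registered - reference| > t1 (the pixel differs between the
    registered image and the reference image) AND the registered
    motion-mask value > t2 (the pixel was moving during capture)."""
    diff = np.abs(registered.astype(np.int32) - reference.astype(np.int32))
    return (diff > t1) & (motion_mask > t2)
```

Requiring both conditions is what suppresses false positives: a pure brightness difference (e.g. from imperfect exposure alignment) fails the motion test, and sensor noise inside a moving region fails the difference test.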
Optionally, the image registration module 230 is configured to adjust the image brightness of the fused image under each contrast exposure compensation value to the image brightness of the reference image, determine registration reference data for registering the brightness-adjusted fused image under each contrast exposure compensation value with the reference image, and register the fused image under each contrast exposure compensation value based on the registration reference data to obtain the registration image corresponding to each contrast exposure compensation value.
Optionally, the image registration module 230 is configured to register the motion mask image corresponding to each contrast exposure compensation value based on the registration reference data, so as to obtain a registered motion mask image corresponding to each contrast exposure compensation value.
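A toy illustration of reusing the same registration reference data for both the fused image and the motion mask. An integer translation estimated by brute-force search stands in for the real registration model (homography, optical flow, etc.), which the embodiment does not fix; the wrap-around `np.roll` warp and all names are assumptions:

```python
import numpy as np

def estimate_shift(moving, reference, max_shift=3):
    """Estimate an integer (dy, dx) translation by brute-force search
    for the shift minimizing the sum of absolute differences. This shift
    plays the role of the 'registration reference data'."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.abs(shifted.astype(int) - reference.astype(int)).sum()
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

def apply_shift(image, shift):
    """Apply previously estimated registration data to any image --
    including the motion mask -- so both are warped identically."""
    dy, dx = shift
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)
```

The point of the design is visible even in this toy: the transform is estimated once (on the brightness-adjusted fused image) and then applied unchanged to the motion mask, so the mask stays pixel-aligned with the registered image.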
Optionally, the apparatus 200 further includes:
and the contrast adjusting module is used for adjusting the target image by a contrast limited adaptive histogram equalization (CLAHE) algorithm.
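In practice CLAHE is usually applied via a library routine (e.g. OpenCV's `cv2.createCLAHE`). As a minimal illustration of the contrast-limiting idea only, the sketch below clips a global histogram before equalizing, omitting the per-tile processing and bilinear interpolation that full CLAHE adds; the clip limit value is an assumption:

```python
import numpy as np

def clipped_hist_equalize(image, clip_limit=40):
    """Global histogram equalization with histogram clipping: bins above
    clip_limit are clipped and the excess redistributed uniformly, which
    bounds how strongly the equalization can amplify local contrast."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // 256
    cdf = hist.cumsum()
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[image]
```

With OpenCV the equivalent adaptive version would be roughly `cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)`.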
Optionally, the image obtaining module 210 is configured to obtain at least two frames of images of the same scene acquired at each of the N contrast exposure compensation values and at least two frames of images acquired at the reference exposure compensation value, perform fusion processing on the at least two frames of images captured at each contrast exposure compensation value to obtain the fused images at the N contrast exposure compensation values, and perform fusion processing on the at least two frames of images acquired at the reference exposure compensation value to obtain the reference image at the reference exposure compensation value.
Optionally, the fusion processing is performed on at least two frames of images by:
determining the image with the highest definition among the at least two frames of images as a reference frame;
And fusing the other frame images of the at least two frames of images to the reference frame to obtain the corresponding fused image.
Optionally, the image obtaining module 210 is configured to determine a pixel difference value between corresponding pixel points of the reference frame and each other frame image, determine a fusion weight of each corresponding pixel point in each other frame image based on the pixel difference value, and fuse the corresponding pixel points of the reference frame and each other frame image based on the fusion weights of the corresponding pixel points in each other frame image, so as to obtain the corresponding fused image.
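The difference-weighted multi-frame fusion can be sketched as follows; the Gaussian falloff and the `sigma` value are assumptions, since the embodiment only specifies that the weight is derived from the pixel difference:

```python
import numpy as np

def fuse_frames_to_reference(reference, others, sigma=20.0):
    """Fuse each other frame into the reference frame with per-pixel
    weights that shrink as the difference from the reference grows, so
    pixels that changed between frames (motion, noise) contribute less."""
    ref = reference.astype(np.float64)
    acc = ref.copy()                 # reference frame has weight 1
    total_w = np.ones_like(ref)
    for frame in others:
        diff = np.abs(frame.astype(np.float64) - ref)
        w = np.exp(-(diff ** 2) / (2 * sigma ** 2))  # assumed falloff
        acc += w * frame
        total_w += w
    return (acc / total_w).astype(np.uint8)
```

The effect is that static pixels are averaged (suppressing noise) while pixels that moved stay close to the reference frame, which is why this per-burst fusion already reduces ghosting before the cross-exposure stage.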
Optionally, the at least two frames of images acquired at each contrast exposure compensation value include a reference frame, and the motion mask image acquisition module 220 is configured to:
acquire pixel difference values of corresponding pixel points between each other frame image of the at least two frames of images acquired at each contrast exposure compensation value and the reference frame;
take the pixel difference values of the corresponding pixel points as an initial motion mask image between each other frame image and the reference frame, to obtain a plurality of initial motion mask images;
And superpose the plurality of initial motion mask images to obtain the motion mask image between the at least two frames of images acquired at each contrast exposure compensation value.
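The motion mask construction can be sketched as follows, assuming "superposing" means a saturating sum of the per-frame absolute-difference maps (the embodiment does not specify the superposition operator):

```python
import numpy as np

def motion_mask(frames, reference_index=0):
    """Build a motion mask for a burst: take per-pixel absolute
    differences between each non-reference frame and the reference
    frame (the initial masks), then superpose them by summation with
    saturation at 255, so larger values mean larger motion amplitude."""
    ref = frames[reference_index].astype(np.int32)
    mask = np.zeros_like(ref)
    for idx, frame in enumerate(frames):
        if idx == reference_index:
            continue
        mask += np.abs(frame.astype(np.int32) - ref)
    return np.clip(mask, 0, 255).astype(np.uint8)
```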
Optionally, the number of images acquired at each contrast exposure compensation value and the number of images acquired at the reference exposure compensation value are determined according to the sensitivities corresponding to the different exposure compensation values.
It should be noted that, as a person skilled in the art can clearly understand, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
An embodiment of the application provides a readable storage medium storing a computer program which, when executed by a processor, performs the method process performed by the electronic device in the method embodiment shown in fig. 2.
The embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, are capable of performing the method provided by the above method embodiments, the method comprising, for example: obtaining a reference image under a reference exposure compensation value and fused images under N contrast exposure compensation values, wherein the reference image is obtained by fusing at least two frames of images acquired under the reference exposure compensation value, and each fused image is obtained by fusing at least two frames of images acquired under the corresponding contrast exposure compensation value; determining a motion mask image between the at least two frames of images acquired under each contrast exposure compensation value, wherein the pixel values of pixel points in the motion mask image are used for characterizing the motion amplitude of the corresponding pixel points in the at least two frames of images acquired under each contrast exposure compensation value; registering the fused image under each contrast exposure compensation value and the motion mask image corresponding to each contrast exposure compensation value with the reference image respectively, to obtain a registration image and a registration motion mask image corresponding to each contrast exposure compensation value; determining a ghost area in the registration image under each contrast exposure compensation value based on the registration image under each contrast exposure compensation value, the reference image and the registration motion mask image corresponding to each contrast exposure compensation value; and fusing the registration images corresponding to the N contrast exposure compensation values with the reference image based on the ghost areas to obtain a target image.
In summary, the embodiments of the present application provide an image processing method, an apparatus, an electronic device, and a readable storage medium, by acquiring a moving mask image between at least two frames of images acquired by each contrast exposure compensation value, and then registering the moving mask image, so that a ghost area in a registered image under each contrast exposure compensation value can be accurately detected in combination with the registered moving mask image, and the accuracy of detecting the ghost area is improved, so that when an image fusion is performed based on the ghost area, the ghost can be well eliminated, and an image with higher definition is obtained.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices or units, and may be in electrical, mechanical or other form.
Further, the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, functional modules in various embodiments of the present application may be integrated together to form a single portion, or each module may exist alone, or two or more modules may be integrated to form a single portion.
In this document, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.