
CN115589451A - Shooting method, device, electronic device and readable storage medium - Google Patents


Info

Publication number
CN115589451A
CN115589451A (application CN202211228431.4A; granted as CN115589451B)
Authority
CN
China
Prior art keywords
image
images
pixel
area
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211228431.4A
Other languages
Chinese (zh)
Other versions
CN115589451B (en)
Inventor
寇飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211228431.4A priority Critical patent/CN115589451B/en
Publication of CN115589451A publication Critical patent/CN115589451A/en
Application granted granted Critical
Publication of CN115589451B publication Critical patent/CN115589451B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region
    • H04N 5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H04N 5/211 Ghost signal cancellation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting device, an electronic device, and a readable storage medium, belonging to the field of image technology. The method includes: shooting a first image and N second images, where each pixel in the first image includes at least two sub-pixels, the at least two sub-pixels include at least two of a short-exposure sub-pixel, a medium-exposure sub-pixel, and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2; and denoising the first image according to the N second images to obtain a target image, where the exposure parameters of the at least two sub-pixels are related to the exposure parameters of the N second images.

Description

Shooting method, shooting device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of communication, and particularly relates to a shooting method, a shooting device, electronic equipment and a readable storage medium.
Background
With the rapid development of science and technology, electronic devices provide increasingly diverse functions, bringing convenience to people's daily lives. In particular, a user can capture a high dynamic range (HDR) image with an electronic device using HDR imaging technology, recording the small moments of life.
In the related art, a photographed scene must be captured multiple times at different exposure levels to obtain multiple low dynamic range images. If a moving object is present in the scene, its positions in the multiple low dynamic range images are inconsistent, so the region of the synthesized HDR image where the moving object appears contains content that does not exist in the original scene, such as double images and discontinuities, i.e. a ghost, which degrades the imaging quality of the HDR image.
Disclosure of Invention
An object of the embodiments of the present application is to provide a shooting method, a shooting device, an electronic device, and a storage medium that can solve the problem of low image quality in the prior art.
In a first aspect, an embodiment of the present application provides a shooting method applied to an electronic device, where the shooting method may include:
shooting a first image and N second images, where each pixel in the first image includes at least two sub-pixels, the at least two sub-pixels include at least two of a short-exposure sub-pixel, a medium-exposure sub-pixel, and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2;
and denoising the first image according to the N second images to obtain a target image, where the exposure parameters of the at least two sub-pixels are related to the exposure parameters of the N second images.
In a second aspect, an embodiment of the present application provides a shooting device applied to an electronic device, where the shooting device may include:
the image processing device comprises a shooting module, a processing module and a processing module, wherein the shooting module is used for shooting a first image and N second images, each pixel in the first image comprises at least two sub-pixels, the at least two sub-pixels comprise at least two types of sub-pixels in a short exposure sub-pixel, a middle exposure sub-pixel and a long exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2;
and a processing module, configured to denoise the first image according to the N second images to obtain a target image, where the exposure parameters of the at least two sub-pixels are related to the exposure parameters of the N second images.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the shooting method shown in the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, and when executed by a processor, the program or instructions implement the steps of the shooting method as shown in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a display interface, where the display interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the shooting method according to the first aspect.
In a sixth aspect, the present application provides a computer program product, which is stored in a storage medium and executed by at least one processor to implement the steps of the shooting method as shown in the first aspect.
In the embodiments of the present application, a first image and N second images are shot, where each pixel in the first image includes at least two sub-pixels, the at least two sub-pixels include at least two of a short-exposure sub-pixel, a medium-exposure sub-pixel, and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2; the first image is then denoised according to the N second images to obtain a target image.
Drawings
Fig. 1 is a flowchart of a shooting method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a pixel arrangement of a first image according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a pixel arrangement of another first image according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a shooting device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein fall within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. In addition, "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object can be one object or more than one. Moreover, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
In the related art, an HDR image requires shooting the scene multiple times at different exposure levels to obtain multiple low dynamic range images. If a moving object is present in the scene, its positions in these images are inconsistent, and the region of the synthesized HDR image where the moving object appears contains content that does not exist in the original scene, such as double images and discontinuities, i.e. a ghost. In current processing methods, because of occlusion, highlight clipping, and shadow clipping, some regions have no valid information in any input image; such regions can only be predicted by an image-inpainting algorithm, and prediction errors produce abnormal content in the output image, reducing imaging quality. In addition, image inpainting involves a very large amount of computation and takes a long time, so the efficiency of capturing HDR images is too low and the user experience suffers.
Based on this, in order to solve the above problems, an embodiment of the present application provides a processing method for HDR images. A first image and N second images are captured in response to a first input of a user, where each pixel in the first image includes a plurality of sub-pixels including a short-exposure sub-pixel, a medium-exposure sub-pixel, and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2. The first image is then denoised according to the first region where a moving object is located in each of the N second images to obtain a target image. In this way, the multiple second images with different exposure parameter values improve the image quality of the HDR first image and, combined with the image quality of the N second images, reduce the generation of light spots or bright spots of different shapes in it, yielding a high-quality, ghost-free image. This improves the imaging quality of the first image and addresses the low imaging quality caused by ghosts in current HDR images.
The shooting method provided by the embodiments of the present application is described in detail below with reference to fig. 1 and fig. 2, through specific embodiments and application scenarios.
Fig. 1 is a flowchart of a shooting method provided in an embodiment of the present application.
As shown in fig. 1, the shooting method provided in the embodiment of the present application may include the following steps:
step 110, shooting a first image and N second images, where each pixel in the first image includes at least two sub-pixels, the at least two sub-pixels include at least two of a short-exposure sub-pixel, a medium-exposure sub-pixel, and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2;
step 120, denoising the first image according to the N second images to obtain a target image, where the exposure parameters of the at least two sub-pixels are related to the exposure parameters of the N second images.
The image quality of the first image is improved by using the multiple N second images with different exposure parameter values, and combining the image quality of the N second images reduces the generation of light spots or bright spots of different shapes in the first image, thereby improving the imaging quality of the first image.
The above steps are described in detail below, specifically as follows.
First, in one or more possible embodiments, before step 110, the shooting method provided in the embodiment of the present application may further include:
receiving a first input of a user;
in response to the first input, exposure strategy information is acquired.
The exposure strategy information indicates that the scene dynamic range is positively correlated with the number of exposure steps and with the exposure interval between steps, and that the shooting environment brightness is negatively correlated with the number of frames captured at the same step.
To extend the captured scene dynamic range, the scene dynamic range can be configured to be positively correlated with the number of exposure steps and the exposure interval between steps; in addition, to improve the imaging quality of the first image, the shooting environment brightness can be configured to be negatively correlated with the number of frames captured at the same step.
It should be noted that the scene dynamic range in the embodiments of the present application refers to the luminance range measured from shadow to highlight; under low-illumination conditions the dynamic range (i.e. the difference between dark and bright areas in the scene) is very small.
The step 110 may specifically include:
and shooting a first image and N second images according to a preset exposure sequence, wherein the difference value between the number of the second images shot before the first image and the number of the second images shot before the first image is smaller than a preset value.
If N is an even number, the first image is shot exactly in the middle of the preset exposure sequence; if N is an odd number, the first image may be shot at either of the two middle positions of the preset exposure sequence.
For example, in the embodiments of the present application, when the first input of the user is received, the number of frames to be shot and the exposure value of each frame can be calculated from the exposure strategy information. Take capturing 7 exposure-bracketed second images and one 3D-HDR first image as an example, where the 7 frames comprise 1 frame at +2EV, 4 frames at 0EV, and 2 frames at -4EV. To ensure that the second images can effectively denoise the first image, the exposure parameters of the at least two sub-pixels in the first image of the embodiments of the present application are related to the exposure parameters of the N second images; that is, the first image is shot across the three steps +2EV, 0EV, and -4EV. In this way, considering that the first image will serve as the reference frame, the order of shooting the first and second images is: -4EV, 0EV x 2, 3D-HDR, 0EV x 2, -4EV, and +2EV.
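The reference-frame placement described above can be sketched as follows (a hypothetical helper, not code from the patent): the 3D-HDR frame is inserted so that the counts of second images before and after it differ by at most one.

```python
def build_capture_sequence(second_frames, reference="3D-HDR"):
    """Insert the reference HDR frame near the middle of the bracketing
    sequence so that the numbers of second images shot before and after
    it differ by at most one (the preset-value constraint in the text)."""
    mid = len(second_frames) // 2
    return second_frames[:mid] + [reference] + second_frames[mid:]

# The 7-frame bracketing example from the text: 2 x -4EV, 4 x 0EV, 1 x +2EV
seq = build_capture_sequence(["-4EV", "0EV", "0EV", "0EV", "0EV", "-4EV", "+2EV"])
```

With this input ordering, the result reproduces the sequence given in the text: -4EV, 0EV x 2, 3D-HDR, 0EV x 2, -4EV, +2EV.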
The two shooting modes differ: the 3D-HDR frame is shot using HDR technology, while the N second images are shot with exposure bracketing. Existing processing algorithms cannot be applied to images obtained in these two different ways, so the present application provides multi-source input that handles ghost removal and noise reduction together, as detailed below. Here, multi-source input refers to different types of images captured (or generated) by different imaging mechanisms: the 3D-HDR first image and the 7 exposure-bracketed second images.
Secondly, in one or more possible embodiments, since there is noise in the captured first image and N second images, and the illumination may change slightly between frames, the brightness of each image may be adjusted through the following steps to ensure the image quality of the resulting target image. Based on this, in one or more possible embodiments, the step 110 may specifically include:
in the case where the first image is a high dynamic range image, performing brightness compensation on each of i low dynamic range frames according to a target exposure value in a preset exposure value set to obtain i brightness-compensated low dynamic range frames, where i is a positive integer greater than 1;
and synthesizing the i brightness-compensated low dynamic range frames to obtain the first image.
Illustratively, since the maximum exposure value among the captured images is +2EV, the brightness of the first image and of each of the N second images is compensated according to exposure time and gain with the +2EV brightness as the reference, so that after compensation the brightness values of the same object, such as a moving object, are close across the different images. After brightness compensation, the 3D-HDR image frame synchronously completes the synthesis of the i low dynamic range frames of different exposure steps.
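The gain implied by this compensation can be sketched as follows, assuming a linear sensor response where one EV step corresponds to a factor of 2 in exposure (an illustration under stated assumptions; the function name and clipping value are not from the patent):

```python
def compensate_brightness(pixels, frame_ev, ref_ev=2.0, max_val=255.0):
    """Scale pixel values so that a frame shot at frame_ev matches the
    +2EV brightness reference, assuming a linear response: one EV step
    is a factor of 2 in exposure. Values are clipped at max_val."""
    gain = 2.0 ** (ref_ev - frame_ev)
    return [min(p * gain, max_val) for p in pixels]

# A 0EV frame needs a 4x gain to reach the +2EV reference brightness
out = compensate_brightness([10.0, 100.0], frame_ev=0.0)
```

Note that bright pixels saturate under the gain (the second value clips to 255), which is why the text later excludes saturated pixels from noise reduction.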
It is understood that the first image in the embodiments of the present application may be a 3D-HDR image, which may also be referred to as Quad Bayer Coding (QBC) HDR. As shown in fig. 2, each pixel (for example 20) in the 3D-HDR image has a plurality of sub-pixels, including a short-exposure sub-pixel (S), a medium-exposure sub-pixel (M), and a long-exposure sub-pixel (L), arranged at intervals: for example, a pixel is composed of four sub-pixels labeled "L", "M", and "S" that share one color filter, and in HDR mode the four sub-pixels are divided into three groups. Thus, referring to fig. 2, each color channel is expressed with 10-bit data; after HDR synthesis, 16-bit data is obtained, and 10-bit data is then output through tone mapping. Alternatively, as shown in fig. 3, each pixel (for example 30) in the 3D-HDR image of the embodiments of the present application has a plurality of sub-pixels including short-exposure sub-pixels (S) and long-exposure sub-pixels (L) arranged at intervals: for example, a pixel is composed of four sub-pixels labeled "S" and "L" that share one color filter, and in HDR mode the four sub-pixels are divided diagonally into two groups.
In another possible embodiment or embodiments, before step 120, the method further comprises:
acquiring corner points of each of the first image and the N second images;
and performing image alignment of the first image and each second image according to the corner points of each image, so that each second image is aligned.
Illustratively, global alignment performs global motion compensation on the images to eliminate overall motion during capture of the image sequence. Specifically, corner points are first detected in each image (for example, Harris corners). Feature-point matching is then performed with the 3D-HDR frame as reference: the correspondence between feature points in each other image and those in the 3D-HDR frame is detected, a homography matrix is fitted by least squares, and each image is then adjusted with its homography matrix to obtain a set of globally aligned images.
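A minimal sketch of the least-squares homography fit described above, using a direct linear transform (DLT) solved by SVD in place of the library routines a real implementation would use; the matched corners here are synthetic, and the function names are assumptions, not the patent's code:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT least-squares fit of a 3x3 homography mapping src points to
    dst points, analogous to fitting the matrix from matched corners."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # Null-space vector of the stacked constraint matrix gives H
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_homography(h, pt):
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Matched corners related by a pure translation of (2, 3)
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(x + 2, y + 3) for x, y in src]
H = fit_homography(src, dst)
```

With the fitted H, warping each non-reference image (e.g. with an inverse mapping over its pixel grid) yields the globally aligned set.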
Then, referring to step 120, because the noise in the 3D-HDR frame is slightly high, motion detection is easily affected by image noise, and both missed detections (a moving object present but not detected) and false detections (a stationary object mistaken for a moving object) can occur in the judgment. These can be avoided by the coarse-to-fine adaptive-threshold motion-region determination in the following steps.
In one or more possible embodiments, before the step 120, the shooting method may further include:
step 130, based on a first preset motion detection threshold, extracting from each second image a motion region candidate region and a non-motion region candidate region corresponding to the motion region, where the motion region is the region where a moving object in the first image is located;
step 140, performing motion correction on the motion region candidate region based on a second preset motion detection threshold and on the non-motion region candidate region based on a third preset motion detection threshold, and determining the corrected motion region candidate region and non-motion region candidate region as the motion region selection region, where the second preset motion detection threshold is smaller than the first preset motion detection threshold and the third preset motion detection threshold is larger than the first preset motion detection threshold;
and step 150, taking the luminance channel map of the first image as a guide map, performing image filtering on the motion region selection region to obtain a first region.
Specifically, the step 130 may specifically include:
subtracting the pixel value of a second pixel in the first image from the pixel value of a first pixel in the second image to obtain a pixel difference value, where the position of the first pixel in the second image corresponds to the position of the second pixel in the first image;
determining the region corresponding to the first pixel as a motion region candidate region when the pixel difference value is greater than the first preset motion detection threshold;
and determining the region corresponding to the first pixel as a non-motion region candidate region when the pixel difference value is smaller than the first preset motion detection threshold.
Illustratively, since the noise of the 3D-HDR frame is relatively high, each input image is first mean-filtered with the same radius to smooth it and reduce the influence of noise. Then, with the 3D-HDR image frame as the reference frame, a relatively large first preset motion detection threshold, denoted t, is set, and a rough judgment of the motion region of each second image relative to the reference frame is made. The judgment uses the difference of image pixel values: if the difference is larger than the set threshold, the pixel is provisionally judged as moving, and if smaller, as not moving, which yields the motion region candidate region. Erosion and dilation morphological operations are then applied to the motion region candidate region to further reduce the influence of noise on motion detection, while slightly expanding the candidate region to prevent subsequent operations from missing the edges of moving objects.
Then, different thresholds are set for the motion region candidate region and the non-motion region candidate region in the fine motion detection pass: a second preset motion detection threshold smaller than the first, for example 0.5t, is used inside the motion region candidate region to reduce missed detections where motion is clearly present, and a third preset motion detection threshold larger than the first, for example 2t, is used in the non-motion region candidate region to reduce false detections of small motions. A small-radius edge-preserving smoothing filter then filters the motion region selection region with the luminance channel map of the 3D-HDR frame as the guide map, and each non-reference image, i.e. each second image, yields a first region L relative to the reference image.
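The coarse-to-fine thresholding above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: a one-step 4-neighbour dilation stands in for the erosion/dilation morphology, and the guided filtering is omitted.

```python
import numpy as np

def dilate(mask):
    """One-step 4-neighbour dilation of a boolean mask, standing in for
    the morphological expansion of the coarse candidates in the text."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def detect_motion(ref, frame, t):
    """Coarse pass at threshold t, dilation of the candidates, then a
    fine pass with 0.5*t inside the candidate region (fewer missed
    detections) and 2*t outside it (fewer false detections)."""
    diff = np.abs(frame.astype(float) - ref.astype(float))
    candidates = dilate(diff > t)
    fine = np.where(candidates, 0.5 * t, 2.0 * t)
    return diff > fine

ref = np.full((2, 2), 10.0)
frame = np.array([[10.0, 18.0], [25.0, 18.0]])
mask = detect_motion(ref, frame, t=10.0)
```

In this toy example the pixel with difference 8 adjacent to a coarse detection is recovered by the lowered 0.5t threshold, while an isolated difference of 8 far from any candidate would be rejected by the raised 2t threshold.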
In this way, false detections and missed detections in motion detection are reduced, ghosts are removed thoroughly, and the picture brightness remains continuous with high definition. In addition, the region where a moving object is located can be detected more accurately, and the combination of single-frame and multi-frame noise reduction improves image definition.
Based on this, the step 120 may specifically include:
step 1201, according to each second image, denoising the first image respectively to obtain a first target image;
step 1202, performing image averaging processing on the N second images to obtain an average image;
step 1203, denoising the first image based on the average image to obtain a second target image;
and 1204, performing image fusion on the first target image, the average image and the second target image through a preset fusion algorithm to obtain a target image.
Illustratively, the target image may be obtained by fusing the first target image, the average image, and the second target image as in the following formula (1).
Iout=(M’*Is+(255-M’)*Im)/255 (1)
where M' is the average image, Is is the first target image, Im is the second target image, and Iout is the target image.
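Formula (1) translates directly into code; the function name is an assumption, and the small arrays stand in for full-size images:

```python
import numpy as np

def fuse(m_avg, i_s, i_m):
    """Formula (1): Iout = (M' * Is + (255 - M') * Im) / 255.
    Where the motion map M' is high, the single-frame-denoised Is
    dominates; where M' is low, the multi-frame-denoised Im dominates."""
    m = m_avg.astype(float)
    return (m * i_s + (255.0 - m) * i_m) / 255.0

m_avg = np.array([0.0, 255.0, 127.5])   # static, fully moving, half
i_s = np.array([100.0, 100.0, 100.0])   # single-frame denoised result
i_m = np.array([50.0, 50.0, 50.0])      # multi-frame denoised result
out = fuse(m_avg, i_s, i_m)
```

The blend is a per-pixel linear interpolation: M' = 0 gives Im exactly, M' = 255 gives Is exactly.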
Specifically, in an example, the step 1201 may specifically include:
step 12011, obtaining a first search length value corresponding to a first pixel value at a first position in the first area according to the correlation information between the pixel value and the search length value;
step 12012, acquiring a similar region in each second image, where the similar region is the region covered by a search range centered at the first position with the search length value as its extent;
step 12013, determining M matched images from the N second images according to the pixel difference between the adjacent pixels of the similar region, where the pixel difference between the adjacent pixels of the similar region in the matched images is smaller than a first preset threshold, and M is a positive integer;
step 12014, denoising the first image respectively according to the pixel value of each matching image in the M matching images to obtain M denoised first images;
step 12015, the M denoised first images are synthesized to obtain a first target image.
Specifically, in step 12012, the similar region may be the region covered by a circle centered at the first position with the search length value as its radius; alternatively, the similar region may be a quadrangle, such as a square or rectangle, centered at the first position with a preset length as the extent of the search range.
Here, the region covered with the first position as the center and the search length value as the radius is taken as an example. Illustratively, noise reduction is performed on multiple frames of differently exposed images with adaptive search radii for ghost elimination. A motion region selection map is obtained whose pixel values range from 0 to 255; each value represents the likelihood that the pixel is a motion pixel, and pixels with larger values have a relatively larger motion range. The embodiments of the present application therefore determine the search radius of the noise-reduction stage from the motion region selection image values, i.e. the radius is enlarged where a larger search range is needed. Specifically, two search radius parameters r1 and r2 are set, where r1 is the minimum search radius and r2 the maximum. If the pixel value at the corresponding position in the first region L is 0, r1 is used as the search radius; if it is 255, r2 is used; for values between 0 and 255, the search radius is obtained by linear interpolation as r = r1 + (r2 - r1) * y / 255, where y is the pixel value at the corresponding position in the first region L. It should be noted that, because differently exposed images are involved, there may be saturated pixels in the picture; saturated pixels do not participate in noise reduction, so step 1202 need not be executed for them.
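The radius interpolation above can be sketched as follows (the function name and the example radii r1 = 4, r2 = 16 are assumptions for illustration):

```python
def search_radius(motion_value, r1, r2):
    """Linear interpolation of the block-matching search radius from the
    motion-selection value y in 0..255: r1 for static pixels (y = 0),
    r2 for the strongest motion (y = 255), r = r1 + (r2 - r1) * y / 255
    in between."""
    return r1 + (r2 - r1) * motion_value / 255.0

radii = [search_radius(y, r1=4, r2=16) for y in (0, 255, 51)]
```

A motion value of 51 (20% of the range) yields a radius 20% of the way from r1 to r2.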
For each pixel in the reference frame, a most-similar-region search is performed in each non-reference frame with the r obtained above as the radius; if the minimum-difference region found is smaller than the set threshold, a matched pixel is considered found. If the minimum-difference region over the whole search range is still larger than the threshold, it is determined that the image contains no matched pixel; that image is abandoned for this pixel, and the corresponding value in the first region L is set to 255. The pixel in the reference frame is then denoised using all matched pixels found, by means including but not limited to mean filtering.
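A minimal per-pixel sketch of this matched-pixel search (names, the square window, and the tiny example frame are assumptions; a real implementation would compare small patches rather than single pixel values):

```python
import numpy as np

def find_match(ref_val, frame, center, radius, max_diff):
    """Search a (2*radius+1)-wide window of a non-reference frame for the
    value most similar to the reference pixel. Return it, or None if even
    the best candidate differs by more than max_diff, in which case the
    text discards this frame for that pixel and sets L there to 255."""
    r0, c0 = center
    rows = slice(max(r0 - radius, 0), r0 + radius + 1)
    cols = slice(max(c0 - radius, 0), c0 + radius + 1)
    window = frame[rows, cols].astype(float)
    best = window.flat[np.abs(window - ref_val).argmin()]
    return best if abs(best - ref_val) <= max_diff else None

frame = np.array([[90.0, 140.0], [101.0, 160.0]])
match = find_match(ref_val=100.0, frame=frame, center=(0, 0),
                   radius=1, max_diff=5.0)
```

Averaging the matches found across all non-reference frames then realizes the mean-filtering variant of the multi-frame denoising.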
Through this stage, the multi-frame-denoised second target image Im is obtained, whose image quality is clearly better than the original 3D-HDR frame. However, in motion regions, because matched pixels are hard to find or simply absent from the picture, the noise in these regions may still be relatively poor, so the image can be further denoised in the following way.
In another example, in step 1202, the first regions L of the N second images are averaged to obtain the average map M'. The larger a value in M', the less useful information the non-reference frames provide there, the higher the noise intensity of that region, and the stronger the noise reduction required.
In another example, step 1303 may specifically include:
when the pixel values of the average image are greater than or equal to a second preset threshold, using the average image as the noise reduction amplitude and denoising the first image accordingly to obtain the second target image.
Illustratively, AI-based single-frame noise reduction is far more effective than conventional approaches, so it is also adopted here to denoise the original 3D-HDR frame. Limited by the processing capability of the Neural Network Processing Unit (NPU) of current electronic devices, AI single-frame noise reduction needs to be performed in blocks. For example, if the input image size is 4000 × 3000, the image is divided into 1024 × 1024 blocks that are fed to the noise reduction network. Unlike conventional single-frame noise reduction, both the first image and M' are input, with M' guiding the noise reduction amplitude: the larger the value in M', the higher the intensity of AI single-frame noise reduction. It should be noted that when the pixel values within a block of M' are all 0, the AI single-frame noise reduction operation need not be performed on that block, which increases processing speed. The second target image Is is obtained after single-frame noise reduction has been applied to the whole image.
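The block-wise processing with the all-zero skip can be sketched as follows; `denoise_block` stands in for the actual noise reduction network, which is not specified in the embodiment:

```python
import numpy as np

# Hedged sketch of block-wise single-frame noise reduction: tile the image
# into fixed-size blocks, skip blocks whose guidance values are all zero,
# and otherwise apply a denoiser. `denoise_block` is a placeholder for the
# noise reduction network; the tiling scheme itself follows the description.
def denoise_in_blocks(image, guide, block=1024, denoise_block=None):
    out = image.astype(np.float32).copy()
    h, w = image.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            g = guide[y:y + block, x:x + block]
            if not np.any(g):          # all-zero guidance: skip the block
                continue
            tile = out[y:y + block, x:x + block]
            if denoise_block is not None:
                out[y:y + block, x:x + block] = denoise_block(tile, g)
    return out
```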
Therefore, according to the embodiment of the application, a first image and N second images are captured, where each pixel in the first image includes at least two sub-pixels, the at least two sub-pixels include at least two types among a short-exposure sub-pixel, a medium-exposure sub-pixel and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2; the first image is then denoised according to the N second images to obtain the target image.
According to the shooting method provided by the embodiment of the application, the execution body may be a shooting device. In the embodiment of the present application, a shooting method executed by a shooting device is taken as an example to describe the shooting device provided in the embodiment of the present application.
Based on the same inventive concept, the application also provides a shooting device. This is explained in detail with reference to fig. 4.
Fig. 4 is a schematic structural diagram of a shooting device according to an embodiment of the present application.
As shown in fig. 4, the shooting apparatus 40 is applied to the electronic device shown in fig. 1, and may specifically include:
a shooting module 401, configured to shoot a first image and N second images, where each pixel in the first image includes at least two sub-pixels, and the at least two sub-pixels include at least two types of sub-pixels from a short-exposure sub-pixel, a medium-exposure sub-pixel, and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2;
a processing module 402, configured to perform noise reduction on the first image according to the N second images to obtain a target image; wherein the exposure parameters of the at least two sub-pixels are related to the exposure parameters of the N second images.
In one or more possible embodiments, the shooting device 40 in the embodiment of the present application may further include a first noise reduction module and a fusion module; wherein,
the first noise reduction module is used for respectively reducing noise of the first image according to the pixel value of each second image to obtain a first target image;
the processing module 402 may be specifically configured to perform image averaging on the first areas of the N second images to obtain an average image;
the first denoising module can be further used for denoising the first image based on the average image to obtain a second target image;
and the fusion module is used for carrying out image fusion on the first target image, the average image and the second target image through a preset fusion algorithm to obtain a target image.
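The preset fusion algorithm is not specified in the embodiment; a minimal sketch, assuming the average image is normalized to [0, 1] and used as a per-pixel blending weight between the two target images, might look like:

```python
import numpy as np

# Hedged sketch of the fusion step: the average image M' (0-255) is treated
# as a per-pixel weight favouring the second target image in motion regions
# and the first target image elsewhere. This weighting scheme is an
# assumption; the patent only states that a preset fusion algorithm is used.
def fuse(first_target, second_target, average_img):
    w = average_img.astype(np.float32) / 255.0
    return (1.0 - w) * first_target.astype(np.float32) + w * second_target.astype(np.float32)
```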
In another or more possible embodiments, the camera 40 in the embodiment of the present application may further include an obtaining module, a matching module, a second noise reduction module, and a synthesizing module; wherein,
the acquisition module is used for acquiring a first search length value corresponding to a first pixel value at a first position in the first area according to the correlation information of the pixel value and the search length value;
the obtaining module is further configured to obtain a detailed region in each second image, where the similar region is a region covered by a search range, where the first position is used as a search center, and the search length value is used as a search center;
the matching module is used for determining M matched images from the N second images according to pixel difference values between adjacent pixels of the similar area, wherein the pixel difference values between the adjacent pixels of the similar area in the matched images are smaller than a first preset threshold value, and M is a positive integer;
the second denoising module is used for denoising the pixel value of each of the M matched images to the first image respectively to obtain M denoised first images;
and the synthesis module is used for synthesizing the M denoised first images to obtain a first target image.
In still another or more possible embodiments, the shooting device 40 in the embodiment of the present application may further include a third noise reduction module; wherein,
and the third noise reduction module is used for taking the average image as the noise reduction amplitude under the condition that the pixel value of the average image is greater than or equal to a second preset threshold value, and reducing the noise of the first image to obtain a second target image.
In still another or more possible embodiments, the photographing apparatus 40 in the embodiment of the present application may further include an extraction module, a correction module, a filtering module, and a first determination module; wherein,
the extraction module is used for extracting a motion area candidate area and a non-motion area candidate area corresponding to the motion area in each second image based on a first preset motion detection threshold, wherein the motion area is an area where a motion object in the first image is located;
a correction module for performing motion correction on the motion region candidate region based on a second preset motion detection threshold and performing motion correction on the non-motion region candidate region based on a third preset motion detection threshold;
a first determining module, configured to determine the corrected motion region candidate region and non-motion region candidate region as the motion region selection region, where the second preset motion detection threshold is smaller than the first preset motion detection threshold and the third preset motion detection threshold is larger than the first preset motion detection threshold;
and the filtering module is used for performing image filtering on the selected area of the motion area by taking the brightness channel map of the first image as a guide map to obtain the first area.
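How the second and third thresholds correct the initial candidates is not spelled out; a hysteresis-style reading, where neighbours of candidates need only exceed the looser threshold t2 while isolated pixels must exceed the stricter t3, can be sketched as follows (the hysteresis interpretation, threshold values, and the omission of the guided filtering step are all assumptions):

```python
import numpy as np

# Hedged sketch of three-threshold motion selection. Pixels exceeding t1 are
# initial motion candidates; their 4-neighbours are accepted if they exceed
# the looser t2 (t2 < t1); isolated pixels must exceed the stricter t3
# (t3 > t1). The guided image filtering of the selection map is omitted.
def motion_selection(second_img, first_img, t1=30, t2=15, t3=45):
    diff = np.abs(second_img.astype(np.int32) - first_img.astype(np.int32))
    strong = diff > t1                      # initial candidates (threshold t1)
    grown = np.zeros_like(strong)           # 4-neighbourhood of candidates
    grown[1:, :] |= strong[:-1, :]
    grown[:-1, :] |= strong[1:, :]
    grown[:, 1:] |= strong[:, :-1]
    grown[:, :-1] |= strong[:, 1:]
    weak = grown & (diff > t2)              # corrected with looser t2
    isolated = (~strong) & (~grown) & (diff > t3)  # corrected with stricter t3
    return (strong | weak | isolated).astype(np.uint8) * 255
```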
In one or more possible embodiments, the shooting device 40 in the embodiment of the present application may further include a calculating module and a second determining module; wherein,
the calculating module is used for making a difference between the pixel value of the first pixel in the second image and the pixel value of the second pixel in the first image to obtain a pixel difference value, and the position of the first pixel in the second image corresponds to the position of the second pixel in the first image;
a second determining module, configured to determine, when the pixel difference is greater than a first preset motion detection threshold, a region corresponding to the first pixel as a motion region candidate region;
the second determining module is further configured to determine, when the pixel difference is smaller than the first preset motion detection threshold, the region corresponding to the first pixel as a non-motion region candidate region.
In one or more possible embodiments, the capturing module 401 may be specifically configured to capture a first image and N second images according to a preset exposure sequence, where a difference between the number of second images captured before the first image and the number of second images captured after the first image is smaller than a preset value.
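Such a balanced exposure sequence can be sketched simply; the frame labels are illustrative, and the embodiment only requires the before/after counts to differ by less than a preset value:

```python
# Minimal sketch of a preset exposure sequence that keeps the number of
# second images captured before and after the first image nearly balanced.
def exposure_sequence(n_second: int):
    """Interleave N second images around the single first image."""
    before = n_second // 2
    seq = [f"second_{i}" for i in range(before)]
    seq.append("first")
    seq += [f"second_{i}" for i in range(before, n_second)]
    return seq
```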
The shooting device in the embodiment of the present application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, and the like, which is not specifically limited in the embodiments of the present application.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an IOS operating system, or other possible operating systems, which is not specifically limited in the embodiment of the present application.
The shooting device provided in the embodiment of the present application can implement each process implemented in the method embodiments of fig. 1 to 3, achieve the same technical effect, and is not described here again to avoid repetition.
Based on this, the shooting device provided in the embodiment of the present application captures a first image and N second images, where each pixel in the first image includes at least two sub-pixels, the at least two sub-pixels include at least two types among a short-exposure sub-pixel, a medium-exposure sub-pixel and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2; the first image is denoised according to the N second images to obtain a target image. In this way, the image quality of the first image is improved by using multiple frames of second images with different exposure parameters, and the generation of light spots or bright spots of different shapes in the first image is reduced, thereby improving the imaging quality of the first image.
Optionally, as shown in fig. 5, an electronic device 50 is further provided in this embodiment of the present application, and includes a processor 501 and a memory 502, where the memory 502 stores a program or an instruction that can be executed on the processor 501, and when the program or the instruction is executed by the processor 501, the steps of the foregoing shooting method embodiment are implemented, and the same technical effects can be achieved, and are not described again here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
The electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and the like.
Those skilled in the art will appreciate that the electronic device 600 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 610 through a power management system so as to manage charging, discharging, and power consumption. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, and details are not repeated here.
In this embodiment, the sensor 605 is configured to capture a first image and N second images, each pixel in the first image includes at least two sub-pixels, the at least two sub-pixels include at least two types of sub-pixels from a short-exposure sub-pixel, a medium-exposure sub-pixel, and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2. The processor 610 is configured to perform noise reduction on the first image according to the N second images to obtain a target image, where exposure parameters of at least two sub-pixels are related to exposure parameters of the N second images.
In one or more possible embodiments, the processor 610 may be specifically configured to perform noise reduction on the first image according to the pixel value of the first region in each second image, to obtain a first target image; carrying out image averaging processing on the first areas of the N second images to obtain an average image; based on the average image, denoising the first image to obtain a second target image; and carrying out image fusion on the first target image, the average image and the second target image through a preset fusion algorithm to obtain a target image.
In another or more possible embodiments, the processor 610 in this embodiment may be specifically configured to obtain, according to the association information between the pixel value and the search length value, a first search length value corresponding to a first pixel value at a first position in the first area; acquire a similar area in each second image, where the similar area is the area covered by a search range that takes the first position as the search center and the first search length value as the search range; determine M matched images from the N second images according to the pixel difference values between adjacent pixels of the similar area, where the pixel difference values between adjacent pixels of the similar area in a matched image are smaller than a first preset threshold and M is a positive integer; denoise the first image with the pixel values of each of the M matched images to obtain M denoised first images; and synthesize the M denoised first images to obtain the first target image.
In yet another or more possible embodiments, the processor 610 in this embodiment may be specifically configured to, when the pixel value of the average image is greater than or equal to a second preset threshold, take the average image as a noise reduction amplitude, and perform noise reduction on the first image to obtain a second target image.
In still another or more possible embodiments, the processor 610 in this embodiment may be further configured to, based on the first preset motion detection threshold, extract a motion region candidate area and a non-motion region candidate area corresponding to the motion region in each second image, where the motion region is a region where a motion object in the first image is located; performing motion correction on the motion region candidate region based on a second preset motion detection threshold, performing motion correction on the non-motion region candidate region based on a third preset motion detection threshold, and determining the region after the motion region candidate region and the non-motion region candidate region are corrected as a motion region selection region, wherein the second preset motion detection threshold is smaller than the first preset motion detection threshold, and the third preset motion detection threshold is larger than the first preset motion detection threshold; and taking the brightness channel image of the first image as a guide image, and carrying out image filtering on the selected area of the motion area to obtain a first area.
In one or more possible embodiments, the processor 610 in this embodiment may be specifically configured to perform a difference between a pixel value of a first pixel in the second image and a pixel value of a second pixel in the first image to obtain a pixel difference value, where a position of the first pixel in the second image corresponds to a position of the second pixel in the first image;
determining a region corresponding to a first pixel as a motion region candidate region under the condition that the pixel difference value is greater than a first preset motion detection threshold value;
and under the condition that the pixel difference value is smaller than a first preset motion detection threshold value, determining the area corresponding to the first pixel as a non-motion area candidate area.
In one or more possible embodiments, the sensor 605 may be specifically configured to capture the first image and the N second images according to a preset exposure sequence, where a difference between the number of second images captured before the first image and the number of second images captured after the first image is smaller than a preset value.
It is to be understood that the input unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still images or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The display unit 606 may include a display panel, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 607 includes at least one of a touch panel 6071 and other input devices 6072. The touch panel 6071 is also referred to as a touch screen and may include a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 609 may be used to store software programs and various data. The memory 609 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and the application programs or instructions required for at least one function (such as a sound playing function or an image playing function). Further, the memory 609 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synch-Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 609 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 610 may include one or more processing units. Optionally, the processor 610 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, and the like, and a modem processor, such as a baseband processor, which mainly handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 610.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the foregoing shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic disk or an optical disk, and the like.
In addition, an embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the foregoing shooting method embodiment, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing shooting method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A shooting method, comprising:
capturing a first image and N second images, wherein each pixel in the first image includes at least two sub-pixels, the at least two sub-pixels include at least two types among a short-exposure sub-pixel, a medium-exposure sub-pixel and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2; and
denoising the first image according to the N second images to obtain a target image;
wherein exposure parameters of the at least two sub-pixels are related to the exposure parameters of the N second images.
2. The method according to claim 1, wherein denoising the first image according to the N second images to obtain the target image comprises:
denoising the first image according to the pixel values of each second image to obtain a first target image;
performing image averaging on the N second images to obtain an average image;
denoising the first image based on the average image to obtain a second target image; and
performing image fusion on the first target image, the average image and the second target image through a preset fusion algorithm to obtain the target image.
3. The method according to claim 2, wherein denoising the first image according to the pixel values of the first region in each second image to obtain the first target image comprises:
acquiring, according to association information between pixel values and search length values, a first search length value corresponding to a first pixel value at a first position in the first region;
acquiring a similar region in each second image, the similar region being the area covered by a search range that takes the first position as the search center and the first search length value as the search range;
determining M matched images from the N second images according to pixel difference values between adjacent pixels of the similar region, wherein the pixel difference values between adjacent pixels of the similar region in a matched image are smaller than a first preset threshold, and M is a positive integer;
denoising the first image with the pixel values of each of the M matched images to obtain M denoised first images; and
synthesizing the M denoised first images to obtain the first target image.
4. The method according to claim 2, wherein denoising the first image based on the average image to obtain the second target image comprises:
when the pixel values of the average image are greater than or equal to a second preset threshold, using the average image as a noise reduction amplitude and denoising the first image to obtain the second target image.
5. The method according to claim 1, wherein before denoising the first image according to the N second images to obtain the target image, the method further comprises:
extracting, in each second image based on a first preset motion detection threshold, a motion region candidate area and a non-motion region candidate area corresponding to a motion region, the motion region being the region where a moving object in the first image is located;
performing motion correction on the motion region candidate area based on a second preset motion detection threshold, performing motion correction on the non-motion region candidate area based on a third preset motion detection threshold, and determining the corrected motion region candidate area and non-motion region candidate area as a motion region selection area, wherein the second preset motion detection threshold is smaller than the first preset motion detection threshold and the third preset motion detection threshold is greater than the first preset motion detection threshold; and
performing image filtering on the motion region selection area with the luminance channel map of the first image as a guide map to obtain the first region.
6. The method according to claim 5, wherein extracting, in each second image based on the first preset motion detection threshold, the motion region candidate area and the non-motion region candidate area corresponding to the motion region comprises:
subtracting the pixel value of a second pixel in the first image from the pixel value of a first pixel in the second image to obtain a pixel difference value, the position of the first pixel in the second image corresponding to the position of the second pixel in the first image;
determining the region corresponding to the first pixel as the motion region candidate area when the pixel difference value is greater than the first preset motion detection threshold; and
determining the region corresponding to the first pixel as the non-motion region candidate area when the pixel difference value is smaller than the first preset motion detection threshold.
7. The method according to claim 1, wherein capturing the first image and the N second images comprises:
capturing the first image and the N second images according to a preset exposure sequence, wherein a difference between the number of second images captured before the first image and the number of second images captured after the first image is smaller than a preset value.
8. A shooting apparatus, comprising:
a shooting module, configured to capture, in response to a first input, a first image and N second images, wherein each pixel in the first image includes at least two sub-pixels, the at least two sub-pixels include at least two types among a short-exposure sub-pixel, a medium-exposure sub-pixel and a long-exposure sub-pixel, the N second images have different exposure parameters, and N is a positive integer greater than 2; and
a processing module, configured to denoise the first image according to the N second images to obtain a target image, wherein exposure parameters of the at least two sub-pixels are related to the exposure parameters of the N second images.
9. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the shooting method according to any one of claims 1 to 7.
10. A readable storage medium, storing a program or instruction which, when executed by a processor, implements the steps of the shooting method according to any one of claims 1 to 7.
CN202211228431.4A 2022-10-09 2022-10-09 Shooting method, device, electronic device and readable storage medium Active CN115589451B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211228431.4A CN115589451B (en) 2022-10-09 2022-10-09 Shooting method, device, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211228431.4A CN115589451B (en) 2022-10-09 2022-10-09 Shooting method, device, electronic device and readable storage medium

Publications (2)

Publication Number Publication Date
CN115589451A true CN115589451A (en) 2023-01-10
CN115589451B CN115589451B (en) 2025-10-14

Family

ID=84779472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211228431.4A Active CN115589451B (en) 2022-10-09 2022-10-09 Shooting method, device, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN115589451B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119603566A (en) * 2024-12-06 2025-03-11 维沃移动通信有限公司 Method, device, equipment and readable storage medium for determining optical flow motion vector diagram

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6442203B1 (en) * 1999-11-05 2002-08-27 Demografx System and method for motion compensation and frame rate conversion
CN109040589A (en) * 2018-08-16 2018-12-18 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110381263A (en) * 2019-08-20 2019-10-25 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111418201A (en) * 2018-03-27 2020-07-14 华为技术有限公司 A shooting method and device
CN112204948A (en) * 2019-09-19 2021-01-08 深圳市大疆创新科技有限公司 HDR image generation method, optical filter array, image sensor, image processing chip, and imaging device
US20210035308A1 (en) * 2019-08-02 2021-02-04 Hanwha Techwin Co., Ltd. Apparatus and method for calculating motion vector
CN114503541A (en) * 2019-08-06 2022-05-13 三星电子株式会社 Apparatus and method for efficient regularized image alignment for multi-frame fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Chao; ZHANG Xiaohui; HU Qingping: "Analysis of loss function design for low-light image enhancement neural networks under ultra-low illumination", Journal of National University of Defense Technology, no. 04, 28 August 2018 (2018-08-28) *

Also Published As

Publication number Publication date
CN115589451B (en) 2025-10-14

Similar Documents

Publication Publication Date Title
KR102512889B1 (en) Image fusion processing module
US10853927B2 (en) Image fusion architecture
CN110572584B (en) Image processing method, image processing device, storage medium and electronic equipment
CN112215875B (en) Image processing method, device and electronic system
WO2023273868A1 (en) Image denoising method and apparatus, terminal, and storage medium
US10880455B2 (en) High dynamic range color conversion using selective interpolation
KR102819311B1 (en) Content based image processing
WO2021232965A1 (en) Video noise reduction method and apparatus, mobile terminal and storage medium
US20220044349A1 (en) Multi-scale warping circuit for image fusion architecture
US20220044371A1 (en) Image Fusion Architecture
US20240205363A1 (en) Sliding Window for Image Keypoint Detection and Descriptor Generation
CN115589451B (en) Shooting method, device, electronic device and readable storage medium
CN110581957B (en) Image processing method, image processing device, storage medium and electronic equipment
CN117392030A (en) Image generation method, device, storage medium and chip
US20250054274A1 (en) Filtering of keypoint descriptors based on orientation angle
CN110913143B (en) Image processing method, device, storage medium and electronic device
CN113676657B (en) Time-delay shooting method and device, electronic equipment and storage medium
US20230298302A1 (en) Single read of keypoint descriptors of image from system memory for efficient header matching
US11810266B2 (en) Pattern radius adjustment for keypoint descriptor generation
CN115439386A (en) Image fusion method and device, electronic equipment and storage medium
CN116934654A (en) Image ambiguity determining method and related equipment thereof
CN119255045B (en) Video frame insertion method, device, electronic device and storage medium
CN114125296B (en) Image processing method, device, electronic equipment and readable storage medium
CN119603564A (en) Image processing method, device, equipment and readable storage medium
CN119603565A (en) Image processing method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant