
CN115802171B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN115802171B
CN115802171B (application CN202211337982.4A)
Authority
CN
China
Prior art keywords
images
image
region
frames
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211337982.4A
Other languages
Chinese (zh)
Other versions
CN115802171A (en)
Inventor
于海童
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202211337982.4A
Publication of CN115802171A
Application granted
Publication of CN115802171B
Legal status: Active

Landscapes

  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract


The present application discloses an image processing method and apparatus, belonging to the field of image processing. The method comprises: acquiring N first images and N second images, wherein the N first images are images of a first region in N acquired frames of images and the N second images are images of a second region in the same N frames, the first region being the region of the image where a stationary object is located, the second region being the region where a moving object is located, and N a positive integer; performing a convolution operation on each of the N second images to obtain N third images; and generating a target image from the N first images and the N third images, the target image being an image whose simulated exposure duration is greater than or equal to an exposure duration threshold.

Description

Image processing method and device
Technical Field
The application belongs to the field of image processing, and particularly relates to an image processing method and an image processing device.
Background
Long exposure is a common photography technique, and an electronic device can simulate its effect by synthesizing a plurality of consecutive short-exposure frames.
However, because of the single-frame exposure time and the frame capture speed, the non-exposure accumulation time between every two consecutive short-exposure frames may be too long. Defects such as break points and ghosting may therefore appear in the image synthesized from the plurality of consecutive short-exposure frames, resulting in a poor simulation of long exposure.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method and an image processing device, which can address the poor quality of simulated long-exposure images.
In a first aspect, an embodiment of the application provides an image processing method. The method comprises: acquiring N first images and N second images, wherein the N first images are images of a first area in N acquired frames of images, the N second images are images of a second area in the same N frames, the first area is the area of the image where a stationary object is located, the second area is the area where a moving object is located, and N is a positive integer; performing a convolution operation on each of the N second images to obtain N third images; and generating a target image from the N first images and the N third images, wherein the target image is an image whose simulated exposure duration is greater than or equal to an exposure duration threshold.
In a second aspect, an embodiment of the application provides an image processing device comprising an acquisition module, a processing module, and a generation module. The acquisition module is configured to acquire N first images and N second images, where the N first images are images of a first area in N acquired frames of images, the N second images are images of a second area in the same N frames, the first area is the area of the image where a stationary object is located, the second area is the area where a moving object is located, and N is a positive integer. The processing module is configured to perform a convolution operation on each of the N second images acquired by the acquisition module, to obtain N third images. The generation module is configured to generate a target image from the N first images acquired by the acquisition module and the N third images obtained by the processing module, where the target image is an image whose simulated exposure duration is greater than or equal to an exposure duration threshold.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, N first images and N second images can be acquired, where the N first images are images of a first area in N acquired frames of images, the N second images are images of a second area in the same N frames, the first area is the area of the image where a stationary object is located, the second area is the area where a moving object is located, and N is a positive integer; a convolution operation is performed on each of the N second images to obtain N third images; and a target image is generated from the N first images and the N third images, the target image being an image whose simulated exposure duration is greater than or equal to an exposure duration threshold. With this scheme, the electronic device convolves the images of the area where the moving object is located in the N frames, and generates the target image from the N convolved images together with the images of the area where the stationary object is located. Since the convolution operation yields a blurred image of the moving object, the area of the target image where the moving object is located exhibits a motion-blur effect; defects such as break points and ghosting are avoided, and the quality of the simulated long exposure is improved.
Drawings
FIG. 1 is a schematic diagram of a set of short time exposure frame sequences;
FIG. 2 is a schematic diagram of the non-exposure accumulation period corresponding to a set of short-exposure frame sequences;
FIG. 3 is a schematic diagram of the effect of direct synthesis of a set of short exposure frames;
FIG. 4 is a flowchart of an image processing method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 6 is a second schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 7 is a third schematic diagram of an image processing method according to an embodiment of the present application;
fig. 8 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 10 is a schematic hardware diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The image processing method and the device provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Long exposure is a common photography technique. It can be achieved with a professional camera whose exposure time can be adjusted manually, a stable tripod, and a neutral density (light-reducing) filter that extends the usable exposure time. However, relying on so much equipment makes the shooting process cumbersome, and professional equipment is difficult for ordinary users to operate.
In recent years, with the increasing functions of electronic devices, a technique for simulating a long-time exposure effect by multi-frame short exposure has been developed in electronic devices such as digital cameras, video cameras, and smart phones. The electronic device may collect a plurality of consecutive short-time exposure frames and then synthesize the plurality of short-time exposure frames by an algorithm to simulate the effect of the long-time exposure.
By way of example, assume that the electronic device captures 9 consecutive short-exposure frames of a driving car. Fig. 1 shows the sequence of these 9 frames: the electronic device captures short-exposure frames 11 to 19 in order, and can then synthesize the 9 frames by an algorithm to simulate a long-exposure image of the driving car.
However, owing to the single-frame exposure time and the frame capture speed, the non-exposure accumulation time between every two of the above 9 short-exposure frames is too long, and the actual exposure accumulation is discontinuous. For example, as shown in fig. 2, if the single-frame exposure time is 1/100 s and the capture speed is 10 frames per second, the non-exposure time occupies 90% of the total shooting duration when a long exposure is simulated from multiple short-exposure frames.
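The 90% figure above follows directly from the stated single-frame exposure time and capture speed; a minimal sketch of that arithmetic:

```python
# Illustrative arithmetic for the example above: with a single-frame
# exposure of 1/100 s and a capture rate of 10 frames per second,
# each frame period lasts 0.1 s but only 0.01 s of it gathers light.
single_frame_exposure = 1 / 100   # seconds of light accumulation per frame
frame_rate = 10                   # frames captured per second

frame_period = 1 / frame_rate                            # 0.1 s per frame slot
exposed_fraction = single_frame_exposure / frame_period  # 0.1
non_exposed_fraction = 1 - exposed_fraction              # 0.9

print(f"non-exposure share of total shooting time: {non_exposed_fraction:.0%}")
```

This is why 90% of the shooting duration accumulates no exposure at all, producing the discontinuity shown in fig. 2.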
Defects such as break points and ghosting may therefore occur in the synthesized image; for example, as shown in fig. 3, synthesizing the above 9 short-exposure frames produces severe ghosting in the image of the driving car. In theory this could be mitigated by extending the single-frame exposure time and increasing the frame capture rate, but in practice extending the single-frame exposure time causes overexposure, the frame rate of a given sensor cannot be increased without limit, and processing too many frames burdens the device, creating power-consumption and performance problems for practical imaging devices. Simulating long exposure in this way therefore gives poor results.
To solve the above problems, the image processing method provided by the embodiment of the application may acquire 9 first images and 9 second images, where the 9 first images are images of a first area in the 9 short-exposure frames and the 9 second images are images of a second area in those frames; the first area is the area of the image occupied by everything other than the car (i.e., the stationary objects in the embodiment of the application), and the second area is the area occupied by the car (i.e., the moving object in the embodiment of the application). A convolution operation is performed on each of the 9 second images to obtain 9 processed images (i.e., the N third images in the embodiment of the application), and an image whose simulated exposure duration is greater than or equal to an exposure duration threshold (i.e., the target image in the embodiment of the application) is generated from the 9 first images and the 9 processed images.
With this scheme, the electronic device convolves the images of the areas where the car is located in the 9 short-exposure frames, and generates a target image whose simulated exposure duration is greater than or equal to the exposure duration threshold from the 9 processed images together with the images of the areas where the stationary objects are located. Since the convolution yields images in which the car is blurred, the area of the target image where the car is located exhibits a motion-blur effect; defects such as break points and ghosting are avoided, the simulation of long exposure is improved, and the resulting target image is close to the result of a real long-exposure shot.
An embodiment of the present application provides an image processing method, and fig. 4 shows a flowchart of this method. As shown in fig. 4, the image processing method provided by the embodiment of the present application may include the following steps 401 to 403. The method is described below by taking an electronic device as the executing subject.
Step 401, the electronic device acquires N first images and N second images.
In the embodiment of the application, the N first images are images of a first region in N acquired frames of images, the N second images are images of a second region in the same N frames, the first region is the region of the image where a stationary object is located, the second region is the region where a moving object is located, and N is a positive integer.
In the embodiment of the present application, each frame of image in the N frames of images includes a first image and a second image.
Alternatively, in the embodiment of the present application, the N frames of images may be acquired images of the same scene, for example, the N frames of images are continuously acquired images of a driving scene of an automobile.
Alternatively, in the embodiment of the present application, the N frame images may be N frame images that are continuously acquired.
Alternatively, in the embodiment of the present application, the moving object may be an object in a moving state, for example, the moving object may be a running car, a rolling basketball, or a flowing river, etc.
Alternatively, in the embodiment of the present application, the above step 401 may be specifically implemented by the following steps 401a to 401d.
Step 401a, the electronic device acquires images of first areas in each frame of image in the N frames of images, and obtains N first area images.
Step 401b, the electronic device acquires images of the second areas in each frame of image in the N frames of images, and obtains N second area images.
Optionally, in the embodiment of the present application, the electronic device may extract the image of the first area and the image of the second area from each frame by an image recognition method, thereby obtaining the N first area images and the N second area images.
Alternatively, in the embodiment of the present application, the image recognition method may be frame differencing or semantic recognition (of vehicles, pedestrians, water currents, and the like).
The method of acquiring the image of the first area and the image of the second area in one frame of image by the electronic device using the method of image recognition by semantics will be exemplarily described with reference to the accompanying drawings.
For example, assuming that the frame in question is an image of a moving car, the electronic device may segment the car from the frame using semantic image recognition. As shown in fig. 5, the electronic device can thus acquire an image of the area 51 (i.e., the second area) where the car (i.e., the moving object) is located, and an image of the area 52 (i.e., the first area) where the trees, road, etc. (i.e., the stationary objects) are located, thereby obtaining the first area image and the second area image of that frame.
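The split of one frame into its first-region and second-region images can be sketched as below. This is a minimal illustration assuming a binary moving-object mask is already available from an upstream recognizer (frame differencing or semantic segmentation, as mentioned above); the function name `split_regions` and the mask input are illustrative, not from the patent.

```python
import numpy as np

def split_regions(frame: np.ndarray, moving_mask: np.ndarray):
    """Split one frame into its first-region (stationary) and
    second-region (moving) images using a binary mask. Producing
    `moving_mask` is assumed done by an upstream recognizer and is
    out of scope for this sketch."""
    second_image = np.where(moving_mask, frame, 0)  # moving-object region
    first_image = np.where(moving_mask, 0, frame)   # stationary region
    return first_image, second_image

# Toy 4x4 frame where a "car" occupies the right half of the image.
frame = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[:, 2:] = True
static_img, moving_img = split_regions(frame, mask)
```

The two region images are complementary: summing them reproduces the original frame, which is what lets the method later recombine the stacks into one target image.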
Step 401c, the electronic device performs image alignment processing on the N first area images to obtain N first images.
In the embodiment of the application, the image alignment process is used for compensating the visual angle change caused by jitter or movement of the electronic equipment in the process of collecting the N frames of images.
Optionally, in the embodiment of the present application, the specific method of the image alignment processing may include displacement, rotation, distortion, subspace matching, and the like.
For the description of the specific method of the above image alignment process, reference may be made to the related description in the related art, and in order to avoid repetition, the description is omitted here.
Optionally, in the embodiment of the present application, if the N first area images are b_1, …, b_N, the i-th first image b_i' among the N first images may be represented by the following formula (1):
b_i' = C(b_i, x_i), i = 1, 2, …, N; (1)
wherein C is the alignment operation and x_i is the spatial offset of the i-th frame image determined by the image alignment process.
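As a concrete reading of formula (1), the alignment operator C applied with a per-frame offset x_i can be sketched as below, assuming for simplicity that x_i is a pure integer translation. This is an illustrative stand-in only: real alignment may also involve rotation, distortion, or subspace matching, as noted above.

```python
import numpy as np

def align(image: np.ndarray, offset: tuple[int, int]) -> np.ndarray:
    """A minimal stand-in for the alignment operator C(b_i, x_i) in
    formula (1), assuming the offset x_i is an integer translation
    (dy, dx). A real implementation would handle subpixel shifts,
    rotation, and distortion as well."""
    return np.roll(image, shift=offset, axis=(0, 1))

b_i = np.zeros((5, 5))
b_i[1, 1] = 1.0                  # a feature that drifted due to hand shake
x_i = (1, 2)                     # offset estimated for frame i
b_i_aligned = align(b_i, x_i)    # the feature moves to (2, 3)
```

Applying the same offsets x_i to the second area images in step 401d then keeps both region stacks in a common viewing geometry.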
Step 401d, the electronic device performs image alignment processing on the N second area images using the target spatial offsets, to obtain N second images.
In the embodiment of the application, the target spatial offsets are the per-frame spatial offsets obtained when aligning the N first area images.
It can be understood that, when the electronic device performs image alignment processing on the N first area images, each of the N frames of images corresponds to one spatial offset.
Optionally, in the embodiment of the present application, for each of the N second area images, the electronic device may perform image alignment processing using the spatial offset of the corresponding frame to obtain the corresponding second image; after processing every second area image in this way, the electronic device obtains the N second images.
Optionally, in the embodiment of the present application, if the N second area images are a_1, …, a_N, the i-th second image a_i' among the N second images may be represented by the following formula (2):
a_i' = C(a_i, x_i), i = 1, 2, …, N. (2)
In the embodiment of the application, the electronic device can perform image alignment processing on the N first area images to obtain the N first images, and then align the N second area images using the spatial offset of each frame obtained from that alignment to obtain the N second images. The acquired first and second images are therefore all aligned and share the same viewing angle.
Step 402, the electronic device performs convolution operation processing on each of the N second images, to obtain N third images.
In the embodiment of the application, the convolution operation processing is used to blur the processed images so that they approach the effect of a real long-exposure shot.
It can be understood that each of the N third images is an image obtained by performing convolution operation on one of the N second images.
Alternatively, in the embodiment of the present application, the above step 402 may be specifically implemented by the following steps 402a and 402b.
Step 402a, the electronic device obtains a convolution kernel corresponding to each of the N second images.
Optionally, in the embodiment of the present application, the N frames of images are the first N frames among N+1 acquired frames, and the N second images are correspondingly the first N of N+1 second images; step 402a may then be specifically implemented through the following steps 402a1 and 402a2.
Step 402a1, the electronic device calculates a motion path of the same feature point in the ith second image and the (i+1) th second image for the ith second image in the N second images.
In the embodiment of the application, i is a positive integer less than or equal to N.
Optionally, in the embodiment of the present application, the same feature point may be any possible same point, such as the same vertex, the same corner, or the same center point in the ith second image and the (i+1) th second image.
Alternatively, in the embodiment of the present application, the method for calculating the motion path may be a feature point matching and displacement detection method.
Alternatively, in the embodiment of the present application, the motion paths may be specifically represented as corresponding vectors or matrices.
For a specific description of the electronic device calculating the above motion path, reference may be made to related descriptions in the related art, and in order to avoid repetition, a description is omitted here.
Step 402a2, the electronic device determines the motion path as a convolution kernel corresponding to the ith second image.
Alternatively, in the embodiment of the present application, the convolution kernel may be referred to as a motion blur convolution kernel, and is used to implement the motion blur effect.
It will be appreciated that the electronic device may perform the steps 402a1 and 402a2 for each of the N second images, so as to obtain a convolution kernel corresponding to each second image.
That is, for the above N second images:
The electronic device calculates the motion path of the same feature point in the 1st and 2nd second images, and determines that path as the convolution kernel corresponding to the 1st second image;
the electronic device calculates the motion path of the same feature point in the 2nd and 3rd second images, and determines that path as the convolution kernel corresponding to the 2nd second image;
......
The electronic device calculates the motion path of the same feature point in the N-th and (N+1)-th second images, and determines that path as the convolution kernel corresponding to the N-th second image.
In this way, the electronic device may acquire the convolution kernel corresponding to each of the N second images.
In the embodiment of the application, the electronic device calculates the motion path of the same feature point in the i-th and (i+1)-th second images and takes that path as the convolution kernel corresponding to the i-th second image. The kernel thus reflects the displacement of the moving object between two adjacent short-exposure frames, and performing the convolution with it helps ensure the accuracy of the processed image.
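Turning a measured feature-point displacement into a motion-blur convolution kernel can be sketched as below. The patent only says the motion path becomes the kernel, so this sketch assumes a straight path between the two frames and rasterizes it into a normalized line-shaped kernel; the helper name and construction are illustrative, not from the patent.

```python
import numpy as np

def motion_blur_kernel(displacement: tuple[int, int]) -> np.ndarray:
    """Build a line-shaped motion-blur kernel from the displacement
    (dy, dx) of a matched feature point between the i-th and
    (i+1)-th second images, assuming a straight motion path."""
    dy, dx = displacement
    reach = max(abs(dy), abs(dx))
    size = 2 * reach + 1               # kernel just large enough for the path
    kernel = np.zeros((size, size))
    center = size // 2
    for t in np.linspace(0.0, 1.0, reach + 1):
        y = center + int(round(t * dy))
        x = center + int(round(t * dx))
        kernel[y, x] = 1.0             # rasterize the straight path
    return kernel / kernel.sum()       # normalize to preserve brightness

psf = motion_blur_kernel((0, 4))       # feature moved 4 px to the right
```

Normalizing the kernel keeps the total energy of the blurred region equal to that of the input, so the simulated exposure does not brighten or darken the moving object.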
Step 402b, the electronic device convolves each of the N second images with the convolution kernel corresponding to that image, thereby performing the convolution operation processing on each second image.
Optionally, in the embodiment of the present application, if the N second images are a_1', …, a_N' and the convolution kernels corresponding to them are psf_1, …, psf_N, the i-th third image a_i'' obtained after the convolution operation processing may be represented by the following formula (3):
a_i'' = a_i' * psf_i, i = 1, 2, …, N; (3)
where a_i' may specifically be the pixel value matrix corresponding to the i-th second image and * denotes the convolution operation.
Fig. 6 shows the effect of an image obtained by performing convolution operation processing on one second image, and as shown in fig. 6, the processed image includes a blurred image of a moving object, so that the effect of motion blur can be achieved.
In the embodiment of the application, the electronic device convolves each second image with its corresponding convolution kernel, so that the N resulting third images contain blurred images of the moving object, which makes it convenient to simulate the effect of long exposure.
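Formula (3) can be sketched as a plain 2-D convolution. A library routine such as scipy.ndimage.convolve would serve; it is written out here with numpy only to keep the sketch dependency-free, and the toy inputs are illustrative.

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Direct 2-D convolution with zero padding, implementing
    a_i'' = a_i' * psf_i from formula (3)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    flipped = kernel[::-1, ::-1]       # true convolution flips the kernel
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * flipped)
    return out

# A single bright pixel smeared by a horizontal 3-tap motion kernel,
# mimicking the motion-blur effect shown in fig. 6.
a_i = np.zeros((5, 5)); a_i[2, 2] = 1.0
psf_i = np.array([[1/3, 1/3, 1/3]])
a_i_blurred = convolve2d(a_i, psf_i)
```

The bright pixel is spread evenly over three neighboring pixels while the total brightness is preserved, which is exactly the smearing a real long exposure would produce for a moving point.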
Alternatively, in the embodiment of the present application, the above step 402 may be specifically implemented by the following step 402c.
Step 402c, the electronic device performs the convolution operation processing on each of the N second images when the confidence of the convolution kernel corresponding to every second image is greater than or equal to a confidence threshold.
In the embodiment of the application, the confidence is used for evaluating whether the corresponding convolution kernel is accurate or not.
Optionally, in the embodiment of the present application, the confidence threshold may be default to the system, or may be set by the user according to the use requirement.
Optionally, in the embodiment of the present application, whether the confidence of a convolution kernel reaches the confidence threshold may be determined from at least one of the kernel's signal-to-noise ratio, its spatial continuity, and its temporal continuity.
For example, a convolution kernel may be considered accurate if its signal-to-noise ratio is greater than or equal to a first threshold (i.e., the confidence threshold described above).
For another example, a convolution kernel may be considered accurate if its spatial continuity is greater than or equal to a second threshold (i.e., the confidence threshold described above).
Optionally, in the embodiment of the present application, the electronic device performs the convolution operation processing on the second images only when the confidence of the convolution kernel corresponding to every second image is greater than or equal to the confidence threshold; when the confidence of at least one of the N convolution kernels corresponding one-to-one to the N second images is less than the confidence threshold, the electronic device does not process the second images.
In the embodiment of the application, the electronic device performs the convolution operation processing on each second image only when the confidence of each convolution kernel is greater than or equal to the confidence threshold, which improves the flexibility of the convolution processing and further ensures the accuracy of the processed images.
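The confidence gate in step 402c can be sketched as below. The patent lists signal-to-noise ratio and spatial/temporal continuity as possible criteria but gives no formula, so a simple peak-to-mean ratio stands in for the SNR score here; the function names, the score, and the default threshold of 2.0 are all assumptions for illustration.

```python
import numpy as np

def kernel_confidence(kernel: np.ndarray) -> float:
    """Hypothetical confidence score for an estimated motion-blur
    kernel: a peak-to-mean ratio standing in for the SNR criterion
    mentioned in the text. Sharp, concentrated kernels score high;
    diffuse, noise-like kernels score low."""
    return float(kernel.max() / (kernel.mean() + 1e-12))

def blur_if_confident(image, kernel, convolve, threshold=2.0):
    """Apply the convolution only when the kernel passes the
    confidence threshold, as in step 402c; otherwise leave the
    image unprocessed. `threshold` is an assumed default."""
    if kernel_confidence(kernel) >= threshold:
        return convolve(image, kernel)
    return image

flat = np.full((3, 3), 1 / 9)                    # diffuse kernel: low confidence
peaked = np.zeros((3, 3)); peaked[1, 1] = 1.0    # sharp kernel: high confidence
blurred = blur_if_confident(np.ones((2, 2)), peaked, lambda img, k: img * 0.5)
passed_through = blur_if_confident(np.ones((2, 2)), flat, lambda img, k: img * 0.5)
```

Gating on confidence means an unreliable kernel estimate never degrades the output: the second images are simply left unblurred in that case.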
Step 403, the electronic device generates a target image according to the N first images and the N third images.
In the embodiment of the application, the target image is an image with the simulated exposure time length being greater than or equal to the exposure time length threshold value.
It can be understood that the target image is an image of a simulated real long exposure shooting effect.
Alternatively, in the embodiment of the present application, the above step 403 may be specifically implemented by the following steps 403a to 403c.
Step 403a, the electronic device synthesizes the N first images to obtain a first target image.
Optionally, in the embodiment of the present application, the electronic device may use an average value synthesis algorithm, a maximum value synthesis algorithm, a minimum value synthesis algorithm, or an intermediate value synthesis algorithm to synthesize the N first images.
Step 403b, the electronic device synthesizes the N third images to obtain a second target image.
Optionally, in the embodiment of the application, the electronic device may synthesize the N third images using a multi-frame noise-reduction synthesis algorithm, a super-resolution synthesis algorithm, or a high dynamic range imaging (HDR) extended-dynamic-range synthesis algorithm.
Step 403c, the electronic device fuses the first target image and the second target image to obtain a target image.
For the specific description of the above algorithms and the fusion of the first target image and the second target image, reference may be made to related descriptions in the related art, and in order to avoid repetition, a description is omitted herein.
In the embodiment of the application, the electronic device can fuse the first target image, obtained by synthesizing the N first images, with the second target image, obtained by synthesizing the N third images, to obtain the target image. That is, the electronic device performs multi-frame synthesis on images that have already undergone the motion-blur (convolution) operation, which improves the realism of the simulated long-exposure image.
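Steps 403a to 403c can be sketched end-to-end as below. Average-value synthesis is one of the algorithms the text lists for step 403a; the patent leaves the fusion method of step 403c open, so pasting each synthesized region back through the moving-object mask is an assumed simple choice, and the function name and toy data are illustrative.

```python
import numpy as np

def simulate_long_exposure(first_images, third_images, moving_mask):
    """Sketch of steps 403a-403c: average-value synthesis of the
    static stack and of the blurred moving stack, then a mask-based
    fusion of the two synthesized images."""
    first_target = np.mean(first_images, axis=0)    # step 403a
    second_target = np.mean(third_images, axis=0)   # step 403b
    return np.where(moving_mask, second_target, first_target)  # step 403c

# Toy stacks: a static background of 2s and a moving region whose
# blurred values average to 5 across the three frames.
mask = np.array([[False, True]])
firsts = [np.array([[2.0, 0.0]])] * 3
thirds = [np.array([[0.0, 4.0]]), np.array([[0.0, 5.0]]), np.array([[0.0, 6.0]])]
target = simulate_long_exposure(firsts, thirds, mask)   # → [[2.0, 5.0]]
```

Averaging the static region suppresses noise without blurring it, while averaging the already-blurred moving region accumulates the motion smear, which is what a genuine long exposure does.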
Fig. 7 is a schematic diagram of the effect of an image processed by the image processing method provided by this embodiment of the application. As shown in fig. 7, the processing noticeably improves the output image and avoids the break-point and ghosting defects of the conventional technique that simulates a long exposure with multiple short-exposure frames, so an image close to a real long-exposure shot can be obtained.
In the image processing method provided by this embodiment of the application, the electronic device can perform convolution operation processing on the images of the region where the moving object is located in the N frames of images, and generate, from the N processed images and the images of the region where the stationary object is located, a target image whose simulated exposure time is greater than or equal to the exposure time threshold. Because the convolution operation produces a blurred image of the moving object, the region of the moving object in the generated target image exhibits a blur effect, defects such as break points and ghosting are avoided, and the simulated long-exposure effect is improved.
The image processing method provided by this embodiment of the application may be executed by an image processing apparatus. In this embodiment, the image processing apparatus is described by taking an image processing apparatus performing the image processing method as an example.
Referring to fig. 8, an embodiment of the present application provides an image processing apparatus 80, which may include an acquisition module 81, a processing module 82, and a generation module 83. The acquisition module 81 may be configured to acquire N first images and N second images, where the N first images are images of a first region in N acquired frames of images, the N second images are images of a second region in the N frames of images, the first region is the region where a stationary object is located in an image, the second region is the region where a moving object is located in an image, and N is a positive integer. The processing module 82 may be configured to perform convolution operation processing on each of the N second images acquired by the acquisition module 81 to obtain N third images. The generation module 83 may be configured to generate a target image according to the N first images acquired by the acquisition module 81 and the N third images obtained by the processing module 82, where the target image is an image whose simulated exposure time is greater than or equal to the exposure time threshold.
In a possible implementation, the processing module 82 may be specifically configured to obtain a convolution kernel corresponding to each second image, and, for each second image, convolve the second image with its corresponding convolution kernel, so as to perform convolution operation processing on each second image separately.
In a possible implementation, the N frames of images are the first N frames of the N+1 acquired frames of images, and the N second images are the first N of the N+1 second images. The processing module 82 may be specifically configured to calculate, for the i-th second image of the N second images, the motion path of the same feature point in the i-th second image and the (i+1)-th second image, where i is a positive integer less than or equal to N, and to determine the motion path as the convolution kernel corresponding to the i-th second image.
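The idea of using the feature point's motion path as the convolution kernel can be sketched as follows: the displacement between frame i and frame i+1 is rasterized into a normalized line-shaped kernel, and convolving with that kernel smears the object along its path. This is a simplified sketch assuming straight-line motion and non-negative integer pixel offsets:

```python
import numpy as np

def kernel_from_displacement(dx, dy):
    """Rasterize the feature point's displacement (dx, dy) between frame i
    and frame i+1 into a normalized line-shaped motion-blur kernel."""
    kernel = np.zeros((dy + 1, dx + 1))
    steps = max(dx, dy) + 1
    for t in np.linspace(0.0, 1.0, steps):
        kernel[int(round(float(t) * dy)), int(round(float(t) * dx))] = 1.0
    return kernel / kernel.sum()

k = kernel_from_displacement(4, 0)  # purely horizontal motion of 4 pixels
# k is a 1x5 row of equal weights: averaging along the motion path

row = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # a bright point on one image row
blurred = np.convolve(row, k[0], mode="same")   # the point is smeared over 5 pixels
```

The confidence check recited in the claims would sit in front of this step: the kernel is applied only when the estimated path is judged reliable.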
In a possible implementation, the acquisition module 81 may be specifically configured to acquire the image of the first region in each of the N frames of images to obtain N first-region images, acquire the image of the second region in each frame to obtain N second-region images, perform image alignment processing on the N first-region images to obtain the N first images, and perform image alignment processing on the N second-region images by using a target spatial offset to obtain the N second images, where the target spatial offset is the spatial offset of each frame obtained when aligning the N first images.
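Reusing the offsets estimated for the first-region images to align the second-region images can be sketched like this (`np.roll` stands in for a real warp, which would pad or crop edges instead of wrapping; the function name is illustrative):

```python
import numpy as np

def align_with_offsets(images, offsets):
    """Shift each frame by the (rows, cols) offset that was estimated when
    aligning the first-region images (the target spatial offset)."""
    return [np.roll(img, shift, axis=(0, 1)) for img, shift in zip(images, offsets)]

# Frame 1 is the reference; frame 2 drifted up by one row, so its
# correcting offset is (1, 0): shift it back down by one row.
reference = np.eye(3)
drifted = np.roll(reference, -1, axis=0)
aligned = align_with_offsets([reference, drifted], [(0, 0), (1, 0)])
# aligned[1] now matches the reference exactly (roll is invertible)
```

Because the same offsets are applied to both regions, the static and moving-object composites stay spatially consistent when they are fused later.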
In a possible implementation, the generation module 83 may be specifically configured to synthesize the N first images to obtain a first target image, synthesize the N third images to obtain a second target image, and fuse the first target image and the second target image to obtain the target image.
In the image processing apparatus provided by this embodiment of the application, the apparatus can perform convolution operation processing on the images of the region where the moving object is located in the N frames of images, and generate, from the N processed images and the images of the region where the stationary object is located, a target image whose simulated exposure time is greater than or equal to the exposure time threshold. Because the convolution operation produces a blurred image of the moving object, the region of the moving object in the generated target image exhibits a blur effect, defects such as break points and ghosting are avoided, and the simulated long-exposure effect is improved.
The image processing apparatus in this embodiment of the application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
The image processing apparatus in this embodiment of the application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in this embodiment of the application.
The image processing apparatus provided in this embodiment of the application can implement each process implemented by the method embodiments of fig. 4 to fig. 7; to avoid repetition, details are not repeated here.
As shown in fig. 9, an embodiment of the present application further provides an electronic device 900, including a processor 901 and a memory 902. The memory 902 stores a program or an instruction that can be executed on the processor 901. When executed by the processor 901, the program or instruction implements each step of the above image processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The electronic device in this embodiment of the application includes both mobile electronic devices and non-mobile electronic devices.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to, a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further include a power source (e.g., a battery) for powering the various components. The power source may be logically connected to the processor 1010 through a power management system, which manages functions such as charging, discharging, and power consumption. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange components differently, which is not described in detail here.
The processor 1010 may be configured to: acquire N first images and N second images, where the N first images are images of a first region in N acquired frames of images, the N second images are images of a second region in the N frames of images, the first region is the region where a stationary object is located in an image, the second region is the region where a moving object is located in an image, and N is a positive integer; perform convolution operation processing on each of the N second images to obtain N third images; and generate a target image according to the N first images and the N third images, where the target image is an image whose simulated exposure time is greater than or equal to the exposure time threshold.
In a possible implementation, the processor 1010 may be specifically configured to obtain a convolution kernel corresponding to each second image, and, for each second image, convolve the second image with its corresponding convolution kernel, so as to perform convolution operation processing on each second image separately.
In a possible implementation, the N frames of images are the first N frames of the N+1 acquired frames of images, and the N second images are the first N of the N+1 second images. The processor 1010 may be specifically configured to calculate, for the i-th second image of the N second images, the motion path of the same feature point in the i-th second image and the (i+1)-th second image, where i is a positive integer less than or equal to N, and to determine the motion path as the convolution kernel corresponding to the i-th second image.
In a possible implementation, the processor 1010 may be specifically configured to acquire the image of the first region in each of the N frames of images to obtain N first-region images, acquire the image of the second region in each frame to obtain N second-region images, perform image alignment processing on the N first-region images to obtain the N first images, and perform image alignment processing on the N second-region images by using a target spatial offset to obtain the N second images, where the target spatial offset is the spatial offset of each frame obtained when aligning the N first images.
In a possible implementation manner, the processor 1010 may be specifically configured to synthesize the N first images to obtain a first target image, synthesize the N third images to obtain a second target image, and fuse the first target image and the second target image to obtain the target image.
In the electronic device provided by this embodiment of the application, the electronic device can perform convolution operation processing on the images of the region where the moving object is located in the N frames of images, and generate, from the N processed images and the images of the region where the stationary object is located, a target image whose simulated exposure time is greater than or equal to the exposure time threshold. Because the convolution operation produces a blurred image of the moving object, the region of the moving object in the generated target image exhibits a blur effect, defects such as break points and ghosting are avoided, and the simulated long-exposure effect is improved.
For the beneficial effects of the various implementations in this embodiment, reference may be made to the beneficial effects of the corresponding implementations in the foregoing method embodiments; to avoid repetition, details are not repeated here.
It should be appreciated that in embodiments of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, where the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store the operating system and the application programs or instructions (such as a sound playing function and an image playing function) required by at least one function. Further, the memory 1009 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units. Optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, the user interface, application programs, and the like, and the modem processor, such as a baseband processor, mainly handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1010.
An embodiment of the present application further provides a readable storage medium storing a program or an instruction. When executed by a processor, the program or instruction implements each process of the above image processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, including a processor and a communication interface coupled to the processor. The processor is configured to run programs or instructions to implement each process of the above image processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-a-chip.
An embodiment of the present application provides a computer program product stored in a storage medium. The program product is executed by at least one processor to implement each process of the above image processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Enlightened by the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (6)

1. An image processing method, characterized in that the method comprises:
acquiring N first images and N second images, wherein the N first images are images of a first region in N acquired frames of images, the N second images are images of a second region in the N frames of images, the first region is a region where a stationary object is located in an image, the second region is a region where a moving object is located in an image, and N is a positive integer;
performing convolution operation processing on each of the N second images to obtain N third images; and
generating a target image according to the N first images and the N third images, wherein the target image is an image whose simulated exposure time is greater than or equal to an exposure time threshold;
wherein the performing convolution operation processing on each of the N second images comprises:
obtaining a convolution kernel corresponding to each second image; and
in a case that the confidence of the convolution kernel corresponding to each second image is greater than or equal to a confidence threshold, for each second image, convolving the second image with its corresponding convolution kernel, so as to perform convolution operation processing on each second image separately, wherein the confidence is used to evaluate whether the corresponding convolution kernel is accurate;
wherein the N frames of images are the first N frames of N+1 acquired frames of images, and the N second images are the first N second images of N+1 second images; and
the obtaining a convolution kernel corresponding to each second image comprises:
for an i-th second image of the N second images, calculating a motion path of a same feature point in the i-th second image and an (i+1)-th second image, wherein i is a positive integer less than or equal to N; and
determining the motion path as the convolution kernel corresponding to the i-th second image.
2. The method according to claim 1, wherein the acquiring N first images and N second images comprises:
acquiring an image of the first region in each of the N frames of images to obtain N first-region images;
acquiring an image of the second region in each frame of images to obtain N second-region images;
performing image alignment processing on the N first-region images to obtain the N first images; and
performing image alignment processing on the N second-region images by using a target spatial offset to obtain the N second images, wherein the target spatial offset is the spatial offset of each frame of images used to obtain the N first images.
3. The method according to claim 1, wherein the generating a target image according to the N first images and the N third images comprises:
synthesizing the N first images to obtain a first target image;
synthesizing the N third images to obtain a second target image; and
fusing the first target image and the second target image to obtain the target image.
4. An image processing apparatus, characterized in that the apparatus comprises an acquisition module, a processing module, and a generation module;
the acquisition module is configured to acquire N first images and N second images, wherein the N first images are images of a first region in N acquired frames of images, the N second images are images of a second region in the N frames of images, the first region is a region where a stationary object is located in an image, the second region is a region where a moving object is located in an image, and N is a positive integer;
the processing module is configured to perform convolution operation processing on each of the N second images acquired by the acquisition module to obtain N third images;
the generation module is configured to generate a target image according to the N first images acquired by the acquisition module and the N third images obtained by the processing module, wherein the target image is an image whose simulated exposure time is greater than or equal to an exposure time threshold;
the processing module is specifically configured to: obtain a convolution kernel corresponding to each second image; in a case that the confidence of the convolution kernel corresponding to each second image is greater than or equal to a confidence threshold, for each second image, convolve the second image with its corresponding convolution kernel, so as to perform convolution operation processing on each second image separately, wherein the confidence is used to evaluate whether the corresponding convolution kernel is accurate; wherein the N frames of images are the first N frames of N+1 acquired frames of images, and the N second images are the first N second images of N+1 second images; and, for an i-th second image of the N second images, calculate a motion path of a same feature point in the i-th second image and an (i+1)-th second image, wherein i is a positive integer less than or equal to N, and determine the motion path as the convolution kernel corresponding to the i-th second image.
5. The apparatus according to claim 4, wherein the acquisition module is specifically configured to: acquire an image of the first region in each of the N frames of images to obtain N first-region images; acquire an image of the second region in each frame of images to obtain N second-region images; perform image alignment processing on the N first-region images to obtain the N first images; and perform image alignment processing on the N second-region images by using a target spatial offset to obtain the N second images, wherein the target spatial offset is the spatial offset of each frame of images used to obtain the N first images.
6. The apparatus according to claim 4, wherein the generation module is specifically configured to: synthesize the N first images to obtain a first target image; synthesize the N third images to obtain a second target image; and fuse the first target image and the second target image to obtain the target image.
CN202211337982.4A 2022-10-28 2022-10-28 Image processing method and device Active CN115802171B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211337982.4A CN115802171B (en) 2022-10-28 2022-10-28 Image processing method and device


Publications (2)

Publication Number Publication Date
CN115802171A CN115802171A (en) 2023-03-14
CN115802171B true CN115802171B (en) 2025-10-24

Family

ID=85434349


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284080A (en) * 2021-06-17 2021-08-20 Oppo广东移动通信有限公司 Image processing method and device, electronic device and storage medium
CN113630545A (en) * 2020-05-07 2021-11-09 华为技术有限公司 A shooting method and equipment

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP2015033006A (en) * 2013-08-02 2015-02-16 オリンパス株式会社 Image processing apparatus, image processing method, image processing program and microscope system
CN112749613B (en) * 2020-08-27 2024-03-26 腾讯科技(深圳)有限公司 Video data processing method, device, computer equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant