
CN114119701A - Image processing method and device - Google Patents


Info

Publication number
CN114119701A
CN114119701A
Authority
CN
China
Prior art keywords
depth
depth map
image
map
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111438638.XA
Other languages
Chinese (zh)
Inventor
李佐广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202111438638.XA
Publication of CN114119701A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 - Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and apparatus, belonging to the technical field of image processing. The image processing method comprises the following steps: acquiring a first image and a second image, wherein the first image and the second image are two images of the same shooting scene captured by a first camera and a second camera of an electronic device; determining a sparse depth map of the first camera and the second camera according to the first image and the second image; acquiring a dense depth map of an image of the shooting scene; adjusting the depths of pixel points in the sparse depth map according to the dense depth map to obtain a target depth map; and performing blurring processing on a target image according to the target depth map, wherein the target image is one of the first image and the second image.

Description

Image processing method and device
Technical Field
The present application belongs to the field of image processing technology, and in particular, relates to an image processing method and apparatus.
Background
With the rapid development of communication technology, intelligent electronic devices such as mobile phones have become indispensable tools in many aspects of daily life, and taking pictures has become one of their most common functions. Moreover, a photo with a background blurring effect can now be shot with a mobile phone.
At present, implementing blurring based on two cameras is a common technique. However, in regions of sparse texture, repetitive texture, dim light, and occlusion, disparity processing between the two cameras is weak, so the computed depth is inaccurate, which causes missed blurring or erroneous blurring.
Therefore, prior-art blurring methods based on two cameras suffer from missed blurring or erroneous blurring.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method and an image processing apparatus, which can solve the problem of missed blurring or erroneous blurring in prior-art blurring methods based on two cameras.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a first image and a second image, wherein the first image and the second image are two images of the same shooting scene captured by a first camera and a second camera of an electronic device;
determining sparse depth maps of the first camera and the second camera according to the first image and the second image;
obtaining a dense depth map of the image of the shooting scene;
according to the dense depth map, adjusting the depth of pixel points in the sparse depth map to obtain a target depth map;
and performing blurring processing on a target image according to the target depth map, wherein the target image is one of the first image and the second image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an image acquisition module, configured to acquire a first image and a second image, wherein the first image and the second image are two images of the same shooting scene captured by a first camera and a second camera of an electronic device;
the sparse depth map acquisition module is used for determining sparse depth maps of the first camera and the second camera according to the first image and the second image;
the dense depth map acquisition module is used for acquiring a dense depth map of the image of the shooting scene;
the depth adjusting module is used for adjusting the depth of pixel points in the sparse depth map according to the dense depth map to obtain a target depth map;
and the blurring processing module is used for blurring a target image according to the target depth map, wherein the target image is one of the first image and the second image.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, a first image and a second image of the same shooting scene captured by a first camera and a second camera of an electronic device can be obtained, sparse depth maps of the first camera and the second camera are determined according to the first image and the second image, a dense depth map of the image of the shooting scene can also be obtained, so that the depth of pixel points in the sparse depth map is adjusted according to the dense depth map, a target depth map is obtained, and then one of the first image and the second image is subjected to blurring processing according to the target depth map.
Therefore, in the embodiment of the application, the sparse depth map of the two cameras (namely, the first camera and the second camera) can be adjusted according to the dense depth map of the image of the shooting scene, so that a more accurate depth calculation is obtained. This alleviates the weakness of dual-camera disparity processing in sparse-texture, repetitive-texture, dim-light, and occluded areas, solves the problem of missed or erroneous blurring, and improves the blurring effect.
Drawings
Fig. 1 is a flowchart of an image processing method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of imaging planes of a first camera and a second camera before stereo rectification in the embodiment of the present application;
fig. 3 is a schematic view of imaging planes of a first camera and a second camera after performing stereo rectification in the embodiment of the present application;
FIG. 4 is a flowchart of an embodiment of an image processing method according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 7 is a block diagram of another electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be appreciated that data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
An embodiment of the present application provides an image processing method, as shown in fig. 1, which may include the following steps 101 to 105:
step 101: a first image and a second image are acquired.
The first image and the second image are two images of the same shooting scene captured by a first camera and a second camera of the electronic device; that is, each of the two cameras captures an image of the same shooting scene.
In addition, in order to reduce the difference in image content between the first image and the second image, the two images may be captured by the first camera and the second camera at the same time.
In addition, an operation control may be set in the photographing setting interface, and when the operation control is in an open state, the image processing method provided in the embodiment of the present application is used to perform blurring processing, that is, steps 101 to 105 are executed; when the operation control is in a closed state, a blurring processing method based on two cameras in the prior art is adopted.
It should be further noted that the image processing method of the embodiment of the present application can be applied not only to the photographing process, but also to the video photographing process.
Step 102: determining sparse depth maps of the first camera and the second camera according to the first image and the second image.
Due to sparse texture, repetitive texture, dim light, occlusion, and the like, disparity matching between the first image and the second image is difficult; that is, in regions of sparse texture, repetitive texture, dim light, and occlusion, the quality of disparity matching between the first image and the second image is poor, so the confidence of the disparity in these regions is relatively low. Disparity values with low confidence are filtered out to obtain a disparity map, and a depth map can then be derived from the disparity map. A depth map obtained after filtering out low-confidence disparity values is generally referred to as a sparse depth map.
Step 103: obtaining a dense depth map of an image of the capture scene.
A dense depth map of the image of the above shooting scene may be acquired by a time-of-flight (ToF) sensor, which may be disposed in the same plane as the first camera and the second camera, so that the ToF sensor and the two cameras capture images of the same shooting scene.
In addition, in order to reduce the difference in image content between the first image, the second image, and the image of the shooting scene captured by the ToF sensor, the first camera, the second camera, and the ToF sensor may be controlled to capture images of the shooting scene simultaneously.
Further, optionally, a spot ToF (dot-matrix time-of-flight) sensor may be used in the embodiment of the present application. A spot ToF sensor has a longer measuring distance and stronger resistance to ambient light.
Step 104: and according to the dense depth map, adjusting the depth of a pixel point in the sparse depth map to obtain a target depth map.
As described above, the sparse depth map is obtained by filtering low-confidence disparities out of the disparity map, so depth may be missing in some regions of the sparse depth map. In the embodiment of the present application, a dense depth map of an image of the same shooting scene captured by the ToF sensor may also be obtained, so that the depths of pixel points in the sparse depth map can be adjusted according to the dense depth map to compensate for the regions lacking depth. A more accurate depth map is thus obtained, which in turn improves the blurring effect.
Step 105: and performing blurring processing on the target image according to the target depth map.
Wherein the target image is one of the first image and the second image.
In addition, after the target image is blurred, the blurred image can be displayed on a photographing preview interface so that the user can view the blurring effect. After the blurring process is performed, if a photographing instruction is received, a photograph can be generated from the blurred image.
As can be seen from the foregoing steps 101 to 105, in the embodiment of the present application, a first image and a second image of the same shooting scene captured by a first camera and a second camera of an electronic device can be obtained, sparse depth maps of the first camera and the second camera can be determined according to the first image and the second image, and a dense depth map of an image of the shooting scene can also be obtained, so that depths of pixel points in the sparse depth map are adjusted according to the dense depth map, a target depth map is obtained, and then one of the first image and the second image is blurred according to the target depth map.
Therefore, in the embodiment of the application, the sparse depth map of the two cameras (namely, the first camera and the second camera) can be adjusted according to the dense depth map of the image of the shooting scene, so that a more accurate depth calculation is obtained. This alleviates the weakness of dual-camera disparity processing in sparse-texture, repetitive-texture, dim-light, and occluded areas, solves the problem of missed or erroneous blurring, and improves the blurring effect.
Optionally, before the acquiring the first image and the second image, the method further includes:
and adjusting the imaging planes of the first camera and the second camera to be in the same plane.
In the principle of stereo imaging, the depth of an object point is estimated from two images: the same object point must be accurately matched in both images so that its depth can be calculated from the positional relationship between its projections in the two images. In order to reduce the amount of matching computation, the embodiment of the present application may further adjust the imaging planes of the first camera and the second camera to lie in the same plane, that is, perform stereo rectification on the two cameras. For example, the imaging planes of the first camera and the second camera before stereo rectification are shown in fig. 2, and the imaging planes after stereo rectification are shown in fig. 3, where Pl denotes the imaging plane of the first camera and Pr denotes the imaging plane of the second camera in fig. 2 and fig. 3.
Optionally, the obtaining a dense depth map of the image of the shooting scene includes:
and adopting a Fast Bilateral Solver algorithm (Fast Bilateral Solver) to perform densification on the image of the shooting scene captured by the flight time sensor to obtain the dense depth map.
The fast bilateral solver accelerates processing on the basis of the bilateral solver: pixel points are projected into a bilateral space (bilateral grid) to reduce the amount of computation, filtering is performed in the bilateral space, and the data are then projected back to pixel space.
Specifically, the specific process of performing densification on the image of the shooting scene captured by the time-of-flight sensor by using the fast bilateral solver algorithm may be as follows:
First, the pixel values of the image of the photographic scene captured by the ToF sensor are mapped onto the vertices of a small-scale grid (Grid) or lattice, thereby compressing the data. For example, a YUV image with width W and height H is projected onto the Grid to form five-dimensional (YUVXY) points, where YUV represents color and XY represents position coordinates.
Next, positional compression is performed: for example, for an image with width W and height H, a Block (for example, an 8 × 8 block) can be represented by one position. The YUV colors are also compressed within each Block; the YUV values can be divided by 8, which increases the number of repeated points in the Block. After the repeated points are merged, the amount of data (i.e., the number of data points to compute) is greatly reduced.
Third: the grid vertices are filtered (blurred).
Finally: each filtered vertex is mapped back to pixel space to obtain the dense depth map.
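The splat-average-slice pipeline above can be sketched in simplified form. The following is an illustrative numpy sketch only, not the patent's implementation and not a full Fast Bilateral Solver: it splats sparse depth samples into a coarse grid keyed by position blocks and a guide-intensity bin, averages each cell, and slices the averages back to every pixel. All names (`densify_depth_grid`, `cell`, `levels`) are hypothetical.

```python
import numpy as np

def densify_depth_grid(depth, guide, cell=8, levels=16):
    """Simplified bilateral-grid densification sketch: splat sparse
    depth samples (0 = no sample) into a coarse grid keyed by
    (row block, column block, intensity bin of the guide image),
    average each cell, then slice the averages back to pixels."""
    h, w = depth.shape
    step = 256 // levels
    gy = np.arange(h)[:, None] // cell          # coarse row index
    gx = np.arange(w)[None, :] // cell          # coarse column index
    gi = guide.astype(int) // step              # coarse intensity index
    # flatten the 3-D grid coordinates to one integer key per pixel
    key = ((gy * ((w // cell) + 1) + gx) * levels + gi).ravel()
    d = depth.ravel()
    valid = d > 0                               # splat only real samples
    sums = np.bincount(key[valid], weights=d[valid], minlength=key.max() + 1)
    cnts = np.bincount(key[valid], minlength=key.max() + 1)
    avg = np.divide(sums, cnts, out=np.zeros_like(sums), where=cnts > 0)
    return avg[key].reshape(h, w)               # slice back to pixel space
```

A real solver would also smooth between neighboring grid cells; here each cell simply spreads its average to every pixel that maps into it.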
Optionally, the determining the sparse depth maps of the first camera and the second camera according to the first image and the second image includes:
acquiring a disparity map of the first camera and the second camera and a confidence map of the disparity map according to the first image and the second image;
and acquiring the sparse depth map according to the disparity map and the confidence map of the disparity map.
The disparity maps of the first camera and the second camera can be obtained by adopting a block matching method. Namely, the following steps H1 to H4 are respectively performed for each pixel point in the first image:
step H1: taking a target pixel point in the first image as a center, and acquiring a first window (for example, a 9 × 9 window expanded from the target pixel point in four directions, namely, up, down, left and right directions) where the target pixel point is located;
step H2: determining a second window corresponding to the first window position in the second image (namely, a 9 × 9 window which is expanded by taking a pixel point corresponding to the target pixel point position in the second image as a center);
step H3: move the second window left or right by one pixel at a time; after each move, for v taking each integer from 1 to W, calculate the absolute value of the difference between the gray value of the v-th pixel point in the first window and the gray value of the v-th pixel point in the second window as the second parameter of the v-th pixel point in the first window, and then calculate the sum of the second parameters of all W pixel points as the third parameter of the moved second window, where W represents the total number of pixel points in the first window;
step H4: find the center pixel point of the second window with the smallest third parameter; the difference between the abscissa of this pixel point and the abscissa of the target pixel point is the disparity of the target pixel point.
The target pixel point is one of the pixel points in the first image.
In addition, after the parallax of each pixel point is obtained, the sparse depth map can be obtained according to the confidence map and the parallax map.
In addition, any method in the prior art may be adopted to obtain the confidence map of the disparity map, which is not described herein again.
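The block matching of steps H1 to H4 can be sketched as follows. This is a minimal numpy illustration rather than the patent's exact procedure: the 9 × 9 window becomes a parameter, and the search is restricted to a fixed one-sided disparity range. The names `sad_disparity`, `win`, and `max_disp` are hypothetical.

```python
import numpy as np

def sad_disparity(left, right, win=4, max_disp=16):
    """SAD block matching sketch (steps H1-H4): for each pixel of the
    left image, slide a window along the same row of the right image
    and pick the horizontal shift with the smallest sum of absolute
    gray-level differences (the "third parameter" of step H3)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1].astype(int)
            best, best_d = None, 0
            for d in range(max_disp):           # candidate shifts
                cand = right[y - win:y + win + 1,
                             x - d - win:x - d + win + 1].astype(int)
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

In practice the inner loops would be vectorized or replaced by an optimized matcher; the sketch only mirrors the per-pixel logic of the steps above.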
Optionally, the obtaining the sparse depth map according to the disparity map and the confidence map of the disparity map includes:
according to the confidence map, acquiring a first pixel point with the confidence degree smaller than a first preset threshold value in the disparity map, and setting the depth of the first pixel point as a first preset value;
determining the depth of a second pixel point according to a binocular triangulation principle and the parallax of the second pixel point, wherein the second pixel point is a pixel point except the first pixel point in the parallax map;
and obtaining the sparse depth map according to the depth of the first pixel point and the depth of the second pixel point.
Therefore, in the embodiment of the application, first pixel points with low confidence (that is, confidence smaller than the first preset threshold) may be identified according to the confidence map, and their depth set to a first preset value (for example, -1); that is, the low-confidence pixel points are marked with the first preset value. The depth of each second pixel point (that is, a pixel point with high confidence, whose confidence is greater than or equal to the first preset threshold) is calculated from its disparity according to the binocular triangulation principle, thereby obtaining the sparse depth map.
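A minimal sketch of this step, assuming the standard binocular triangulation relation depth = focal length × baseline / disparity (the patent does not spell out the formula); all names are hypothetical.

```python
import numpy as np

def disparity_to_sparse_depth(disp, conf, focal_px, baseline_m,
                              conf_thresh=0.5, preset=-1.0):
    """Triangulation sketch: depth = focal * baseline / disparity for
    high-confidence pixels; low-confidence (or zero-disparity) pixels
    receive the first preset value (-1) to mark missing depth."""
    depth = np.full(disp.shape, preset, dtype=float)
    ok = (conf >= conf_thresh) & (disp > 0)     # second pixel points
    depth[ok] = focal_px * baseline_m / disp[ok]
    return depth
```

For example, with a 1000 px focal length and a 2 cm baseline, a disparity of 10 px corresponds to a depth of 2 m.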
Optionally, the adjusting the depth of the pixel point in the sparse depth map according to the dense depth map to obtain a target depth map includes:
acquiring a first region and a second region in the sparse depth map, wherein the depth of a pixel point in the first region is a first preset value, the absolute value of the difference between the depth of the pixel point in the second region and the depth of a third pixel point is greater than or equal to a second preset threshold value, and the third pixel point is a pixel point corresponding to the pixel point position in the second region in the dense depth map;
setting the depth of a pixel point of the first region in the sparse depth map as the depth of a fourth pixel point, wherein the fourth pixel point is a pixel point corresponding to the position of the pixel point of the first region in the dense depth map;
and when k takes each integer from 1 to M, calculating the depth of the k-th fifth pixel point as a linear weighted sum of the depth of the k-th pixel point in the second region and the depth of the k-th third pixel point, wherein the fifth pixel points are the pixel points in the target depth map corresponding to the positions of the pixel points in the second region, and M represents the total number of pixel points in the second region.
In the sparse depth map, a pixel point whose depth is the first preset value indicates missing depth: during acquisition of the sparse depth map, the confidence of that pixel's disparity was low, so no depth was computed from its disparity; instead, its depth was directly set to the first preset value. That is, in the embodiment of the present application, pixel points lacking depth are marked with the first preset value.
Therefore, in the sparse depth map, the depth of the pixel points in the first region is a first preset value, and the pixel points in the first region lack depth, so that the depth of the pixel points in the region corresponding to the position of the first region in the dense depth map can be assigned to the corresponding pixel points in the first region.
In addition, the absolute value of the difference between the depth of a pixel point in the second region and the depth of the corresponding pixel point in the dense depth map is greater than or equal to the second preset threshold, which indicates that the depths of the pixel points in the second region differ considerably from the depths of the corresponding region in the dense depth map; that is, there are regions where the sparse depth map and the dense depth map disagree strongly.
Specifically, in the embodiment of the present application, when the depth of the sparse depth map is adjusted according to the dense depth map, the pixels of the sparse depth map whose depth equals -1 may be counted; if a region of depth -1 is larger than a certain size, the region may be considered sparse texture or a dim-light area, and the scene is considered to have missing sparse depth. The pixels at that region's positions are labeled 1; that is, the pixel points labeled 1 constitute the first region.
In addition, the depths of the pixel points at corresponding positions in the sparse depth map and the dense depth map can be subtracted and the absolute value taken; when this absolute value is greater than or equal to the second preset threshold, the pixel point is labeled 2, that is, the pixel points labeled 2 constitute the second region.
Further, for the pixel points labeled 1: the depth of each pixel labeled 1 in the sparse depth map is replaced by the depth of the pixel point at the corresponding position in the dense depth map; that is, the corresponding depths of the dense depth map are filled into the region labeled 1 of the sparse depth map.
For the pixel points labeled 2: the depth of each pixel labeled 2 in the sparse depth map is fused with the depth of the pixel point at the corresponding position in the dense depth map by linear weighting, for example d_fusion_k = d1_k * a + d2_k * (1 - a), where d1_k represents the depth of the k-th pixel point labeled 2 in the sparse depth map, d2_k represents the depth of the k-th pixel point labeled 2 in the dense depth map, a is a predetermined constant between 0 and 1, and d_fusion_k represents the fused depth of the k-th pixel point labeled 2 in the sparse depth map.
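The two labeled cases above can be sketched together. This is a minimal numpy illustration under the assumption that the labels 1 and 2 are stored in a mask array; the names `fuse_depths`, `label`, and `a` are hypothetical.

```python
import numpy as np

def fuse_depths(sparse, dense, label, a=0.7, preset=-1.0):
    """Fusion sketch: pixels labeled 1 (missing sparse depth) take the
    dense depth directly; pixels labeled 2 (large depth disagreement)
    take the linear blend d_fusion = d_sparse * a + d_dense * (1 - a);
    all other pixels keep their sparse depth."""
    out = sparse.copy()
    out[label == 1] = dense[label == 1]          # fill missing regions
    m2 = label == 2
    out[m2] = sparse[m2] * a + dense[m2] * (1.0 - a)  # blend disagreements
    return out
```

With a closer to 1 the fused depth trusts the dual-camera estimate more; with a closer to 0 it trusts the ToF measurement more.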
Optionally, before the obtaining the first region and the second region in the sparse depth map, the adjusting the depth of the pixel point in the sparse depth map according to the dense depth map further includes:
determining a first connected region of the sparse depth map and a second connected region of the dense depth map;
when i takes each integer from 1 to N, acquiring the region with the largest area among the overlap regions of the ith first connected region and each first target region, and determining it as the ith candidate region, wherein N is the number of first connected regions, and a first target region is the region in the sparse depth map corresponding to the position of a second connected region;
obtaining a first average value of depths of pixel points in an ith candidate region in the sparse depth map and a second average value of depths of pixel points in an ith second target region in the dense depth map, wherein the ith second target region is a region corresponding to the ith candidate region in the dense depth map;
calculating the ratio of the first average value to the second average value, and determining the ratio as an ith target parameter;
dividing the depth of the pixel points in the ith candidate region in the sparse depth map by the ith target parameter to obtain the adjusted depth of the pixel points in the ith candidate region in the sparse depth map;
and adjusting the depth of the pixel points outside the candidate region in the sparse depth map by adopting an interpolation method according to the adjusted depth of the pixel points in each candidate region in the sparse depth map.
For example, suppose three connected regions L1, L2, and L3 exist in the sparse depth map, and three connected regions L4, L5, and L6 exist in the dense depth map. In the embodiment of the present application, the region in the sparse depth map corresponding to the position of L4 is determined, together with its overlap regions with L1, L2, and L3; if there are two overlap regions C1 and C2, the one with the largest area (i.e., the largest number of pixels) is selected as the first candidate region. Similarly, the region in the sparse depth map corresponding to the position of L5 is determined, together with its overlap regions with L1, L2, and L3; if there is a single overlap region C3, it is the second candidate region. Likewise, the region in the sparse depth map corresponding to the position of L6 is determined, together with its overlap regions with L1, L2, and L3; if there are two overlap regions C4 and C5, the one with the largest area is selected as the third candidate region.
After the first to third candidate regions are obtained, for each candidate region, a first average value of the depths of the pixel points in that candidate region in the sparse depth map and a second average value of the depths of the pixel points in the corresponding region in the dense depth map are calculated, and the ratio of the first average value to the second average value is taken as the target parameter.
For example, the obtained target parameters are S1, S2, S3, where S1 corresponds to the first candidate region, S2 corresponds to the second candidate region, and S3 corresponds to the third candidate region. Dividing the depth of each pixel point belonging to the first candidate region in the sparse depth map by S1; dividing the depth of each pixel point belonging to the second candidate region in the sparse depth map by S2; and for each pixel point in the third candidate region in the sparse depth map, dividing the depth of each pixel point by S3, and completing the process of adjusting the depth of the pixel point in the candidate region in the sparse depth map according to the target parameter. Thus, a depth adjustment relationship table, i.e., a correspondence relationship table of the depth before adjustment and the depth after adjustment, can be obtained.
Then, the depth of the pixel points outside the candidate regions in the sparse depth map can be adjusted by an interpolation method according to the correspondence table. For example, if the depth of a certain pixel point outside the candidate regions is 1.2 meters before adjustment, and the table contains a correspondence between 1 meter (before adjustment) and 0.8 meters (after adjustment) and a correspondence between 2 meters (before adjustment) and 1.5 meters (after adjustment), then since 1.2 meters lies between 1 meter and 2 meters, a value between 0.8 meters and 1.5 meters (for example, 0.94 meters by linear interpolation) can be selected as the adjusted depth of the pixel point.
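The depth pull-up and the interpolation-based adjustment described above can be sketched as follows. This is an illustrative sketch: the candidate masks and target parameters are assumed to have already been computed as described, and plain linear interpolation is used between correspondence-table entries:

```python
import numpy as np

def adjust_sparse_depth(sparse_depth, candidate_masks, target_params):
    """Divide each candidate region by its target parameter, then adjust the
    remaining pixels by interpolating in the before/after correspondence
    table built from the candidate regions."""
    adjusted = sparse_depth.astype(float).copy()
    before, after = [], []
    for mask, s in zip(candidate_masks, target_params):
        adjusted[mask] = sparse_depth[mask] / s          # pull up the depth
        before.extend(sparse_depth[mask].ravel())        # depth before adjustment
        after.extend(adjusted[mask].ravel())             # depth after adjustment
    order = np.argsort(before)                           # np.interp needs sorted x
    xs = np.asarray(before)[order]
    ys = np.asarray(after)[order]
    outside = ~np.logical_or.reduce(candidate_masks)     # pixels in no candidate
    adjusted[outside] = np.interp(sparse_depth[outside], xs, ys)
    return adjusted
```

With the numbers from the example above (1 m → 0.8 m and 2 m → 1.5 m), a 1.2 m pixel outside the candidate regions is adjusted to 0.94 m.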
Optionally, after the depth of the pixel point in the sparse depth map is adjusted according to the dense depth map to obtain a target depth map, the method further includes:
obtaining a second connected region of the dense depth map;
when j takes each integer from 1 to L, calculating the average value of pixel points of a jth second connected region and the average value of the depth of the pixel points of a jth third target region, wherein the jth third target region is a region corresponding to the jth second connected region in the sparse depth map;
calculating the average value of the pixel points of the jth second connected region and the absolute value of the difference value of the average value of the pixel points of the jth third target region to serve as a first parameter of the jth third target region;
acquiring a third target area with the first parameter being greater than or equal to a third preset threshold value to serve as a fourth target area;
and filtering pixel points in the target depth map corresponding to the pixel point positions in the fourth target area.
For example, if three connected regions L4, L5, and L6 exist in the dense depth map, the regions corresponding to the positions of L4, L5, and L6 in the sparse depth map, for example L7, L8, and L9, need to be determined; then the absolute value of the difference between the average depth of the pixel points in L4 and the average depth of the pixel points in L7 is calculated as the first parameter of L7. Similarly, the first parameters of L8 and L9 can be obtained. Then, the regions whose first parameter is greater than or equal to the third preset threshold can be selected, and the pixel points in the regions at the corresponding positions in the target depth map are filtered out.
Since the first parameter of the obtained fourth target region is greater than or equal to the third preset threshold, the depth of the pixel points in such regions of the sparse depth map is unreliable, so these pixel points need to be filtered out. In this way, the accuracy of the finally obtained target depth map is higher, and the blurring effect can be further improved.
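The error-depth filtering described above can be sketched as follows; the function name and the marker value used for invalid pixels are illustrative:

```python
import numpy as np

def filter_wrong_depth(target_depth, sparse_depth, dense_depth,
                       dense_region_masks, thresh, invalid=0.0):
    """For each connected region of the dense map, compare the mean depth in
    the dense map with the mean depth at the same positions in the sparse
    map; if the absolute difference reaches `thresh`, mark those pixels of
    the target depth map as invalid."""
    out = target_depth.copy()
    for mask in dense_region_masks:
        diff = abs(dense_depth[mask].mean() - sparse_depth[mask].mean())
        if diff >= thresh:            # first parameter >= third preset threshold
            out[mask] = invalid
    return out
```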
Optionally, before performing blurring processing on the target image according to the target depth map, the method further includes:
and carrying out depth filtering processing on the target depth map.
Wherein the target depth map may be depth filtered in conjunction with RGB values of images captured by the dual cameras (i.e., the first camera and the second camera). For example, if a human image is taken, the segmentation map of the human image can be used as a constraint of dense growth in depth filtering.
In addition, the depth filtering may be, for example, a guided edge preserving filtering. The guided edge-preserving filtering refers to a special filter which can effectively preserve edge information in an image in a filtering process.
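As one possible realization of a guided edge-preserving filter, a minimal guided-filter sketch is shown below. This follows the well-known guided-filter formulation and is not necessarily the exact filter used in the embodiment; a naive box filter is used for clarity:

```python
import numpy as np

def box(a, r):
    """(2r+1) x (2r+1) mean filter with edge padding (naive but clear)."""
    p = np.pad(a, r, mode='edge')
    out = np.zeros_like(a, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def guided_filter(guide, src, r=2, eps=1e-4):
    """Smooth `src` while preserving edges present in `guide`.
    Locally fits src ~ a * guide + b and averages the coefficients."""
    mI, mp = box(guide, r), box(src, r)
    var_I = box(guide * guide, r) - mI * mI     # local variance of the guide
    cov_Ip = box(guide * src, r) - mI * mp      # local covariance guide/src
    a = cov_Ip / (var_I + eps)                  # eps controls edge preservation
    b = mp - a * mI
    return box(a, r) * guide + box(b, r)
```

When filtering the target depth map, the RGB image (or a portrait segmentation map) would serve as the guide, so depth edges follow image edges.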
Optionally, the blurring processing on the target image according to the target depth map includes:
performing depth segmentation on the target depth map to obtain a depth range of a clear region;
determining a foreground area and a background area according to the depth range of the clear area;
respectively calculating the virtual radius of each pixel point of the foreground region and the virtual radius of each pixel point of the background region according to the target depth map to obtain a virtual radius map;
and performing blurring processing on the target image according to the blurring radius map.
The position of the focus point on the image may be determined first, then the depth value at the focus point position in the target depth map is obtained, and the depth range of the clear region is obtained by looking up the focus depth value table. The range may be represented as [near, far], where near represents the lower limit of the depth range of the clear region and far represents the upper limit. The focus depth value table is a correspondence table of depth-of-field ranges at different distances, obtained by simulating a single-lens reflex aperture.
In addition, in the target depth map, a region formed by pixel points with the depth smaller than near belongs to a foreground region; and the region formed by the pixel points with the depth greater than far belongs to the background region.
In addition, the blurring radius of a pixel point in the foreground region is (near − d) × max_bokeh_r / foreground_length, and the blurring radius of a pixel point in the background region is (d − far) × max_bokeh_r / background_length; where d represents the depth of the pixel point in the target depth map, max_bokeh_r represents the maximum blurring radius corresponding to the selected aperture (e.g., F2.0), foreground_length represents the foreground blurring distance, foreground_length = near − mindist, background_length represents the background blurring distance, background_length = maxdist − far, mindist represents the minimum depth in the target depth map, and maxdist represents the maximum depth in the target depth map.
Therefore, in the embodiment of the present application, the blurring radius is determined by the depth d of the pixel point, the depth range [near, far] of the clear region, the aperture size, the foreground blurring distance, and the background blurring distance. The blurring radius is the radius of the circle over which a pixel is spread in the blurred image: the smaller the blurring radius, the lighter the blurring degree; the larger the blurring radius, the heavier the blurring degree.
In addition, a circular filter may be used to blur the target image according to the aforementioned blurring radius map.
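Under the assumption that the foreground radius is normalized by foreground_length = near − mindist and the background radius by background_length = maxdist − far, the blurring-radius map can be sketched as:

```python
import numpy as np

def bokeh_radius_map(depth, near, far, max_bokeh_r):
    """Per-pixel blurring radius; pixels inside [near, far] stay sharp
    (radius 0). Variable names mirror the formulas in the text."""
    mindist, maxdist = depth.min(), depth.max()
    foreground_length = near - mindist
    background_length = maxdist - far
    r = np.zeros_like(depth, dtype=float)
    fg = depth < near                     # foreground region
    bg = depth > far                      # background region
    if foreground_length > 0:
        r[fg] = (near - depth[fg]) * max_bokeh_r / foreground_length
    if background_length > 0:
        r[bg] = (depth[bg] - far) * max_bokeh_r / background_length
    return r
```

The resulting map would then drive a circular (disc) filter whose per-pixel kernel radius equals the computed blurring radius.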
Optionally, before obtaining the dense depth map of the image of the shooting scene, the method further includes:
and converting the image coordinate system of the time-of-flight sensor into the image coordinate system of the first camera or the second camera, wherein the coordinate systems of the first image and the second image are the same.
The tof sensor is registered to an image coordinate system of the first camera or the second camera so as to obtain a dense depth map, and the depth of pixel points of the sparse depth map is adjusted according to the dense depth map.
As described above, a specific implementation of the image processing method according to the embodiment of the present application can be shown in fig. 4. The image processing method is applied to an electronic device provided with a first camera, a second camera and a tof sensor located in the same plane, and includes steps 401 to 413 described below.
Step 401: adjusting the imaging planes of the first camera and the second camera to be in the same plane;
step 402: respectively acquiring images of the same shooting scene captured by a first camera, a second camera and a tof sensor at the same time;
step 403: determining sparse depth maps of the first camera and the second camera according to a first image captured by the first camera and a second image captured by the second camera, wherein the depth of a pixel point with the reliability lower than a first preset threshold value in the sparse depth maps is a first preset value, and in addition, please refer to the foregoing description for the process of specifically obtaining the sparse depth maps, which is not repeated herein;
step 404: converting the image coordinate system of the tof sensor into the image coordinate system of the first camera or the second camera, wherein the coordinate systems of the first image and the second image are the same;
step 405: obtaining a dense depth map of an image of a photographic scene captured by a tof sensor;
step 406: performing overlapping calculation on the sparse depth map and the dense depth map, namely determining a first connected region of the sparse depth map and a second connected region of the dense depth map, so that when i is each integer from 1 to N, a region with the largest area in the overlapping region of the ith first connected region and each first target region is obtained and determined as an ith candidate region, N is the number of the first connected regions, and the first target region is a region corresponding to the position of the second connected region in the dense depth map;
step 407: performing lifting proportion calculation, namely when i is each integer from 1 to N, acquiring a first average value of depths of pixel points in an ith candidate region in the sparse depth map and a second average value of depths of pixel points in an ith second target region in the dense depth map, so as to calculate a ratio of the first average value to the second average value, and determining the ratio as an ith target parameter, wherein the ith second target region is a region corresponding to the ith candidate region in the dense depth map;
step 408: pulling up the depth of the sparse depth map, namely when i is each integer from 1 to N, respectively dividing the depth of the pixel points in the ith candidate region in the sparse depth map by the ith target parameter to obtain the adjusted depth of the pixel points in the ith candidate region in the sparse depth map, and adjusting the depth of the pixel points out of the candidate region in the sparse depth map according to the adjusted depth of the pixel points in each candidate region in the sparse depth map by adopting an interpolation method;
step 409: determining a region to be fused in a sparse depth map, namely acquiring a first region and a second region in the sparse depth map, wherein the depth of a pixel point in the first region is a first preset value, the absolute value of the difference between the depth of the pixel point in the second region and the depth of a third pixel point is greater than or equal to a second preset threshold value, and the third pixel point is a pixel point corresponding to the position of the pixel point in the second region in the dense depth map;
step 410: the method comprises the steps of fusing a sparse depth map and a dense depth map aiming at a region to be fused to obtain a target depth map, namely setting the depth of a pixel point of a first region in the sparse depth map as the depth of a fourth pixel point, wherein the fourth pixel point is the pixel point corresponding to the pixel point position of the first region in the dense depth map; when k is an integer from 1 to M, calculating the depth of a kth pixel point in the second area, and obtaining the depth of a kth fifth pixel point by linear weighted sum of the depth of the kth pixel point and the depth of a kth third pixel point, wherein the fifth pixel point is a pixel point in the target depth map corresponding to the position of the pixel point in the second area, and M represents the total number of the pixel points in the second area;
step 411: filtering the error depth in the target depth map, namely when j is each integer from 1 to L, calculating the average value of pixel points of a jth second connected region and the average value of the depth of the pixel points of a jth third target region, so as to calculate the average value of the pixel points of the jth second connected region and the absolute value of the difference value of the average value of the pixel points of the jth third target region, wherein the absolute value is used as a first parameter of the jth third target region, and further the third target region of which the first parameter is greater than or equal to a third preset threshold is obtained and is used as a fourth target region, and filtering the pixel points corresponding to the pixel point positions in the fourth target region in the target depth map, wherein the jth third target region is a region corresponding to the jth second connected region position in the sparse depth map;
step 412: carrying out depth filtering processing on the target depth map;
step 413: blurring the first image or the second image according to the filtered target depth map, namely performing depth segmentation on the target depth map to obtain a depth range of a clear region, determining a foreground region and a background region according to the depth range of the clear region, respectively calculating a blurring radius of each pixel point of the foreground region and a blurring radius of each pixel point of the background region according to the target depth map to obtain a blurring radius map, and further blurring the first image or the second image according to the blurring radius map. For a specific method for calculating the virtual radius, please refer to the above description, and further details are omitted here.
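Steps 409 and 410 above (determining the regions to be fused and fusing the sparse and dense depth maps) can be sketched as follows; the fusion weight and the difference threshold are illustrative values, not taken from the patent:

```python
import numpy as np

def fuse_depth(sparse, dense, invalid=0.0, diff_thresh=0.5, w=0.5):
    """Take the dense depth where the sparse depth is invalid (first region),
    and blend the two linearly where they strongly disagree (second region);
    elsewhere the sparse depth is kept."""
    target = sparse.astype(float).copy()
    first = sparse == invalid                            # unreliable sparse pixels
    target[first] = dense[first]
    second = (~first) & (np.abs(sparse - dense) >= diff_thresh)
    target[second] = w * sparse[second] + (1 - w) * dense[second]
    return target
```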
Therefore, according to the embodiment of the present application, the sparse depth map of the two cameras (namely the first camera and the second camera) can be adjusted according to the dense depth map of the image of the shooting scene captured by the time-of-flight sensor, so that a more accurate depth calculation can be obtained. This alleviates the weakness of dual cameras in sparse-texture, repeated-texture, dim-light and occluded regions, solves the problem of blurring leakage or false blurring, and improves the blurring effect.
The embodiment of the present application may also be used to implement different blurring strategies. For example, when a person is far away from the camera and the person and the background are not clearly distinguished, the blurring method of the embodiment of the present application may be adopted to blur the background region with a weaker blurring strength.
In addition, it should be noted that, in this document, a certain region in one image corresponds to a position in another image, that is, a certain region in one image is mapped to a region in another image.
In the image processing method provided by the embodiment of the present application, the execution subject may be an image processing apparatus. The image processing apparatus provided in the embodiment of the present application is described by taking the image processing apparatus executing the image processing method as an example.
An embodiment of the present application further provides an image processing apparatus, and as shown in fig. 5, the image processing apparatus may include the following modules:
an image obtaining module 501, configured to obtain a first image and a second image, where the first image and the second image are two images of a same shooting scene captured by a first camera and a second camera of an electronic device;
a sparse depth map obtaining module 502, configured to determine sparse depth maps of the first camera and the second camera according to the first image and the second image;
a dense depth map acquisition module 503, configured to acquire a dense depth map of the image of the shooting scene;
a depth adjusting module 504, configured to adjust depths of pixel points in the sparse depth map according to the dense depth map to obtain a target depth map;
a blurring module 505, configured to perform blurring processing on a target image according to the target depth map, where the target image is one of the first image and the second image.
Optionally, the apparatus further comprises:
and the imaging plane adjusting module is used for adjusting the imaging planes of the first camera and the second camera to be in the same plane.
Optionally, the dense depth map obtaining module 503 is specifically configured to:
and densifying the image of the shooting scene captured by the time-of-flight sensor by using a fast bilateral solver algorithm to obtain the dense depth map.
Optionally, the sparse depth map obtaining module 502 includes:
a reference image obtaining submodule, configured to obtain a disparity map of the first camera and the second camera and a confidence map of the disparity map according to the first image and the second image;
and the sparse depth map acquisition sub-module is used for acquiring the sparse depth map according to the disparity map and the confidence map of the disparity map.
Optionally, the sparse depth map obtaining sub-module is specifically configured to:
according to the confidence map, acquiring a first pixel point with the confidence degree smaller than a first preset threshold value in the disparity map, and setting the depth of the first pixel point as a first preset value;
determining the depth of a second pixel point according to a binocular triangulation principle and the parallax of the second pixel point, wherein the second pixel point is a pixel point except the first pixel point in the parallax map;
and obtaining the sparse depth map according to the depth of the first pixel point and the depth of the second pixel point.
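The behavior of this sub-module can be sketched with the binocular triangulation relation depth = focal × baseline / disparity; the parameter names and the confidence threshold below are illustrative:

```python
import numpy as np

def sparse_depth_from_disparity(disparity, confidence, focal_px, baseline_m,
                                conf_thresh=0.5, invalid=0.0):
    """Triangulate depth for confident pixels (second pixel points); pixels
    whose confidence is below the threshold (first pixel points) are set to
    the preset invalid value."""
    depth = np.full(disparity.shape, invalid, dtype=float)
    ok = (confidence >= conf_thresh) & (disparity > 0)   # avoid divide-by-zero
    depth[ok] = focal_px * baseline_m / disparity[ok]
    return depth
```

With a 100 px focal length and a 0.1 m baseline, a disparity of 10 px yields a depth of 1 m, matching the usual stereo geometry.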
Optionally, the depth adjusting module 504 includes:
a first region obtaining submodule, configured to obtain a first region and a second region in the sparse depth map, where a depth of a pixel in the first region is a first preset value, an absolute value of a difference between the depth of the pixel in the second region and a depth of a third pixel is greater than or equal to a second preset threshold, and the third pixel is a pixel in the dense depth map corresponding to a pixel in the second region;
a first adjusting submodule, configured to set a depth of a pixel point in the first region in the sparse depth map to a depth of a fourth pixel point, where the fourth pixel point is a pixel point in the dense depth map corresponding to a pixel point position in the first region;
and the second adjusting submodule is used for calculating the depth of a kth pixel point in the second area and obtaining the depth of a kth fifth pixel point by linear weighted sum of the depth of the kth pixel point and the depth of the kth third pixel point when k is each integer from 1 to M, wherein the fifth pixel point is a pixel point corresponding to the position of the pixel point in the second area in the target depth map, and M represents the total number of the pixel points in the second area.
Optionally, the depth adjusting module 504 further includes:
a second region acquisition submodule for determining a first connected region of the sparse depth map and a second connected region of the dense depth map;
a third region obtaining sub-module, configured to, when i is each integer from 1 to N, obtain a region with a largest area in an overlapping region between an ith first connected region and each first target region, and determine the region as an ith candidate region, where N is the number of the first connected regions, and the first target region is a region in the dense depth map corresponding to the second connected region;
the first calculation submodule is used for obtaining a first average value of depths of pixel points in an ith candidate region in the sparse depth map and a second average value of depths of pixel points in an ith second target region in the dense depth map, wherein the ith second target region is a region corresponding to the ith candidate region in the dense depth map;
the second calculation submodule is used for calculating the ratio of the first average value to the second average value and determining the ratio as the ith target parameter;
the third adjusting submodule is used for dividing the depth of the pixel points in the ith candidate region in the sparse depth map by the ith target parameter to obtain the adjusted depth of the pixel points in the ith candidate region in the sparse depth map;
and the fourth adjusting submodule is used for adjusting the depth of the pixel points outside the candidate region in the sparse depth map by adopting an interpolation method according to the adjusted depth of the pixel points in each candidate region in the sparse depth map.
Optionally, the apparatus further comprises a first filtering module configured to:
obtaining a second connected region of the dense depth map;
when j takes each integer from 1 to L, calculating the average value of pixel points of a jth second connected region and the average value of the depth of the pixel points of a jth third target region, wherein the jth third target region is a region corresponding to the jth second connected region in the sparse depth map;
calculating the average value of the pixel points of the jth second connected region and the absolute value of the difference value of the average value of the pixel points of the jth third target region to serve as a first parameter of the jth third target region;
acquiring a third target area with the first parameter being greater than or equal to a third preset threshold value to serve as a fourth target area;
and filtering pixel points in the target depth map corresponding to the pixel point positions in the fourth target area.
Optionally, the apparatus further comprises:
and the second filtering module is used for carrying out depth filtering processing on the target depth map.
Optionally, the blurring processing module 505 is specifically configured to:
performing depth segmentation on the target depth map to obtain a depth range of a clear region;
determining a foreground area and a background area according to the depth range of the clear area;
respectively calculating the virtual radius of each pixel point of the foreground region and the virtual radius of each pixel point of the background region according to the target depth map to obtain a virtual radius map;
and performing blurring processing on the target image according to the blurring radius map.
Optionally, the apparatus further comprises:
and the coordinate conversion module is used for converting the image coordinate system of the time-of-flight sensor into the image coordinate system of the first camera or the second camera, wherein the coordinate systems of the first image and the second image are the same.
As can be seen from the above description, in the embodiment of the present application, a first image and a second image of the same shooting scene captured by a first camera and a second camera of an electronic device can be obtained, sparse depth maps of the first camera and the second camera can be determined according to the first image and the second image, and a dense depth map of an image of the shooting scene can also be obtained, so that depths of pixel points in the sparse depth map are adjusted according to the dense depth map, a target depth map is obtained, and then one of the first image and the second image is subjected to blurring processing according to the target depth map.
Therefore, in the embodiment of the application, the sparse depth map of the two cameras (namely the first camera and the second camera) can be adjusted according to the dense depth map of the image of the shooting scene, so that more accurate depth calculation can be obtained, the problem that the two cameras are weak in processing in sparse texture, repeated texture, dim light and sheltered areas can be solved, the problem of blurring leakage or blurring error can be solved, and the blurring effect is improved.
The image processing apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, and the like; the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 6, an electronic device 600 is further provided in an embodiment of the present application, and includes a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and executable on the processor 601, where the program or the instruction is executed by the processor 601 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, such that the functions of managing charging, discharging, and power consumption may be performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
In the embodiment of the present application, the sensor 705 includes a first camera, a second camera, and a time-of-flight sensor.
Additionally, processor 710 is configured to perform the following process:
acquiring a first image and a second image, wherein the first image and the second image are two images of the same shooting scene captured by a first camera and a second camera of the electronic equipment;
determining sparse depth maps of the first camera and the second camera according to the first image and the second image;
obtaining a dense depth map of the image of the shooting scene;
according to the dense depth map, adjusting the depth of pixel points in the sparse depth map to obtain a target depth map;
and performing blurring processing on a target image according to the target depth map, wherein the target image is one of the first image and the second image.
As can be seen from the above description, in the embodiment of the present application, a first image and a second image of the same shooting scene captured by a first camera and a second camera of an electronic device can be obtained, sparse depth maps of the first camera and the second camera can be determined according to the first image and the second image, and a dense depth map of an image of the shooting scene can also be obtained, so that depths of pixel points in the sparse depth map are adjusted according to the dense depth map, a target depth map is obtained, and then one of the first image and the second image is subjected to blurring processing according to the target depth map.
Therefore, in the embodiment of the application, the sparse depth map of the two cameras (namely the first camera and the second camera) can be adjusted according to the dense depth map of the image of the shooting scene, so that more accurate depth calculation can be obtained, the problem that the two cameras are weak in processing in sparse texture, repeated texture, dim light and sheltered areas can be solved, the problem of blurring leakage or blurring error can be solved, and the blurring effect is improved.
Optionally, before acquiring the first image and the second image, the processor 710 is further configured to:
and adjusting the imaging planes of the first camera and the second camera to be in the same plane.
Optionally, when the processor 710 obtains the dense depth map of the image of the shooting scene, it is specifically configured to:
and densifying the image of the shooting scene captured by the time-of-flight sensor by using a fast bilateral solver algorithm to obtain the dense depth map.
Optionally, when determining the sparse depth maps of the first camera and the second camera according to the first image and the second image, the processor 710 is specifically configured to:
acquiring a disparity map of the first camera and the second camera and a confidence map of the disparity map according to the first image and the second image;
and acquiring the sparse depth map according to the disparity map and the confidence map of the disparity map.
Optionally, when the processor 710 acquires the sparse depth map according to the disparity map and the confidence map of the disparity map, the processor is specifically configured to:
according to the confidence map, acquiring a first pixel point with the confidence degree smaller than a first preset threshold value in the disparity map, and setting the depth of the first pixel point as a first preset value;
determining the depth of a second pixel point according to a binocular triangulation principle and the parallax of the second pixel point, wherein the second pixel point is a pixel point except the first pixel point in the parallax map;
and obtaining the sparse depth map according to the depth of the first pixel point and the depth of the second pixel point.
Optionally, when the processor 710 adjusts the depth of the pixel point in the sparse depth map according to the dense depth map to obtain the target depth map, the processor is specifically configured to:
acquiring a first region and a second region in the sparse depth map, wherein the depth of a pixel point in the first region is a first preset value, the absolute value of the difference between the depth of the pixel point in the second region and the depth of a third pixel point is greater than or equal to a second preset threshold value, and the third pixel point is a pixel point corresponding to the pixel point position in the second region in the dense depth map;
setting the depth of a pixel point of the first region in the sparse depth map as the depth of a fourth pixel point, wherein the fourth pixel point is a pixel point corresponding to the position of the pixel point of the first region in the dense depth map;
when k takes each integer from 1 to M, calculating the linear weighted sum of the depth of the k-th pixel point in the second area and the depth of the k-th third pixel point to obtain the depth of the k-th fifth pixel point, wherein the fifth pixel point is the pixel point in the target depth map corresponding to the pixel point position in the second area, and M represents the total number of the pixel points in the second area.
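A minimal sketch of this adjustment step, assuming 0 as the first preset value, an illustrative difference threshold, and equal weights in the linear combination (the application does not fix the weights):

```python
import numpy as np

def fuse_sparse_with_dense(sparse, dense, invalid_value=0.0,
                           diff_threshold=1.0, w_sparse=0.5):
    """First region: invalid sparse depths are replaced by the dense depth.
    Second region: sparse depths disagreeing with the dense map by >= the
    threshold are replaced by a linear weighted sum of the two depths.
    Threshold and weights are assumptions for this sketch."""
    sparse = np.asarray(sparse, dtype=np.float64)
    dense = np.asarray(dense, dtype=np.float64)
    target = sparse.copy()
    first = sparse == invalid_value                       # first region
    target[first] = dense[first]
    second = ~first & (np.abs(sparse - dense) >= diff_threshold)
    target[second] = w_sparse * sparse[second] + (1 - w_sparse) * dense[second]
    return target
```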
Optionally, when the processor 710 adjusts the depth of the pixel point in the sparse depth map according to the dense depth map, the processor is further configured to:
determining a first connected region of the sparse depth map and a second connected region of the dense depth map;
when i takes each integer from 1 to N, acquiring the region with the largest area in the overlapping region of the i-th first connected region and each first target region, and determining it as the i-th candidate region, wherein N is the number of the first connected regions, and the first target region is a region corresponding to the position of the second connected region in the dense depth map;
obtaining a first average value of depths of pixel points in an ith candidate region in the sparse depth map and a second average value of depths of pixel points in an ith second target region in the dense depth map, wherein the ith second target region is a region corresponding to the ith candidate region in the dense depth map;
calculating the ratio of the first average value to the second average value, and determining the ratio as an ith target parameter;
dividing the depth of the pixel points in the ith candidate region in the sparse depth map by the ith target parameter to obtain the adjusted depth of the pixel points in the ith candidate region in the sparse depth map;
and adjusting the depth of the pixel points outside the candidate region in the sparse depth map by adopting an interpolation method according to the adjusted depth of the pixel points in each candidate region in the sparse depth map.
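This connected-region scale correction can be sketched as below. For simplicity, the sketch treats each connected region of valid sparse depths directly as a candidate region (the application's largest-overlap selection against the dense map's regions, and the final interpolation step, are omitted), so all names and the 4-connectivity choice are illustrative.

```python
import numpy as np
from collections import deque

def label_regions(valid):
    """4-connected component labelling of a boolean mask via BFS flood fill."""
    labels = np.zeros(valid.shape, dtype=int)
    count = 0
    for sy, sx in zip(*np.nonzero(valid)):
        if labels[sy, sx]:
            continue
        count += 1
        labels[sy, sx] = count
        q = deque([(sy, sx)])
        while q:
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < valid.shape[0] and 0 <= nx < valid.shape[1]
                        and valid[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    q.append((ny, nx))
    return labels, count

def rescale_candidate_regions(sparse, dense, labels, n):
    """Per region: the 'target parameter' is the ratio of the sparse mean depth
    to the dense mean depth over the same pixels; dividing by it pulls the
    sparse depths onto the ToF scale."""
    out = np.asarray(sparse, dtype=np.float64).copy()
    for i in range(1, n + 1):
        m = labels == i
        ratio = sparse[m].mean() / dense[m].mean()
        out[m] = sparse[m] / ratio
    return out
```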
Optionally, after the processor 710 adjusts the depth of the pixel point in the sparse depth map according to the dense depth map to obtain a target depth map, the processor is further configured to:
obtaining a second connected region of the dense depth map;
when j takes each integer from 1 to L, calculating the average value of the pixel points of the j-th second connected region and the average value of the depths of the pixel points of the j-th third target region, wherein the j-th third target region is the region corresponding to the j-th second connected region in the sparse depth map;
calculating the average value of the pixel points of the jth second connected region and the absolute value of the difference value of the average value of the pixel points of the jth third target region to serve as a first parameter of the jth third target region;
acquiring a third target area with the first parameter being greater than or equal to a third preset threshold value to serve as a fourth target area;
and filtering pixel points in the target depth map corresponding to the pixel point positions in the fourth target area.
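A hedged sketch of this consistency filter, assuming the second connected regions have already been extracted as boolean masks and that "filtering" a pixel means marking its depth invalid; the threshold and invalid marker are assumptions for this example.

```python
import numpy as np

def filter_inconsistent_regions(target, sparse, dense, region_masks,
                                mean_diff_threshold=2.0, invalid_value=0.0):
    """For each dense-map connected region (given as a precomputed boolean
    mask), compare the mean dense depth with the mean sparse depth over the
    same pixel positions; if they disagree by >= the threshold, invalidate
    those pixels in the target depth map."""
    out = np.asarray(target, dtype=np.float64).copy()
    for m in region_masks:
        if abs(dense[m].mean() - sparse[m].mean()) >= mean_diff_threshold:
            out[m] = invalid_value  # 'filtering' modelled as marking invalid
    return out
```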
Optionally, before performing blurring processing on the target image according to the target depth map, the processor is further configured to:
and carrying out depth filtering processing on the target depth map.
Optionally, when performing blurring processing on the target image according to the target depth map, the processor 710 is specifically configured to:
performing depth segmentation on the target depth map to obtain a depth range of a clear region;
determining a foreground area and a background area according to the depth range of the clear area;
respectively calculating the virtual radius of each pixel point of the foreground region and the virtual radius of each pixel point of the background region according to the target depth map to obtain a virtual radius map;
and performing blurring processing on the target image according to the blurring radius map.
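The blur-radius computation can be illustrated as follows. The application computes separate radii for the foreground and background regions; this sketch collapses both into a single formula based on the pixel's depth distance to the clear range, with an assumed gain and radius cap.

```python
import numpy as np

def blur_radius_map(depth, clear_min, clear_max, gain=1.5, max_radius=10.0):
    """Radius 0 inside the clear depth range; outside it, radius grows with the
    depth distance to the range (foreground below it, background above it).
    Gain and cap are illustrative parameters."""
    depth = np.asarray(depth, dtype=np.float64)
    below = np.clip(clear_min - depth, 0, None)   # foreground distance
    above = np.clip(depth - clear_max, 0, None)   # background distance
    return np.clip(gain * (below + above), 0, max_radius)
```

The resulting map would then drive a spatially varying blur (e.g. a per-pixel disc kernel), leaving the clear region untouched.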
Optionally, before acquiring the dense depth map of the image of the shooting scene, the processor 710 is further configured to:
and converting the image coordinate system of the time-of-flight sensor into the image coordinate system of the first camera or the second camera, wherein the coordinate systems of the first image and the second image are the same.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes at least one of a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a first storage area for storing a program or an instruction and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or an instruction required by at least one function (such as a sound playing function, an image playing function, and the like), and so on. Further, the memory 709 may include volatile memory or non-volatile memory, or the memory 709 may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a SyncLink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 709 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 710 may include one or more processing units; optionally, the processor 710 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-level chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing embodiments of the image processing method, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image processing method, characterized in that the method comprises:
acquiring a first image and a second image, wherein the first image and the second image are two images of the same shooting scene captured by a first camera and a second camera of an electronic device;
determining sparse depth maps of the first camera and the second camera according to the first image and the second image;
acquiring a dense depth map of the image of the shooting scene;
adjusting the depth of the pixel points in the sparse depth map according to the dense depth map to obtain a target depth map;
performing blurring processing on a target image according to the target depth map, wherein the target image is one of the first image and the second image.
2. The method according to claim 1, characterized in that the determining the sparse depth maps of the first camera and the second camera according to the first image and the second image comprises:
acquiring a disparity map of the first camera and the second camera and a confidence map of the disparity map according to the first image and the second image;
acquiring the sparse depth map according to the disparity map and the confidence map of the disparity map.
3. The method according to claim 2, characterized in that the acquiring the sparse depth map according to the disparity map and the confidence map of the disparity map comprises:
acquiring, according to the confidence map, first pixel points whose confidence is smaller than a first preset threshold in the disparity map, and setting the depth of the first pixel points to a first preset value;
determining the depth of second pixel points according to the binocular triangulation principle and the disparity of the second pixel points, wherein the second pixel points are the pixel points in the disparity map other than the first pixel points;
obtaining the sparse depth map according to the depth of the first pixel points and the depth of the second pixel points.
4. The method according to any one of claims 1 to 3, characterized in that the adjusting the depth of the pixel points in the sparse depth map according to the dense depth map to obtain the target depth map comprises:
acquiring a first region and a second region in the sparse depth map, wherein the depth of the pixel points in the first region is the first preset value, the absolute value of the difference between the depth of the pixel points in the second region and the depth of third pixel points is greater than or equal to a second preset threshold, and the third pixel points are the pixel points in the dense depth map corresponding to the pixel point positions in the second region;
setting the depth of the pixel points of the first region in the sparse depth map to the depth of fourth pixel points, wherein the fourth pixel points are the pixel points in the dense depth map corresponding to the pixel point positions of the first region;
when k takes each integer from 1 to M, calculating the linear weighted sum of the depth of the k-th pixel point in the second region and the depth of the k-th third pixel point to obtain the depth of the k-th fifth pixel point, wherein the fifth pixel points are the pixel points in the target depth map corresponding to the pixel point positions in the second region, and M represents the total number of the pixel points in the second region.
5. The method according to claim 4, characterized in that before the acquiring the first region and the second region in the sparse depth map, the adjusting the depth of the pixel points in the sparse depth map according to the dense depth map further comprises:
determining first connected regions of the sparse depth map and second connected regions of the dense depth map;
when i takes each integer from 1 to N, acquiring the region with the largest area among the overlapping regions of the i-th first connected region and each first target region, and determining it as the i-th candidate region, wherein N is the number of the first connected regions, and the first target regions are the regions corresponding to the positions of the second connected regions in the dense depth map;
acquiring a first average value of the depths of the pixel points in the i-th candidate region in the sparse depth map, and a second average value of the depths of the pixel points in the i-th second target region in the dense depth map, wherein the i-th second target region is the region corresponding to the position of the i-th candidate region in the dense depth map;
calculating the ratio of the first average value to the second average value, and determining it as the i-th target parameter;
dividing the depths of the pixel points in the i-th candidate region in the sparse depth map by the i-th target parameter, respectively, to obtain the adjusted depths of the pixel points in the i-th candidate region in the sparse depth map;
adjusting, by interpolation, the depths of the pixel points outside the candidate regions in the sparse depth map according to the adjusted depths of the pixel points in each candidate region in the sparse depth map.
6. The method according to any one of claims 1 to 3, characterized in that after the adjusting the depth of the pixel points in the sparse depth map according to the dense depth map to obtain the target depth map, the method further comprises:
acquiring the second connected regions of the dense depth map;
when j takes each integer from 1 to L, calculating the average value of the pixel points of the j-th second connected region and the average value of the depths of the pixel points of the j-th third target region, wherein the j-th third target region is the region corresponding to the position of the j-th second connected region in the sparse depth map;
calculating the absolute value of the difference between the average value of the pixel points of the j-th second connected region and the average value of the pixel points of the j-th third target region, as the first parameter of the j-th third target region;
acquiring the third target regions whose first parameter is greater than or equal to a third preset threshold, as fourth target regions;
filtering the pixel points in the target depth map corresponding to the pixel point positions in the fourth target regions.
7. An image processing apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to acquire a first image and a second image, wherein the first image and the second image are two images of the same shooting scene captured by a first camera and a second camera of an electronic device;
a sparse depth map acquisition module, configured to determine sparse depth maps of the first camera and the second camera according to the first image and the second image;
a dense depth map acquisition module, configured to acquire a dense depth map of the image of the shooting scene;
a depth adjustment module, configured to adjust the depth of the pixel points in the sparse depth map according to the dense depth map to obtain a target depth map;
a blurring processing module, configured to perform blurring processing on a target image according to the target depth map, wherein the target image is one of the first image and the second image.
8. The apparatus according to claim 7, characterized in that the sparse depth map acquisition module comprises:
a reference map acquisition submodule, configured to acquire a disparity map of the first camera and the second camera and a confidence map of the disparity map according to the first image and the second image;
a sparse depth map acquisition submodule, configured to acquire the sparse depth map according to the disparity map and the confidence map of the disparity map.
9. The apparatus according to claim 8, characterized in that the sparse depth map acquisition submodule is specifically configured to:
acquire, according to the confidence map, first pixel points whose confidence is smaller than a first preset threshold in the disparity map, and set the depth of the first pixel points to a first preset value;
determine the depth of second pixel points according to the binocular triangulation principle and the disparity of the second pixel points, wherein the second pixel points are the pixel points in the disparity map other than the first pixel points;
obtain the sparse depth map according to the depth of the first pixel points and the depth of the second pixel points.
10. The apparatus according to any one of claims 7 to 9, characterized in that the depth adjustment module comprises:
a first region acquisition submodule, configured to acquire a first region and a second region in the sparse depth map, wherein the depth of the pixel points in the first region is the first preset value, the absolute value of the difference between the depth of the pixel points in the second region and the depth of third pixel points is greater than or equal to a second preset threshold, and the third pixel points are the pixel points in the dense depth map corresponding to the pixel point positions in the second region;
a first adjustment submodule, configured to set the depth of the pixel points of the first region in the sparse depth map to the depth of fourth pixel points, wherein the fourth pixel points are the pixel points in the dense depth map corresponding to the pixel point positions of the first region;
a second adjustment submodule, configured to, when k takes each integer from 1 to M, calculate the linear weighted sum of the depth of the k-th pixel point in the second region and the depth of the k-th third pixel point to obtain the depth of the k-th fifth pixel point, wherein the fifth pixel points are the pixel points in the target depth map corresponding to the pixel point positions in the second region, and M represents the total number of the pixel points in the second region.
11. The apparatus according to claim 10, characterized in that the depth adjustment module further comprises:
a second region acquisition submodule, configured to determine first connected regions of the sparse depth map and second connected regions of the dense depth map;
a third region acquisition submodule, configured to, when i takes each integer from 1 to N, acquire the region with the largest area among the overlapping regions of the i-th first connected region and each first target region, and determine it as the i-th candidate region, wherein N is the number of the first connected regions, and the first target regions are the regions corresponding to the positions of the second connected regions in the dense depth map;
a first calculation submodule, configured to acquire a first average value of the depths of the pixel points in the i-th candidate region in the sparse depth map, and a second average value of the depths of the pixel points in the i-th second target region in the dense depth map, wherein the i-th second target region is the region corresponding to the position of the i-th candidate region in the dense depth map;
a second calculation submodule, configured to calculate the ratio of the first average value to the second average value, and determine it as the i-th target parameter;
a third adjustment submodule, configured to divide the depths of the pixel points in the i-th candidate region in the sparse depth map by the i-th target parameter, respectively, to obtain the adjusted depths of the pixel points in the i-th candidate region in the sparse depth map;
a fourth adjustment submodule, configured to adjust, by interpolation, the depths of the pixel points outside the candidate regions in the sparse depth map according to the adjusted depths of the pixel points in each candidate region in the sparse depth map.
12. The apparatus according to any one of claims 7 to 9, characterized in that the apparatus further comprises a first filtering module, configured to:
acquire the second connected regions of the dense depth map;
when j takes each integer from 1 to L, calculate the average value of the pixel points of the j-th second connected region and the average value of the depths of the pixel points of the j-th third target region, wherein the j-th third target region is the region corresponding to the position of the j-th second connected region in the sparse depth map;
calculate the absolute value of the difference between the average value of the pixel points of the j-th second connected region and the average value of the pixel points of the j-th third target region, as the first parameter of the j-th third target region;
acquire the third target regions whose first parameter is greater than or equal to a third preset threshold, as fourth target regions;
filter the pixel points in the target depth map corresponding to the pixel point positions in the fourth target regions.
CN202111438638.XA 2021-11-29 2021-11-29 Image processing method and device Pending CN114119701A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111438638.XA CN114119701A (en) 2021-11-29 2021-11-29 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111438638.XA CN114119701A (en) 2021-11-29 2021-11-29 Image processing method and device

Publications (1)

Publication Number Publication Date
CN114119701A true CN114119701A (en) 2022-03-01

Family

ID=80367957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111438638.XA Pending CN114119701A (en) 2021-11-29 2021-11-29 Image processing method and device

Country Status (1)

Country Link
CN (1) CN114119701A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114283195A (en) * 2022-03-03 2022-04-05 荣耀终端有限公司 Method for generating dynamic image, electronic device and readable storage medium
CN115049711A (en) * 2022-06-29 2022-09-13 维沃移动通信有限公司 Image registration method, device, electronic equipment and medium
US20250054167A1 (en) * 2023-08-10 2025-02-13 Gopro, Inc. Methods and apparatus for augmenting dense depth maps using sparse data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415285A (en) * 2019-08-02 2019-11-05 厦门美图之家科技有限公司 Image processing method, device and electronic equipment
CN112911091A (en) * 2021-03-23 2021-06-04 维沃移动通信(杭州)有限公司 Parameter adjusting method and device of multipoint laser and electronic equipment
CN112927281A (en) * 2021-04-06 2021-06-08 Oppo广东移动通信有限公司 Depth detection method, depth detection device, storage medium, and electronic apparatus
WO2021120120A1 (en) * 2019-12-19 2021-06-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electric device, method of controlling electric device, and computer readable storage medium
CN113034568A (en) * 2019-12-25 2021-06-25 杭州海康机器人技术有限公司 Machine vision depth estimation method, device and system
CN113301320A (en) * 2021-04-07 2021-08-24 维沃移动通信(杭州)有限公司 Image information processing method and device and electronic equipment


Similar Documents

Publication Publication Date Title
CN112311965B (en) Virtual shooting method, device, system and storage medium
CN113301320B (en) Image information processing method and device and electronic equipment
CN109474780B (en) Method and device for image processing
CN106899781B (en) Image processing method and electronic equipment
TWI738196B (en) Method and electronic device for image depth estimation and storage medium thereof
CN104867113B (en) The method and system of perspective image distortion correction
CN112207821B (en) Target searching method of visual robot and robot
CN106683071A (en) Image splicing method and image splicing device
CN105303514A (en) Image processing method and apparatus
CN114119701A (en) Image processing method and device
CN106952247B (en) Double-camera terminal and image processing method and system thereof
US12374045B2 (en) Efficient texture mapping of a 3-D mesh
CN112930677B (en) Method and electronic device for switching between first lens and second lens
CN107231524A (en) Image pickup method and device, computer installation and computer-readable recording medium
CN107633497A (en) A kind of image depth rendering intent, system and terminal
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN115035235A (en) Three-dimensional reconstruction method and device
CN115278084B (en) Image processing method, device, electronic equipment and storage medium
CN114640833A (en) Projection picture adjusting method and device, electronic equipment and storage medium
CN111524087B (en) Image processing method and device, storage medium and terminal
CN117294829A (en) Depth compensation method and device
CN114792332B (en) Image registration method and device
CN114255268A (en) Disparity map processing and deep learning model training method and related equipment
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium
CN117196955A (en) Panoramic image stitching method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination