CN107633497A - Image depth-of-field rendering method, system and terminal - Google Patents
Abstract
This application discloses an image depth-of-field rendering method, comprising: obtaining an original RGB image and a depth map corresponding to a target object; calculating the blur radius COC (circle of confusion) of the depth map; filtering the original RGB image with a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image; and performing alpha fusion of the rendered image and the original RGB image to obtain a fused image. Because the original RGB image is filtered with a convolution kernel matched to the blur radius COC computed from the depth map, the application solves the problem of color leakage during depth-of-field rendering. In addition, the application correspondingly discloses an image depth-of-field rendering system and a terminal.
Description
Technical Field
The invention relates to the technical field of computer vision and computer graphics, and in particular to an image depth-of-field rendering method, system and terminal.
Background
Depth of field (DOF) is an important concept in photography and optical imaging. It refers to the range of distances in front of a camera lens or other imaging device within which the captured scene or object appears acceptably sharp. In the final image, the scene inside the depth of field is sharp, while the regions in front of and behind it are blurred. However, when photographing with a mobile phone camera or a compact camera, the sensor and lens are severely constrained in size and weight by the portability requirements of the device. Consequently, the lens aperture in such devices is very small, the captured image has a large depth of field with both foreground and background in focus, the ability to adjust the depth-of-field range is very limited, and the shallow depth-of-field effect achievable with a single-lens reflex camera cannot be reproduced. In particular, the depth-of-field rendering suffers from color leakage, which greatly limits the quality and effect of images shot on smartphones. Since mobile devices keep trending lighter and thinner, it is difficult to obtain a large-aperture, shallow depth-of-field effect by improving the smartphone camera itself. The computing performance of smartphones, however, has continued to improve. People therefore hope to solve the color-leakage problem of the image depth-of-field effect by exploiting the smartphone's computing power together with image post-processing, which has become a research hotspot in the field.
Disclosure of Invention
In view of the above, the present invention provides an image depth-of-field rendering method, system and terminal, which can overcome the color-leakage problem of the image depth-of-field effect. The specific scheme is as follows:
an image depth-of-field rendering method, comprising:
acquiring an original RGB (red, green and blue) image and a depth image corresponding to a target object;
calculating a blur radius COC of the depth map;
filtering the original RGB image by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
and performing alpha fusion on the rendered image and the original RGB image to obtain a fused image.
Preferably, the process of acquiring the original RGB map and the depth map corresponding to the target object includes:
acquiring an image of the target object to obtain a first RGB image and a second RGB image;
and determining a corresponding depth map by using the first RGB map and the second RGB map.
Preferably, the process of acquiring the original RGB map and the depth map corresponding to the target object includes:
acquiring an image of the target object to obtain K RGB images, wherein K is an integer greater than or equal to 3;
and determining a corresponding depth map by using the K RGB maps.
Preferably, the process of filtering the original RGB image by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image includes:
down-sampling any one of the original RGB images to obtain a corresponding RGB thumbnail;
filtering the RGB thumbnail by using a convolution kernel corresponding to the blur radius COC of the RGB thumbnail to obtain a rendered thumbnail;
and up-sampling the rendered thumbnail to obtain the rendered image.
Preferably, the process of performing alpha fusion on the rendered image and the original RGB image to obtain a fused image includes:
performing alpha fusion on the rendered image and any one of the original RGB images to obtain the fused image.
Preferably, the process of calculating the blur radius COC of the depth map includes:
down-sampling the depth map to obtain a corresponding depth thumbnail;
and calculating the blur radius COC corresponding to the depth thumbnail.
Preferably, before the process of calculating the blur radius COC corresponding to the depth thumbnail, the method further includes:
performing corresponding filtering processing on the depth thumbnail by using a fast guided filter, so as to improve the precision of the depth thumbnail.
The invention also discloses an image depth-of-field rendering system, which comprises:
the image acquisition module is used for acquiring an original RGB (red, green and blue) image and a depth image corresponding to a target object;
the blur radius calculation module is used for calculating the blur radius COC of the depth map;
the RGB image filtering module is used for filtering the original RGB image by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
and the image fusion module is used for performing alpha fusion on the rendered image and the RGB image to obtain a fused image.
Preferably, the RGB map filtering module includes:
the RGB image down-sampling unit is used for down-sampling any one of the original RGB images to obtain a corresponding RGB thumbnail;
the RGB thumbnail filtering unit is used for filtering the RGB thumbnail by using a convolution kernel corresponding to the blur radius COC of the RGB thumbnail to obtain a rendered thumbnail;
and the rendered-thumbnail up-sampling unit is used for up-sampling the rendered thumbnail to obtain a corresponding rendered image.
The invention also discloses a terminal, which comprises an image collector, a processor and a memory; wherein the processor executes the following steps by calling the instructions stored in the memory:
acquiring an original RGB image corresponding to a target object through the image collector, and determining a depth map corresponding to the original RGB image;
calculating a blur radius COC of the depth map;
filtering the original RGB image by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
and performing alpha fusion on the rendered image and the original RGB image to obtain a fused image.
In the invention, the image depth-of-field rendering method includes the following steps: acquiring an original RGB image and a depth map corresponding to a target object; calculating the blur radius COC of the depth map; filtering the original RGB image by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image; and performing alpha fusion on the rendered image and the original RGB image to obtain a fused image. In the invention, the convolution kernel corresponding to the blur radius COC is obtained by calculating the blur radius COC of the depth map, and the original RGB image is then filtered with this convolution kernel.
Moreover, in the invention, filtering the depth map with a fast guided filter alleviates the problem of depth discontinuities in the depth map, and the rendered image and the original RGB image are alpha-fused to obtain the fused image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart illustrating an image depth-of-field rendering method according to an embodiment of the present invention;
fig. 2 is a flowchart of an image depth-of-field rendering method according to a second embodiment of the present invention;
fig. 3 is a flowchart of an image depth-of-field rendering method according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image depth-of-field rendering system according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a depth-of-field rendering method for an image, which is shown in figure 1 and comprises the following steps:
step S11: acquiring an original RGB (red, green and blue) image and a depth image corresponding to a target object;
in this embodiment, there are many ways to obtain the original RGB image and the depth map corresponding to the target object. For example, the original RGB image of the target object may be obtained through the camera function of a mobile phone, and the corresponding depth map may be computed from pictures of the object taken at different angles; of course, the depth map corresponding to the original RGB image may also be captured directly by a time-of-flight (TOF) camera or a Kinect camera, thereby obtaining the depth information in the pictures.
Step S12: calculating the blur radius COC of the depth map;

wherein the blur radius COC is calculated, following the thin-lens imaging model, as:

$$COC = \frac{D \cdot f \cdot \lvert U - U_f \rvert}{U \,(U_f - f)}$$

where COC is the blur radius of the depth map, U is the object distance when the RGB image is taken, U_f is the focusing distance, f is the focal length of the camera, and D is the diameter of the camera lens.
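As a concrete illustration, the following minimal Python sketch evaluates this formula per pixel over a metric depth map; the focus distance, focal length and lens-diameter values are illustrative assumptions, not values taken from this disclosure:

```python
import numpy as np

def compute_coc(depth_m, focus_m=1.5, focal_m=4.2e-3, lens_diameter_m=1.8e-3):
    """Per-pixel circle-of-confusion (blur) radius from a metric depth map."""
    u = np.maximum(depth_m, focal_m + 1e-6)  # object distance, clamped past f
    return lens_diameter_m * focal_m * np.abs(u - focus_m) / (u * (focus_m - focal_m))

# Pixels exactly at the focus distance get COC == 0, i.e. they stay sharp.
depth = np.array([[0.5, 1.5, 4.0]])  # metres
print(compute_coc(depth))
```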
Step S13: filtering the original RGB image by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
the convolution kernel corresponding to a pixel point p in the original RGB image is calculated as:

$$w_p = \frac{1}{k}\,\exp\!\left(-\frac{\lVert p - q \rVert^2}{2\sigma_s^2}\right)\exp\!\left(-\frac{\lVert I_p - I_q \rVert^2}{2\sigma_r^2}\right)$$

where $w_p$ represents the weight of pixel point p in the original RGB image centered on pixel point q, $k$ is a normalization factor obtained by normalizing over the blur radius COC, $\sigma_r$ and $\sigma_s$ are the standard deviations of the range (color) and spatial domains, $I_p(r,g,b)$ and $I_q(r,g,b)$ are the three-dimensional color values of pixel points p and q, and $p(x,y,z)$ and $q(x,y,z)$ are the three-dimensional position values of pixel points p and q.
The pixel value of pixel point p in the filtered image is then calculated as:

$$I(p) = I_R(p) * w_p$$

where $I_R(p)$ is the pixel value of pixel point p in the original RGB image and $w_p$ is the convolution-kernel weight.
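A direct (deliberately unoptimized) rendering of this filtering step might look like the sketch below. The disclosure does not publish the exact normalization or window sizing, so taking the spatial sigma equal to the local COC, clamping the window radius, and normalizing by the sum of weights are all assumptions made for illustration:

```python
import numpy as np

def coc_bilateral_filter(rgb, coc, sigma_r=25.0, max_radius=7):
    """Filter an HxWx3 image with a per-pixel, COC-driven bilateral kernel."""
    h, w, _ = rgb.shape
    out = np.zeros((h, w, 3), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            r = int(np.clip(coc[y, x], 1, max_radius))   # window radius from COC
            sigma_s = max(float(coc[y, x]), 1e-3)        # spatial sigma from COC
            y0, y1 = max(y - r, 0), min(y + r + 1, h)
            x0, x1 = max(x - r, 0), min(x + r + 1, w)
            patch = rgb[y0:y1, x0:x1].astype(np.float64)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = ((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma_s ** 2)
            rng = ((patch - rgb[y, x]) ** 2).sum(axis=2) / (2.0 * sigma_r ** 2)
            wgt = np.exp(-spatial - rng)                 # bilateral weights
            out[y, x] = (patch * wgt[..., None]).sum((0, 1)) / wgt.sum()
    return out
```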
It can be understood that calculating the blur radius COC of the depth map and the convolution kernel corresponding to that blur radius, and then filtering the original RGB image to obtain the corresponding rendered image, solves the color-leakage problem of depth-of-field images in the prior art.
Step S14: performing alpha fusion on the rendered image and the original RGB image to obtain a fused image.
The alpha fusion is calculated by the following formula:

$$I_{dst} = \alpha \cdot I_{blurred} + (1 - \alpha) \cdot I_{original}$$

where $I_{blurred}$ is the above rendered image, $I_{original}$ is the original RGB image, and the weight $\alpha$ is obtained directly from the blur radius, i.e. $\alpha = COC$ after normalization.
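A minimal sketch of this fusion step follows; normalizing the COC map by its maximum to obtain a weight in [0, 1] is an assumption, since the disclosure does not specify the normalization scheme:

```python
import numpy as np

def alpha_fuse(rendered, original, coc):
    """I_dst = alpha * I_blurred + (1 - alpha) * I_original, with alpha from COC."""
    alpha = (coc / max(float(coc.max()), 1e-6))[..., None]  # per-pixel weight in [0, 1]
    fused = alpha * rendered.astype(np.float64) + (1.0 - alpha) * original.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```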
It can be understood that, in this embodiment, the rendered image and the original RGB image are alpha-fused to obtain the fused image. Of course, if an original RGB image with better imaging quality is desired, a common filtering method may first be applied to remove noise from the original RGB image. In this way, a rendered image with a better imaging effect for the target object can be obtained, and the above steps ensure that the region of the target object inside the depth of field is sharp while the image region outside the depth of field is blurred, achieving the desired depth-of-field rendering effect.
Therefore, in the invention, the original RGB image and the depth map corresponding to the target object are obtained first; the blur radius COC of the depth map is calculated; the original RGB image is filtered with the convolution kernel corresponding to the blur radius COC to obtain the corresponding rendered image; and the rendered image and the original RGB image are alpha-fused to obtain the fused image. It can be understood that the rendered image is obtained by computing the convolution kernel corresponding to the blur radius COC and then filtering the pixel values of the original RGB image with that kernel. With this calculation, every pixel of the rendered image can be traced back to its corresponding pixel in the original RGB image, and this one-to-one mapping relationship solves the color-leakage problem of the image depth-of-field effect in the prior art.
The second embodiment of the present invention discloses a specific image depth-of-field rendering method. Compared with the first embodiment, this embodiment further describes and optimizes the technical solution; as shown in fig. 2, the method includes:
step S21: and acquiring an image of the target object to obtain a first RGB image and a second RGB image, and determining a corresponding depth map by using the first RGB image and the second RGB image.
In this embodiment, an image acquisition device may be used to obtain the original RGB images of the target object; for example, a binocular camera device captures the target object to obtain a first RGB image and a second RGB image. It should be understood that "first" and "second" only indicate that the two images are taken from different positions: when shooting the target object, the two camera units of the binocular device may be placed on its left and right sides, or at other positions relative to the target object, which is not limited herein.
The binocular camera device may be installed in equipment such as handheld smart devices, unmanned aerial vehicles, robots, computers and smart TVs.
It should be noted that, in this embodiment, the depth map corresponding to the RGB images is computed from the spatial position relationship between the first RGB image and the second RGB image; the calculation methods include, but are not limited to, stereo vision and structure from motion (recovering the three-dimensional scene structure from motion information).
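For instance, a depth map can be estimated from the two views with OpenCV's block-matching stereo; this sketch is only one possible realization, and the calibration values (focal length in pixels, baseline) are placeholders:

```python
import cv2
import numpy as np

left = cv2.imread("first_rgb.png", cv2.IMREAD_GRAYSCALE)    # first RGB image
right = cv2.imread("second_rgb.png", cv2.IMREAD_GRAYSCALE)  # second RGB image

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

focal_px, baseline_m = 700.0, 0.12                          # placeholder calibration
depth = focal_px * baseline_m / np.maximum(disparity, 0.1)  # depth = f * B / d
```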
Of course, the depth map corresponding to the RGB image may also be obtained without such computation; for example, it may be captured by cameras that record depth directly, such as time-of-flight (TOF) cameras and Kinect cameras, in which case the above calculation process can be omitted.
Step S22: the blur radius COC of the depth map is calculated.
Specifically, in order to reduce the amount of calculation, in this embodiment the process of calculating the blur radius COC of the depth map may include:
down-sampling the depth map to obtain a corresponding depth thumbnail, and then calculating the blur radius COC corresponding to the depth thumbnail.
Specifically, in this embodiment, the depth map is reduced by down-sampling to obtain the depth thumbnail. This reduces the number of pixels to be processed, increases processing speed and reduces memory consumption. Down-sampling methods include, but are not limited to, Gaussian-pyramid down-sampling and direct decimation of the depth map.
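For example, both down-sampling variants mentioned above can be expressed in a few lines of OpenCV; the two-level pyramid (quarter resolution) is an arbitrary illustrative choice:

```python
import cv2

depth_map = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)

# Gaussian-pyramid route: each pyrDown call low-pass filters, then halves the image.
depth_small = cv2.pyrDown(cv2.pyrDown(depth_map))

# Direct decimation route, without the pyramid's anti-alias filtering.
depth_small_direct = cv2.resize(depth_map, None, fx=0.25, fy=0.25,
                                interpolation=cv2.INTER_NEAREST)
```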
The blur radius COC of the depth thumbnail is calculated as:

$$COC = \frac{D \cdot f \cdot \lvert U - U_f \rvert}{U \,(U_f - f)}$$

where U is the object distance when the RGB image is taken, U_f is the focusing distance, f is the focal length of the camera, and D is the diameter of the camera lens.
Furthermore, in order to improve the accuracy of the depth thumbnail, a fast guided filter may be used to filter it. The guided filter is defined by the local linear model

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k$$

where $i$ is the pixel index, $k$ denotes the index of the local window $\omega_k$ of radius $r$, $I_i$ is the value of the $i$th pixel of the depth thumbnail, $q_i$ is its filtered output value, and $a_k$ and $b_k$ are the linear filter parameters of the $k$th window, given by

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \epsilon}, \qquad b_k = (1 - a_k)\,\mu_k$$

where $\mu_k$ and $\sigma_k^2$ are the mean and variance of the pixel values in the $k$th window and $\epsilon$ is a regularization parameter controlling smoothness. The output pixel value is

$$q_i = \bar{a}_i I_i + \bar{b}_i$$

where $\bar{a}_i$ and $\bar{b}_i$ are the averages of $a_k$ and $b_k$ over all windows that contain pixel $i$.
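The equations above translate almost line-for-line into code. The sketch below implements the self-guided case with box filters for the window means; the window radius and epsilon are illustrative, and the subsampling that makes the filter "fast" is omitted for clarity:

```python
import cv2
import numpy as np

def guided_filter_self(depth, r=8, eps=1e-3):
    """Edge-preserving smoothing of a depth map guided by itself."""
    box = lambda img: cv2.blur(img, (2 * r + 1, 2 * r + 1))  # mean over each window
    I = depth.astype(np.float64)
    mu = box(I)                   # mu_k
    var = box(I * I) - mu * mu    # sigma_k^2
    a = var / (var + eps)         # a_k
    b = (1.0 - a) * mu            # b_k
    return box(a) * I + box(b)    # q_i = a_bar_i * I_i + b_bar_i
```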
It can be understood that, in this embodiment, the depth thumbnail is filtered with the fast guided filter so as to improve its accuracy, and this technique alleviates the depth-discontinuity problem of depth maps in the prior art. Of course, the order of the operation steps in the overall image processing may be adjusted according to the final purpose to be achieved in actual operation, and is not limited herein.
Step S23: down-sampling any one of the original RGB images to obtain a corresponding RGB thumbnail.
Specifically, in this embodiment, the original RGB image is reduced by down-sampling to obtain the corresponding RGB thumbnail. It can be understood that, in order to match the preceding operations and facilitate subsequent processing, the down-sampling method may be the same as that described in step S22, and is not repeated here.
Step S24: filtering the RGB thumbnail by using a convolution kernel corresponding to the blur radius COC of the RGB thumbnail to obtain a rendered thumbnail.
The convolution kernel corresponding to a pixel point p in the RGB thumbnail is calculated as:

$$w_p = \frac{1}{k}\,\exp\!\left(-\frac{\lVert p - q \rVert^2}{2\sigma_s^2}\right)\exp\!\left(-\frac{\lVert I_p - I_q \rVert^2}{2\sigma_r^2}\right)$$

where $w_p$ represents the weight of pixel point p in the RGB thumbnail centered on pixel point q, $k$ is a normalization factor obtained by normalizing over the blur radius COC, $\sigma_r$ and $\sigma_s$ are the standard deviations of the range (color) and spatial domains, $I_p(r,g,b)$ and $I_q(r,g,b)$ are the three-dimensional color values of pixel points p and q, and $p(x,y,z)$ and $q(x,y,z)$ are their three-dimensional position values.

The pixel value of pixel point p in the rendered thumbnail is then calculated as:

$$I(p) = I_R(p) * w_p$$

where $I_R(p)$ is the pixel value of pixel point p in the RGB thumbnail and $w_p$ is the convolution-kernel weight.
It can be understood that the corresponding rendered thumbnail is obtained by computing the convolution kernel corresponding to the blur radius COC and then filtering the pixel values of the RGB thumbnail with that kernel; every pixel of the rendered thumbnail can thus be traced back to its corresponding pixel in the original image, and this one-to-one mapping relationship solves the color-leakage problem of depth-of-field rendered images in the prior art.
Step S25: up-sampling the rendered thumbnail to obtain the rendered image.
Specifically, after the rendered thumbnail is obtained, it must be enlarged back to the size of the original RGB image so that it can be fused with that image to produce a depth-of-field rendering of good imaging quality; the rendered thumbnail is therefore up-sampled to obtain the rendered image. Of course, other methods may be used to enlarge the rendered thumbnail, and the enlargement method is not limited herein.
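A one-line OpenCV resize suffices for this step; bilinear interpolation is a reasonable default here rather than something mandated by the disclosure:

```python
import cv2

rendered_small = cv2.imread("rendered_small.png")
orig_h, orig_w = 1080, 1920  # size of the original RGB image (example values)
rendered = cv2.resize(rendered_small, (orig_w, orig_h), interpolation=cv2.INTER_LINEAR)
```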
Step S26: performing alpha fusion on the rendered image and the original RGB image to obtain a fused image.
The alpha fusion is calculated by the following formula:

$$I_{dst} = \alpha \cdot I_{blurred} + (1 - \alpha) \cdot I_{original}$$

where $I_{blurred}$ is the up-sampled rendered image, $I_{original}$ is the original RGB image, and the weight $\alpha$ is obtained directly from the blur radius, i.e. $\alpha = COC$ after normalization.
It can be understood that fusing any one of the original RGB images with the rendered image to obtain the fused image ensures that the region inside the depth of field is sharp and the region outside it is blurred, so a rendered image with a better rendering effect can be obtained.
The third embodiment of the invention discloses an image depth-of-field rendering method, which further explains and optimizes the technical scheme relative to the previous embodiments. Specifically, referring to fig. 3, which shows the flow of an image depth-of-field rendering algorithm based on a multi-camera acquisition device, the method includes the following steps:
step S31: and acquiring images of the target object to obtain K RGB images, wherein K is an integer greater than or equal to 3, and determining the corresponding depth map by using the K RGB images.
It can be understood that, in this embodiment, a multi-camera device comprising K camera units is used to capture the target object and obtain K RGB images; acquiring K RGB images allows the depth maps to be cross-corrected against one another, so that the depth map corresponding to the RGB image to be depth-rendered contains more accurate depth information.
The multi-camera acquisition device may be installed at various directions and positions around the target object to be photographed, as required by the actual operation.
Step S32: the blur radius COC of the depth map is calculated.
Specifically, in this embodiment, the process of calculating the blur radius COC of the depth map may include:
down-sampling the depth map to obtain a corresponding depth thumbnail, and then calculating the blur radius COC corresponding to the depth thumbnail.
The blur radius COC of the depth thumbnail is calculated as:

$$COC = \frac{D \cdot f \cdot \lvert U - U_f \rvert}{U \,(U_f - f)}$$

where U is the object distance when the RGB image is taken, U_f is the focusing distance, f is the focal length of the camera, and D is the diameter of the camera lens.
It can be understood that adjusting the image size during processing improves the running efficiency of the program; the image is therefore reduced by down-sampling to cut the amount of calculation at run time.
Furthermore, in order to improve the accuracy of the depth thumbnail, a fast guided filter may be used to filter it accordingly.
It can be understood that, in this embodiment, filtering the depth thumbnail with the fast guided filter improves its accuracy and alleviates the depth-discontinuity problem of depth maps in the prior art. Of course, the order of the operation steps in the overall image processing may be adjusted according to the final purpose to be achieved in actual operation, which is not limited herein.
Step S33: down-sampling any one of the original RGB images to obtain a corresponding RGB thumbnail;
specifically, in this embodiment, the original RGB image is reduced by down-sampling to obtain the corresponding RGB thumbnail. As before, the down-sampling method may be the same as that described in step S22, and is not repeated here.
Step S34: filtering the RGB thumbnail by using a convolution kernel corresponding to the blur radius COC of the RGB thumbnail to obtain a rendered thumbnail;
it can be understood that the corresponding rendered thumbnail is obtained by computing the convolution kernel corresponding to the blur radius COC and then filtering the pixel values of the RGB thumbnail with that kernel, and this one-to-one mapping relationship solves the color-leakage problem of depth-of-field rendered images in the prior art.
Step S35: up-sampling the rendered thumbnail to obtain the rendered image.
It can be understood that, after the rendered thumbnail is obtained, it must be enlarged to the size of the original RGB image so that image fusion with that image yields a depth-of-field rendering of good imaging quality; the rendered thumbnail is therefore up-sampled to obtain the rendered image. Of course, other methods may be used to enlarge the rendered thumbnail, and the enlargement method is not limited herein.
Step S36: performing alpha fusion on the rendered image and the original RGB image to obtain a fused image.
It can be understood that fusing the RGB image to be depth-rendered (among the original RGB images) with the rendered image to obtain the fused image ensures that the region inside the depth of field is sharp and the region outside it is blurred, so a rendered image with a better rendering effect can be obtained.
Furthermore, in order to increase the running speed of the program, a GPGPU (general-purpose computing on graphics processing units) may be used to accelerate the convolution in parallel. It can be understood that the method provided by the embodiment of the invention not only ensures the robustness of image formation but also speeds up the rendering calculation. The method can also be widely applied to other mobile platforms, guaranteeing the depth-of-field rendering effect of the image and improving the user's photographing experience.
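The disclosure only states that the convolution may be accelerated on a GPGPU, without giving an implementation. One common way to parallelize a spatially varying blur is to compute a small bank of fixed-radius blurs on the GPU and gather per pixel according to the quantized COC; the sketch below takes that approach using PyTorch, and both this scheme and the box-kernel approximation are assumptions rather than the patent's method:

```python
import torch
import torch.nn.functional as F

def gpu_dof(rgb, coc, radii=(1, 3, 5, 7)):
    """rgb: HxWx3 float tensor; coc: HxW float tensor of blur radii in pixels."""
    dev = "cuda" if torch.cuda.is_available() else "cpu"
    img = rgb.to(dev).permute(2, 0, 1).unsqueeze(0)            # 1x3xHxW
    levels = []
    for r in radii:                                            # blur bank, one pass each
        k = torch.full((3, 1, 2 * r + 1, 2 * r + 1),
                       1.0 / (2 * r + 1) ** 2, device=dev)     # normalized box kernel
        levels.append(F.conv2d(img, k, padding=r, groups=3))   # depthwise blur
    stack = torch.cat(levels, dim=0)                           # Lx3xHxW
    bounds = torch.tensor(radii, dtype=coc.dtype, device=dev)
    idx = torch.bucketize(coc.to(dev), bounds).clamp(max=len(radii) - 1)
    idx4 = idx.long().view(1, 1, *idx.shape).expand(1, 3, -1, -1)
    return stack.gather(0, idx4).squeeze(0).permute(1, 2, 0)   # HxWx3, level per pixel
```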
Correspondingly, an embodiment of the present invention further discloses an image depth-of-field rendering system; as shown in fig. 4, the system includes:
an image obtaining module 41, configured to obtain an original RGB map and a depth map corresponding to a target object;
in this embodiment, there are many ways to obtain the original RGB image and the depth map corresponding to the target object. For example, the original RGB image of the target object may be obtained through the camera function of a mobile phone, and the corresponding depth map may be computed from pictures of the object taken at different angles; of course, the depth map may also be captured directly by a time-of-flight (TOF) camera or a Kinect camera, thereby obtaining the depth information in the pictures.
A blur radius calculation module 42, configured to calculate a blur radius COC of the depth map;
wherein the blur radius COC is calculated as:

$$COC = \frac{D \cdot f \cdot \lvert U - U_f \rvert}{U \,(U_f - f)}$$

where COC is the blur radius, U is the object distance when the RGB image is taken, U_f is the focusing distance, f is the focal length of the camera, and D is the diameter of the camera lens.
An RGB map filtering module 43, configured to perform filtering processing on the original RGB map by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
the convolution kernel corresponding to a pixel point p in the original RGB image is calculated as:

$$w_p = \frac{1}{k}\,\exp\!\left(-\frac{\lVert p - q \rVert^2}{2\sigma_s^2}\right)\exp\!\left(-\frac{\lVert I_p - I_q \rVert^2}{2\sigma_r^2}\right)$$

where $w_p$ represents the weight of pixel point p in the original RGB image centered on pixel point q, $k$ is a normalization factor obtained by normalizing over the blur radius COC, $\sigma_r$ and $\sigma_s$ are the standard deviations of the range (color) and spatial domains, $I_p(r,g,b)$ and $I_q(r,g,b)$ are the three-dimensional color values of pixel points p and q, and $p(x,y,z)$ and $q(x,y,z)$ are their three-dimensional position values.

The pixel value of a pixel point p in the filtered image is then calculated as:

$$I(p) = I_R(p) * w_p$$

where $I_R(p)$ is the pixel value of pixel point p in the original RGB image and $w_p$ is the convolution-kernel weight.
It can be understood that calculating the blur radius COC of the depth map and the convolution kernel corresponding to that blur radius, and then filtering the original RGB image to obtain the corresponding rendered image, solves the color-leakage problem of depth-of-field images in the prior art.
and the image fusion module 44 is configured to perform alpha fusion on the rendered image and the RGB image to obtain a fused image.
The alpha fusion is calculated by the following formula:

$$I_{dst} = \alpha \cdot I_{blurred} + (1 - \alpha) \cdot I_{original}$$

where $I_{blurred}$ is the rendered image, $I_{original}$ is the original RGB image, and the weight $\alpha$ is obtained directly from the blur radius, i.e. $\alpha = COC$ after normalization.
In this embodiment, any one of the original RGB images is fused with the rendered image to obtain the fused image, which ensures that the image is sharp in the region inside the depth of field and blurred outside it, thereby obtaining a rendered image with a better rendering effect.
Specifically, the image obtaining module 41 may include a first image acquiring unit and a first depth map determining unit; wherein,
the first image acquisition unit is used for acquiring an image of the target object to obtain a first RGB image and a second RGB image;
a first depth map determining unit, configured to determine a corresponding depth map by using the first RGB map and the second RGB map.
Further, in order to improve the accuracy of the depth information in the depth map, the image obtaining module 41 may include a second image acquisition unit and a second depth map determining unit; wherein,
the second image acquisition unit is used for acquiring images of the target object to obtain K RGB images, wherein K is an integer greater than or equal to 3;
and the second depth map determining unit is used for determining a corresponding depth map by using the K RGB maps.
It can be understood that, by acquiring multiple RGB images and exploiting the differences in their imaging positions relative to the target object, the depth information in a depth map computed from any two of the images can be corrected, making the depth information more accurate and improving the precision of the algorithm.
Specifically, the RGB image filtering module 43 includes an RGB image down-sampling unit, an RGB thumbnail filtering unit and a rendered-thumbnail up-sampling unit; wherein,
the RGB image down-sampling unit is used for down-sampling any one of the original RGB images to obtain a corresponding RGB thumbnail;
the RGB thumbnail filtering unit is used for filtering the RGB thumbnail by using a convolution kernel corresponding to the blur radius COC of the RGB thumbnail to obtain a rendered thumbnail;
and the rendered-thumbnail up-sampling unit is used for up-sampling the rendered thumbnail to obtain a corresponding rendered image.
Specifically, the image fusion module 44 includes an image fusion unit, wherein:
the image fusion unit is used for performing alpha fusion on the rendered image and any one of the original RGB images to obtain the fused image.
Further, the blur radius calculation module 42 includes a depth map down-sampling unit and a blur radius calculation unit, wherein:
the depth map down-sampling unit is configured to down-sample the depth map to obtain a corresponding depth thumbnail;
and the blur radius calculation unit is configured to calculate the blur radius COC corresponding to the depth thumbnail.
It can be understood that adjusting the image size during processing improves the running efficiency of the program; the picture is therefore reduced by down-sampling to cut the amount of calculation at run time.
Furthermore, the image depth-of-field rendering system provided in the embodiment of the present invention further includes a depth thumbnail filtering module 45, wherein:
the depth thumbnail filtering module 45 is configured to filter the depth thumbnail with a fast guided filter before the blur radius calculation module calculates the blur radius COC corresponding to the depth thumbnail, so as to improve the accuracy of the depth thumbnail.
The fast guided filter is defined by the local linear model

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k$$

where $i$ is the pixel index, $k$ denotes the index of the local window $\omega_k$ of radius $r$, $I_i$ is the value of the $i$th pixel of the depth thumbnail, $q_i$ is its filtered output value, and $a_k$ and $b_k$ are the linear filter parameters of the $k$th window, given by

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \epsilon}, \qquad b_k = (1 - a_k)\,\mu_k$$

where $\mu_k$ and $\sigma_k^2$ are the mean and variance of the pixel values in the $k$th window and $\epsilon$ is a regularization parameter controlling smoothness. The output pixel value is

$$q_i = \bar{a}_i I_i + \bar{b}_i$$

where $\bar{a}_i$ and $\bar{b}_i$ are the averages of $a_k$ and $b_k$ over all windows that contain pixel $i$.
It can be understood that, in this embodiment, the depth thumbnail is filtered with the fast guided filter so as to improve its accuracy, and this technique alleviates the depth-discontinuity problem of depth maps in the prior art. Of course, the order of the operation steps in the overall image processing may be adjusted according to the final purpose to be achieved in actual operation, and is not limited herein.
For more detailed working processes of the modules and the units, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not described here.
Correspondingly, the embodiment of the present invention further discloses a terminal, as shown in fig. 5, the terminal includes an image collector 51, a processor 52 and a memory 53; the processor 52 retrieves the instructions stored in the memory 53 to execute the following steps:
acquiring an original RGB image corresponding to a target object through the image collector 51, and determining a depth map corresponding to the original RGB image;
calculating the blur radius COC of the depth map;
filtering the original RGB image by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
and performing alpha fusion on the rendered image and the original RGB image to obtain a fused image.
It can be understood that the terminal in this embodiment includes, but is not limited to, a camera or a video camera, and the instructions in the memory 53 are not limited to the steps listed above; for a more detailed working process, reference may be made to the corresponding contents disclosed in the foregoing embodiments, which are not repeated here. Of course, in order to improve the operating efficiency of the processor 52, third-party software may also be called, which is not limited herein.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element preceded by "comprises a ..." does not preclude the existence of additional identical elements in the process, method, article or apparatus that comprises that element.
The image depth-of-field rendering method, system and terminal provided by the present invention have been described in detail above, and specific examples have been used herein to explain the principle and implementation of the invention; the description of the above embodiments is only intended to help understand the method of the invention and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the invention. In summary, the content of this specification should not be construed as limiting the invention.
Claims (10)
1. An image depth-of-field rendering method, comprising:
acquiring an original RGB (red, green and blue) image and a depth image corresponding to a target object;
calculating a blur radius COC of the depth map;
filtering the original RGB image by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
and performing alpha fusion on the rendered image and the original RGB image to obtain a fused image.
2. The method of claim 1, wherein the process of obtaining the original RGB map and the depth map corresponding to the target object comprises:
acquiring an image of the target object to obtain a first RGB image and a second RGB image;
and determining a corresponding depth map by using the first RGB map and the second RGB map.
3. The method of claim 1, wherein the process of obtaining the original RGB map and the depth map corresponding to the target object comprises:
acquiring an image of the target object to obtain K RGB images, wherein K is an integer greater than or equal to 3;
and determining a corresponding depth map by using the K RGB maps.
4. The method according to claim 1, wherein the step of filtering the original RGB image by using the convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image comprises:
down-sampling any one of the original RGB images to obtain a corresponding RGB thumbnail;
filtering the RGB thumbnail by using a convolution kernel corresponding to the blur radius COC of the RGB thumbnail to obtain a rendered thumbnail;
and up-sampling the rendered thumbnail to obtain the rendered image.
5. The method as claimed in claim 4, wherein said process of performing alpha fusion on said rendered image and said original RGB image to obtain a fused image comprises:
performing alpha fusion on the rendered image and any one of the original RGB images to obtain the fused image.
6. The method according to any one of claims 1 to 5, wherein the step of calculating the blur radius COC of the depth map comprises:
down-sampling the depth map to obtain a corresponding depth thumbnail;
and calculating the blur radius COC corresponding to the depth thumbnail.
7. The method according to claim 6, wherein before the step of calculating the blur radius COC corresponding to the depth thumbnail, the method further comprises:
performing corresponding filtering processing on the depth thumbnail by using a fast guided filter, so as to improve the precision of the depth thumbnail.
8. An image depth-of-field rendering system, comprising:
the image acquisition module is used for acquiring an original RGB (red, green and blue) image and a depth image corresponding to a target object;
the blur radius calculation module is used for calculating the blur radius COC of the depth map;
the RGB image filtering module is used for filtering the original RGB image by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
and the image fusion module is used for performing alpha fusion on the rendered image and the RGB image to obtain a fused image.
9. The image depth-of-field rendering system of claim 8, wherein the RGB image filtering module comprises:
the RGB image down-sampling unit is used for down-sampling any one of the original RGB images to obtain a corresponding RGB thumbnail;
the RGB thumbnail filtering unit is used for filtering the RGB thumbnail by using a convolution kernel corresponding to the blur radius COC of the RGB thumbnail to obtain a rendered thumbnail;
and the rendered-thumbnail up-sampling unit is used for up-sampling the rendered thumbnail to obtain a corresponding rendered image.
10. A terminal is characterized by comprising an image collector, a processor and a memory; wherein the processor performs the following steps by invoking instructions stored in the memory:
acquiring an original RGB image corresponding to a target object through the image collector, and determining a depth map corresponding to the original RGB image;
calculating a blur radius COC of the depth map;
filtering the original RGB image by using a convolution kernel corresponding to the blur radius COC to obtain a corresponding rendered image;
and performing alpha fusion on the rendered image and the original RGB image to obtain a fused image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710777729.3A CN107633497A (en) | 2017-08-31 | 2017-08-31 | Image depth-of-field rendering method, system and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710777729.3A CN107633497A (en) | 2017-08-31 | 2017-08-31 | Image depth-of-field rendering method, system and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107633497A (en) | 2018-01-26 |
Family
ID=61101784
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710777729.3A Pending CN107633497A (en) | Image depth-of-field rendering method, system and terminal | 2017-08-31 | 2017-08-31 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107633497A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102693527A (en) * | 2011-02-28 | 2012-09-26 | 索尼公司 | Method and apparatus for performing a blur rendering process on an image |
CN104169966A (en) * | 2012-03-05 | 2014-11-26 | 微软公司 | Generation of depth images based upon light falloff |
CN102968814A (en) * | 2012-11-22 | 2013-03-13 | 华为技术有限公司 | Image rendering method and equipment |
US20160286200A1 (en) * | 2015-03-25 | 2016-09-29 | Electronics And Telecommunications Research Institute | Method of increasing photographing speed of photographing device |
Non-Patent Citations (2)
Title |
---|
Cao Yanjue et al.: "Real-time depth-of-field simulation and application based on post-processing", Journal of Computer Applications (《计算机应用》) * |
Yang Zhen et al.: "Real-time depth-of-field rendering method based on fast guided filtering", Chinese Journal of Stereology and Image Analysis (《中国体视学与图像分析》) * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109146941A (en) * | 2018-06-04 | 2019-01-04 | 成都通甲优博科技有限责任公司 | A kind of depth image optimization method and system based on net region division |
WO2020038407A1 (en) * | 2018-08-21 | 2020-02-27 | 腾讯科技(深圳)有限公司 | Image rendering method and apparatus, image processing device, and storage medium |
US11295528B2 (en) | 2018-08-21 | 2022-04-05 | Tencent Technology (Shenzhen) Company Limited | Image rendering method and apparatus, image processing device, and storage medium |
CN110889410A (en) * | 2018-09-11 | 2020-03-17 | 苹果公司 | Robust use of semantic segmentation in shallow depth of field rendering |
CN110889410B (en) * | 2018-09-11 | 2023-10-03 | 苹果公司 | Robust use of semantic segmentation in shallow depth of view rendering |
CN109859136A (en) * | 2019-02-01 | 2019-06-07 | 浙江理工大学 | A method of Fuzzy Processing being carried out to image in depth of field rendering |
CN112686939A (en) * | 2021-01-06 | 2021-04-20 | 腾讯科技(深圳)有限公司 | Depth image rendering method, device and equipment and computer readable storage medium |
CN112686939B (en) * | 2021-01-06 | 2024-02-02 | 腾讯科技(深圳)有限公司 | Depth image rendering method, device, equipment and computer readable storage medium |
CN113763524A (en) * | 2021-09-18 | 2021-12-07 | 华中科技大学 | Dual-stream bokeh rendering method and system based on physical optics model and neural network |
CN115996323A (en) * | 2022-10-27 | 2023-04-21 | 广州光锥元信息科技有限公司 | Imaging method and device for simulating large aperture lens |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180126